WorldWideScience

Sample records for relative calculation error

  1. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, from the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide which estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian Roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for the calculation of each history. 1 figure, 1 table
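
    As context for the history-count trade-off described above, here is a minimal Python sketch (an illustration, not the paper's equations) of how a tally's relative statistical error R and the conventional figure of merit FOM = 1/(R^2*T) are estimated; the toy transmission probability and history count are invented for the example. A variance reduction technique pays off when it lowers R^2 faster than it raises the time T.

    import random
    import time

    def tally_relative_error(n_histories):
        """Score a toy transmission tally and return (mean, relative error)."""
        total, total_sq = 0.0, 0.0
        for _ in range(n_histories):
            # Hypothetical per-history score: survival through an absorber.
            score = 1.0 if random.random() < 0.05 else 0.0
            total += score
            total_sq += score * score
        mean = total / n_histories
        var_of_mean = (total_sq / n_histories - mean ** 2) / (n_histories - 1)
        return mean, (var_of_mean ** 0.5) / mean if mean > 0 else float("inf")

    start = time.perf_counter()
    mean, r = tally_relative_error(100_000)
    elapsed = time.perf_counter() - start
    fom = 1.0 / (r * r * elapsed)  # rises when variance reduction pays off
    print(f"mean={mean:.4f}  R={r:.3f}  FOM={fom:.1f}")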

  2. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2018-06-01

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) 23Na signal measurement inaccuracies, particularly for small objects. Noise and signal decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that the VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. Correlation suggests large underestimation of the 23Na signal in small lesions. Acquisition-specific measurement error calculation aids 23Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med 79:2968-2977, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'indirect estimation method for calculation error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones gives the accuracy of the calculated keff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated keff. (author)

  4. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    Science.gov (United States)

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerged and appeared to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  5. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variances of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we propose an equation that predicts the variance of the nuclide number densities after burn-up, and we verified this equation using a large number of Monte Carlo burn-up calculations in which only the initial random numbers were changed. We also examined the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t.
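
    To make the propagation effect concrete, here is a first-order sketch (my illustration, not the paper's equation) for a single nuclide depleted as N' = N*exp(-r*dt), where the Monte Carlo reaction rate r = sigma*phi carries a statistical standard deviation; correlations between burn-up steps are neglected.

    import math

    def burnup_step(N, var_N, r, var_r, dt):
        """Advance the number density one step, propagating variance to first order."""
        N_new = N * math.exp(-r * dt)
        dN_dN = math.exp(-r * dt)            # sensitivity to the prior density
        dN_dr = -N * dt * math.exp(-r * dt)  # sensitivity to the reaction rate
        return N_new, dN_dN ** 2 * var_N + dN_dr ** 2 * var_r

    N, var_N = 1.0e24, 0.0                   # exact initial density
    r, var_r = 1.0e-9, (0.02 * 1.0e-9) ** 2  # 2% statistical error on the rate
    for step in range(5):
        N, var_N = burnup_step(N, var_N, r, var_r, dt=3.0e6)
        print(f"step {step + 1}: N={N:.4e}  rel. err.={math.sqrt(var_N) / N:.2%}")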

  6. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) from fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing, a prerequisite of error classification in our paradigm, leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between the Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    Based on the methodological features of RIA, an internal quality control chart using the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error in terms of the normal reference range. The method is simple to perform and easy to interpret visually. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced
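
    For readers unfamiliar with the rule, a small sketch of Tonks' classic allowance formula as it is commonly stated (a quarter of the normal reference range, expressed as a percentage of the range midpoint and usually capped at 10%); the cap and the example range are assumptions of mine, not taken from the paper.

    def tonks_allowable_error(ref_low, ref_high, cap=10.0):
        """Return the Tonks allowable error in percent for a reference range."""
        midpoint = (ref_low + ref_high) / 2.0
        allowance = 0.25 * (ref_high - ref_low) / midpoint * 100.0
        return min(allowance, cap)

    # Example: a hypothetical T4 reference range of 60-160 nmol/L.
    print(f"{tonks_allowable_error(60.0, 160.0):.1f}%")  # -> 10.0% (capped)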

  8. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  9. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  10. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which the ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, calculating the error in the cross-sectional area requires ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to obtain the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
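
    A minimal flux-gate sketch under assumptions of my own (parabolic cross-section A = 2wh/3, a constant factor k relating surface to depth-averaged velocity, and independent errors combined in quadrature):

    import math

    def parabolic_discharge(w, h, v, rel_err_A, rel_err_v, k=0.9):
        """Discharge (m^3/s) through a parabolic gate and its relative error."""
        A = 2.0 * w * h / 3.0                          # parabolic cross-sectional area
        Q = k * A * v
        rel_err_Q = math.hypot(rel_err_A, rel_err_v)   # quadrature sum
        return Q, rel_err_Q

    # Hypothetical gate: 4 km wide, 300 m deep, 150 m/yr surface velocity.
    Q, e = parabolic_discharge(w=4000.0, h=300.0, v=150.0 / 3.15e7,
                               rel_err_A=0.20, rel_err_v=0.05)
    print(f"Q = {Q:.2f} m^3/s +/- {e:.0%}")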

  11. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  12. Approaches to reducing photon dose calculation errors near metal implants

    International Nuclear Information System (INIS)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F.; Liu, Xinming; Stingo, Francesco C.

    2016-01-01

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  13. Error rate of automated calculation for wound surface area using a digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its reliability has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography. Using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of various sized wounds (diameter: 0.5-3.5 cm) were taken from a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound size and type of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). In spite of the correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (8.1303±4.8236 vs 3.9431±2.9772). However, for wounds smaller than 3 cm in diameter, the REs of the values averaged over four photographs were below 5%. In addition, there was no difference in the average wound area measured by the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter: <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their averaged values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is important, but quantifying a method's percentage error is even more important if decision makers are to adopt the right approach. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method yielded a percentage error of 9.77%, and it was concluded that the least squares method is suitable for time series and trend data.
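
    The workflow the abstract describes can be reproduced in a few lines; the sketch below (with made-up demand data, not the paper's dataset) fits a least-squares trend line and scores it with MAD and MAPE.

    def least_squares_fit(y):
        """Fit y = a + b*t for t = 0..n-1 and return the fitted values."""
        n = len(y)
        t = list(range(n))
        t_mean, y_mean = sum(t) / n, sum(y) / n
        b = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
             / sum((ti - t_mean) ** 2 for ti in t))
        a = y_mean - b * t_mean
        return [a + b * ti for ti in t]

    def mad(actual, forecast):
        return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

    def mape(actual, forecast):
        return 100.0 * sum(abs((a - f) / a)
                           for a, f in zip(actual, forecast)) / len(actual)

    sales = [112, 118, 132, 129, 141, 135, 148, 158]  # hypothetical demand data
    fit = least_squares_fit(sales)
    print(f"MAD  = {mad(sales, fit):.2f}")
    print(f"MAPE = {mape(sales, fit):.2f}%")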

  15. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between the physical properties of the flow and numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.

  16. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the outermost region being |x| ≥ 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
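
    The cautionary note in item 5 is easy to demonstrate: once erf(x) rounds to 1.0 in double precision, the identity erfc(x) = 1 - erf(x) returns 0, while a direct erfc evaluation keeps full relative accuracy (Python's math module is used here in place of the original routines).

    import math

    for x in (1.0, 3.0, 6.0, 10.0):
        direct = math.erfc(x)             # computed directly
        via_identity = 1.0 - math.erf(x)  # subject to loss of significance
        print(f"x={x:5.1f}  erfc={direct:.6e}  1-erf={via_identity:.6e}")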

  17. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high dose region due to set-up errors. They also showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors can be essential for correct identification of the value and position of the hot spot

  18. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for the Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven-field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high dose region due to set-up errors. They also showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of applying the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors can be essential for correct identification of the value and position of the hot spot.
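
    A one-dimensional numpy sketch of the fluence-convolution idea (my simplification, not the authors' implementation): blur an idealized fluence profile with the Gaussian density of the random set-up error (sigma = 2 mm) so that a single calculation represents the average over many randomly shifted fractions.

    import numpy as np

    dx = 0.5                                            # grid spacing, mm
    x = np.arange(-50.0, 50.0 + dx, dx)
    fluence = ((x > -20.0) & (x < 20.0)).astype(float)  # idealized open field

    sigma = 2.0                                         # set-up error SD, mm
    kx = np.arange(-4 * sigma, 4 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    kernel /= kernel.sum()                              # conserve total fluence

    blurred = np.convolve(fluence, kernel, mode="same")
    width = dx * np.sum((blurred > 0.2) & (blurred < 0.8))
    print(f"80-20% penumbra after blurring: {width:.1f} mm (sharp field: 0.0 mm)")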

  19. Practical Calculation of Thermal Deformation and Manufacture Error in Surface Grinding

    Institute of Scientific and Technical Information of China (English)

    周里群; 李玉平

    2002-01-01

    The paper presents a method to calculate thermal deformation and manufacturing error in surface grinding. The author established a simplified temperature field model and derived the thermal deformation of the ground workpiece. It is found that there exists not only an upwarp thermal deformation but also a parallel expansion thermal deformation. The upwarp thermal deformation causes a concave shape error in the profile of the workpiece, and the parallel expansion thermal deformation causes a dimension error in height. Worked examples are calculated and compared with published experimental data.

  20. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through an uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived kernel correction, and the remaining random errors were found to be adequately predicted by the proposed method

  1. On the calculation of errors and choice of the parameters of radioisotope following level meters

    International Nuclear Information System (INIS)

    Kalinin, O.V.; Matveev, V.S.; Khatskevich, M.V.

    1979-01-01

    A method for calculating the errors of radioisotope following level meters is considered, taking into account the nonlinearity of the system's control units. The statistical method of analysis of linear control systems and the approximate method of statistical linearization of nonlinear systems are used in calculating the error of a following level meter. Calculation of a nonlinear system by the method of statistical linearization consists of approximating a nonlinear characteristic by a linearized dependence on the basis of a chosen criterion. Dispersion calculations of the output coordinate of a measuring converter are given for different cases of the system input signal. Dependences of the fluctuation error on the system parameters have been plotted for level meters with proportional and relay control on the basis of the given methods. It is shown that the fluctuation error in both cases depends on the time constant of the counting rate meter. The minimal error of the level meter decreases with the growth of the operating counting rate and with the increase of the insensitivity zone width. It is also noted that the parameters of the following level meter should be chosen according to the requirements for measuring error, device reliability and the time needed to fix a reading

  2. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  3. Accuracy requirements for the calculation of gravitational waveforms from coalescing compact binaries in numerical relativity

    International Nuclear Information System (INIS)

    Miller, Mark

    2005-01-01

    I discuss the accuracy requirements on numerical relativity calculations of inspiraling compact object binaries whose extracted gravitational waveforms are to be used as templates for matched filtering signal extraction and physical parameter estimation in modern interferometric gravitational wave detectors. Using a post-Newtonian point particle model for the premerger phase of the binary inspiral, I calculate the maximum allowable errors for the mass and relative velocity and positions of the binary during numerical simulations of the binary inspiral. These maximum allowable errors are compared to the errors of state-of-the-art numerical simulations of multiple-orbit binary neutron star calculations in full general relativity, and are found to be smaller by several orders of magnitude. A post-Newtonian model for the error of these numerical simulations suggests that adaptive mesh refinement coupled with second-order accurate finite difference codes will not be able to robustly obtain the accuracy required for reliable gravitational wave extraction on Terabyte-scale computers. I conclude that higher-order methods (higher-order finite difference methods and/or spectral methods) combined with adaptive mesh refinement and/or multipatch technology will be needed for robustly accurate gravitational wave extraction from numerical relativity calculations of binary coalescence scenarios

  4. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula for errors on a numerical task as compared to errors on a non-numerical task only in the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  5. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  6. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis that higher response conflict increases ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  7. Error Analysis of Relative Calibration for RCS Measurement on Ground Plane Range

    Directory of Open Access Journals (Sweden)

    Wu Peng-fei

    2012-03-01

    Ground plane range is a kind of outdoor Radar Cross Section (RCS) test range used for static measurement of full-size or scaled targets. Starting from the characteristics of the ground plane range, the impact of the environment on targets and calibrators during calibration in RCS measurements is analyzed. The error of relative calibration produced by the different illumination of the target and the calibrator is studied. The relative calibration technique used on a ground plane range is to place the calibrator on a fixed auxiliary pylon somewhere between the radar and the target under test. By considering the effects of ground reflection and the antenna pattern, the relationship between the magnitude of the echoes and the position of the calibrator is discussed. According to the different distances between the calibrator and the target, the difference between free space and the ground plane range is studied and the error of relative calibration is calculated. Numerical simulation results are presented with useful conclusions. The relative calibration error varies with the position of the calibrator, the frequency and the antenna beam width. In most cases, placing the calibrator close to the target keeps the error under control.
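
    The relation underlying relative calibration is simple to state; the sketch below (generic dB bookkeeping of the radar range equation, not the paper's formulation) scales the calibrator's known RCS by the echo-power ratio, with the R^4 range correction made explicit. All numbers are invented for illustration.

    import math

    def relative_rcs_dbsm(sigma_cal_dbsm, p_target_db, p_cal_db,
                          r_target_m, r_cal_m):
        """Target RCS in dBsm from relative calibration against a standard."""
        range_corr_db = 40.0 * math.log10(r_target_m / r_cal_m)  # radar eq. R^4
        return sigma_cal_dbsm + (p_target_db - p_cal_db) + range_corr_db

    # Example: 0 dBsm calibration sphere; target echo 6 dB stronger, farther away.
    print(f"{relative_rcs_dbsm(0.0, -20.0, -26.0, 500.0, 450.0):.2f} dBsm")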

  8. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also have an impact on the severity of a disturbance, e.g. if they disable safety related equipment. Especially common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports which cover two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states, covering inter alia the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected; and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  9. SU-F-T-381: Fast Calculation of Three-Dimensional Dose Considering MLC Leaf Positional Errors for VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Katsuta, Y [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan); Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Kadoya, N; Jingu, K [Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Shimizu, E; Majima, K [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan)]

    2016-06-15

    Purpose: In this study, we developed a system to calculate, in real time, a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT), without an additional treatment planning system calculation. Methods: An original system, based on Clarkson dose calculation, was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarged the aperture size by 1.0 mm. Second, an error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The percentage changes in the dose-volume histogram parameter of mean dose to the target volume were 0.1±0.5% and 0.4±0.3%, and in the generalized equivalent uniform dose of the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.

  10. Calculation of magnetic error fields in hybrid insertion devices

    International Nuclear Information System (INIS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1989-08-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent magnet material. The procedure is based on a three dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy axis misalignment is found for a 5 cm period undulator proposed for the ALS. 11 refs., 5 figs

  11. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  12. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experimental design, we compared the risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) with those of an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious, and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  13. Treatment Planning System Calculation Errors Are Present in Most Imaging and Radiation Oncology Core-Houston Phantom Failures.

    Science.gov (United States)

    Kerns, James R; Stingo, Francesco; Followill, David S; Howell, Rebecca M; Melancon, Adam; Kry, Stephen F

    2017-08-01

    The anthropomorphic phantom program at the Houston branch of the Imaging and Radiation Oncology Core (IROC-Houston) is an end-to-end test that can be used to determine whether an institution can accurately model, calculate, and deliver an intensity modulated radiation therapy dose distribution. Currently, institutions that do not meet IROC-Houston's criteria have no specific information with which to identify and correct problems. In the present study, an independent recalculation system was developed to identify treatment planning system (TPS) calculation errors. A recalculation system was commissioned and customized using IROC-Houston measurement reference dosimetry data for common linear accelerator classes. Using this system, 259 head and neck phantom irradiations were recalculated. Both the recalculation and the institution's TPS calculation were compared with the delivered dose that was measured. In cases in which the recalculation was statistically more accurate by 2% on average or 3% at a single measurement location than was the institution's TPS, the irradiation was flagged as having a "considerable" institutional calculation error. The error rates were also examined according to the linear accelerator vendor and delivery technique. Surprisingly, on average, the reference recalculation system had better accuracy than the institution's TPS. Considerable TPS errors were found in 17% (n=45) of the head and neck irradiations. Also, 68% (n=13) of the irradiations that failed to meet the IROC-Houston criteria were found to have calculation errors. Nearly 1 in 5 institutions were found to have TPS errors in their intensity modulated radiation therapy calculations, highlighting the need for careful beam modeling and calculation in the TPS. An independent recalculation system can help identify the presence of TPS errors and pass on the knowledge to the institution. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Efficacy of surface error corrections to density functional theory calculations of vacancy formation energy in transition metals.

    Science.gov (United States)

    Nandi, Prithwish Kumar; Valsakumar, M C; Chandra, Sharat; Sahu, H K; Sundar, C S

    2010-09-01

    We calculate properties like equilibrium lattice parameter, bulk modulus and monovacancy formation energy for nickel (Ni), iron (Fe) and chromium (Cr) using Kohn-Sham density functional theory (DFT). We compare the relative performance of local density approximation (LDA) and generalized gradient approximation (GGA) for predicting such physical properties for these metals. We also make a relative study between two different flavors of GGA exchange correlation functional, namely PW91 and PBE. These calculations show that there is a discrepancy between DFT calculations and experimental data. In order to understand this discrepancy in the calculation of vacancy formation energy, we introduce a correction for the surface intrinsic error corresponding to an exchange correlation functional using the scheme implemented by Mattsson et al (2006 Phys. Rev. B 73 195123) and compare the effectiveness of the correction scheme for Al and the 3d transition metals.
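
    For reference, the standard supercell expression for the monovacancy formation energy discussed in the abstract (the usual textbook form, not quoted from the paper), in LaTeX:

    % A defect cell with N-1 atoms is compared against a rescaled bulk reference.
    \begin{equation}
      E_v^{f} = E\bigl[(N-1)\,\text{atoms} + \text{vacancy}\bigr]
              - \frac{N-1}{N}\, E_{\text{bulk}}\bigl[N\,\text{atoms}\bigr]
    \end{equation}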

  15. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of elements in the Table).

  16. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of elements in the Table).

  17. Calculation errors of Set-up in patients with tumor location of prostate. Exploratory study; Calculo de errores de Set-up en pacientes con localizacion tumoral de prostata. Estudio exploratorio

    Energy Technology Data Exchange (ETDEWEB)

    Donis Gil, S.; Robayna Duque, B. E.; Jimenez Sosa, A.; Hernandez Armas, O.; Gonzalez Martin, A. E.; Hernandez Armas, J.

    2013-07-01

    The calculation of SM is done from positioning (set-up) errors. These errors are calculated from the 3D movements of the patient. This paper presents an exploratory study of 20 patients with prostate tumors in which the set-up errors are evaluated for two working protocols. (Author)

  18. Calculation of track and vertex errors for detector design studies

    International Nuclear Information System (INIS)

    Harr, R.

    1995-01-01

    The Kalman Filter technique has come into wide use for charged track reconstruction in high-energy physics experiments. It is also well suited for detector design studies, allowing the efficient estimation of optimal track covariance matrices without the need for a hit-level Monte Carlo simulation. Although much has been published about the Kalman filter equations, there is a lack of previous literature explaining how to implement them. In this paper, the operators necessary to implement the Kalman filter equations for two common detector configurations are worked out: a central detector in a uniform solenoidal magnetic field, and a fixed-target detector with no magnetic field in the region of the interactions. With the track covariance matrices in hand, vertex and invariant mass errors are readily calculable. These quantities are particularly interesting for evaluating experiments designed to study weakly decaying particles which give rise to displaced vertices. The optimal vertex errors are obtained via a constrained vertex fit. Solutions are presented to the constrained vertex problem with and without kinematic constraints. Invariant mass errors are obtained via propagation of errors; the use of vertex-constrained track parameters is discussed. Many of the derivations are new or previously unpublished
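
    The covariance-only use of the filter that makes it attractive for design studies can be sketched in a few lines; the matrices below describe a toy straight-line track, not the paper's detector operators.

    import numpy as np

    def kalman_covariance(P, F, Q, H, R, n_planes):
        """Iterate predict/update to get the optimal track covariance."""
        I = np.eye(P.shape[0])
        for _ in range(n_planes):
            P = F @ P @ F.T + Q                # predict to the next plane
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            P = (I - K @ H) @ P                # update with the measurement
        return P

    # Toy track: state = (position, slope), measurement planes 10 cm apart.
    F = np.array([[1.0, 10.0], [0.0, 1.0]])    # transport between planes
    Q = np.diag([0.0, 1e-6])                   # multiple-scattering noise
    H = np.array([[1.0, 0.0]])                 # each plane measures position only
    R = np.array([[0.01 ** 2]])                # 100 um hit resolution, in cm
    P0 = np.diag([1.0, 0.1])                   # loose prior on the track state
    print(kalman_covariance(P0, F, Q, H, R, n_planes=10))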

  19. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to the parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas

  20. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approach and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification as systematic and random errors. Statistical fundamentals: probability theory, population distributions, Bernoulli, Poisson, Gauss, the t-distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test
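
    A short worked example of the counting statistics listed above: for a Poisson process the standard deviation of N counts is sqrt(N), and the error of a background-subtracted net rate follows from error propagation (the counts below are hypothetical).

    import math

    gross, t_g = 10_000, 60.0   # gross counts in 60 s
    bkg, t_b = 2_400, 60.0      # background counts in 60 s

    net_rate = gross / t_g - bkg / t_b
    # Rate variances add in quadrature: Var(N/t) = N / t^2 for Poisson counts.
    sigma_net = math.sqrt(gross / t_g ** 2 + bkg / t_b ** 2)
    print(f"net rate = {net_rate:.2f} +/- {sigma_net:.2f} counts/s "
          f"({100 * sigma_net / net_rate:.1f}% relative)")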

  1. Repair for scattering expansion truncation errors in transport calculations

    International Nuclear Information System (INIS)

    Emmett, M.B.; Childs, R.L.; Rhoades, W.A.

    1980-01-01

    Legendre expansion of angular scattering distributions is usually limited to P3 in practical transport calculations. This truncation often results in non-trivial errors, especially alternating negative and positive lateral scattering peaks. The effect is especially prominent in forward-peaked situations such as the within-group component of the Compton scattering of gammas. Increasing the expansion to P7 often makes the peaks larger and narrower. Ward demonstrated an accurate repair, but his method requires special cross-section sets and codes. The DOT IV code provides a fully compatible, but heuristic, repair of the erroneous scattering. An analytical Klein-Nishina estimator, newly available in the MORSE code, allows a test of this method. In the MORSE calculation, particle scattering histories are calculated in the usual way, with scoring by an estimator routine at each collision site. Results for both the conventional P3 estimator and the analytical estimator were obtained. In the DOT calculation, the source moments are expanded into the directional representation at each iteration. Optionally, a sorting procedure removes all negatives, and removes enough small positive values to restore particle conservation. The effect of this is to replace the alternating positive and negative values with positive values of plausible magnitude. The accuracy of those values is examined herein.
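
    The sorting repair described for DOT above (zero the negatives, then remove just enough of the small positive values to restore particle conservation) fits in a few lines. The sketch below is one plausible reading of that heuristic, not the DOT IV source:

```python
import numpy as np

def repair_scattering_source(s):
    """Heuristic repair of an expanded directional source: clip the
    negative values, then trim the smallest positive values until the
    original particle total is restored."""
    s = np.asarray(s, dtype=float)
    total = s.sum()                     # particle-conservation target
    out = np.clip(s, 0.0, None)         # remove the negative peaks
    excess = out.sum() - total          # amount added by the clipping
    for i in np.argsort(out):           # smallest values first
        if excess <= 0.0:
            break
        take = min(out[i], excess)
        out[i] -= take
        excess -= take
    return out

# Alternating negative/positive lateral peaks -> positive values only,
# with the particle total (0.80 here) preserved
print(repair_scattering_source([0.50, -0.08, 0.30, 0.02, -0.04, 0.10]))
```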

  2. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Science.gov (United States)

    Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo

    2018-03-01

    Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions, including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the observed orbit-related errors are further investigated.

  3. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Directory of Open Access Journals (Sweden)

    S. Esselborn

    2018-03-01

    Based on a set of test orbits calculated at GFZ, the sources of the observed orbit-related errors are further investigated. The main contributors on all timescales are uncertainties in Earth's time-variable gravity field models and, on annual to interannual timescales, discrepancies between the tracking station subnetworks, i.e. satellite laser ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS).

  4. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

    We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.

  5. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

    A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion. The method is verified by comparison of calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 μm, 0.3 μm, and 0.12 μm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs.

  6. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    Science.gov (United States)

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understanding: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  7. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
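
    To experiment with the setup being analyzed, one can solve a small pressure Poisson problem and perturb the source data to mimic PIV measurement error. The sketch below uses a plain Jacobi iteration with Dirichlet boundaries; the grid, boundary treatment, and noise level are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def solve_poisson(rhs, p_bc, n_iter=5000):
    """Jacobi iteration for lap(p) = rhs on the unit square with
    Dirichlet boundary values taken from p_bc."""
    n = rhs.shape[0]
    h = 1.0 / (n - 1)
    p = p_bc.copy()
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                p[1:-1, 2:] + p[1:-1, :-2] -
                                h * h * rhs[1:-1, 1:-1])
    return p

n = 65
rng = np.random.default_rng(0)
rhs = np.zeros((n, n))        # source term built from the velocity field
bc = np.zeros((n, n))         # pressure boundary condition

p_clean = solve_poisson(rhs, bc)
# Perturb the source data as a stand-in for PIV velocity error
p_noisy = solve_poisson(rhs + 0.1 * rng.standard_normal((n, n)), bc)
print("max propagated pressure error:", np.abs(p_noisy - p_clean).max())
```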

  8. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  9. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587

  10. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  11. Prevalence of refractive errors in the Slovak population calculated using the Gullstrand schematic eye model.

    Science.gov (United States)

    Popov, I; Valašková, J; Štefaničková, J; Krásnik, V

    2017-01-01

    A substantial part of the population suffers from some kind of refractive error. It is envisaged that the prevalence of refractive errors may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data was obtained using the Lenstar LS900 optical biometer. Data which could not be obtained due to the limitations of the device was substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes were presented. The data was interpreted using descriptive statistics, Pearson correlation and t-test. The statistical tests were conducted at a level of significance of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% was myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in future for comparing the prevalence of refractive errors determined using the same methods. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.

  12. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  13. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  14. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  15. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  16. Event-Related Potentials for Post-Error and Post-Conflict Slowing

    Science.gov (United States)

    Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R.

    2014-01-01

    In a reaction time task, people typically slow down following an error or conflict, phenomena termed post-error slowing (PES) and post-conflict slowing (PCS), respectively. Despite many studies of the cognitive mechanisms, the neural responses of PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, error-positivity (Pe) but not error-related negativity (ERN) was greater in the stop error trials preceding PES than in non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work on cognitive control by specifying the neural correlates of PES and PCS in the stop-signal task. PMID:24932780

  17. A lower bound on the relative error of mixed-state cloning and related operations

    International Nuclear Information System (INIS)

    Rastegin, A E

    2003-01-01

    We extend the concept of the relative error to mixed-state cloning and related physical operations, in which the ancilla contains some information a priori about the input state. The lower bound on the relative error is obtained. It is shown that this result provides further support for a stronger no-cloning theorem

  18. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes. Focusing on the acquisition of one grammar element, both for immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to the learning context. While the findings are discussed in relation to the previous literature, this paper concludes by creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  19. Maths anxiety and medication dosage calculation errors: A scoping review.

    Science.gov (United States)

    Williams, Brett; Davis, Samantha

    2016-09-01

    A student's accuracy on drug calculation tests may be influenced by maths anxiety, which can impede one's ability to understand and complete mathematical problems. It is important for healthcare students to overcome this barrier when calculating drug dosages in order to avoid administering the incorrect dose to a patient in the clinical setting. The aim of this study was to examine the effects of maths anxiety on healthcare students' ability to accurately calculate drug dosages by performing a scoping review of the existing literature. This review utilised a six-stage methodology using the following databases: CINAHL, Embase, Medline, Scopus, PsycINFO, Google Scholar, Trip database (http://www.tripdatabase.com/) and Grey Literature report (http://www.greylit.org/). After an initial title/abstract review of relevant papers, and then a full-text review of the remaining papers, six articles were selected for inclusion in this study. Of the six articles included, there were three experimental studies, two quantitative studies and one mixed-method study. All studies addressed nursing students and the presence of maths anxiety. No relevant studies from other disciplines were identified in the existing literature. Three studies took place in the U.S., the remainder in Canada, Australia and the United Kingdom. Upon analysis of these studies, four factors, including maths anxiety, were identified as having an influence on a student's drug dosage calculation abilities. Ultimately, the results from this review suggest more research is required in nursing and other relevant healthcare disciplines regarding the effects of maths anxiety on drug dosage calculations. This additional knowledge will be important to further inform the development of strategies to decrease the potentially serious effects of errors in drug dosage calculation on patient safety. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Making related errors facilitates learning, but learners do not know it.

    Science.gov (United States)

    Huelser, Barbie J; Metcalfe, Janet

    2012-05-01

    Producing an error, so long as it is followed by corrective feedback, has been shown to result in better retention of the correct answers than does simply studying the correct answers from the outset. The reasons for this surprising finding, however, have not been investigated. Our hypothesis was that the effect might occur only when the errors produced were related to the targeted correct response. In Experiment 1, participants studied either related or unrelated word pairs, manipulated between participants. Participants either were given the cue and target to study for 5 or 10 s or generated an error in response to the cue for the first 5 s before receiving the correct answer for the final 5 s. When the cues and targets were related, error-generation led to the highest correct retention. However, consistent with the hypothesis, no benefit was derived from generating an error when the cue and target were unrelated. Latent semantic analysis revealed that the errors generated in the related condition were related to the target, whereas they were not related to the target in the unrelated condition. Experiment 2 replicated these findings in a within-participants design. We found, additionally, that people did not know that generating an error enhanced memory, even after they had just completed the task that produced substantial benefits.

  1. Neutron data error estimate of criticality calculations for lattice in shielding containers with metal fissionable materials

    International Nuclear Information System (INIS)

    Vasil'ev, A.P.; Krepkij, A.S.; Lukin, A.V.; Mikhal'kova, A.G.; Orlov, A.I.; Perezhogin, V.D.; Samojlova, L.Yu.; Sokolov, Yu.A.; Terekhin, V.A.; Chernukhin, Yu.I.

    1991-01-01

    Critical mass experiments were performed using assemblies which simulated one-dimensional lattice consisting of shielding containers with metal fissile materials. Calculations of the criticality of the above assemblies were carried out using the KLAN program with the BAS neutron constants. Errors in the calculations of the criticality for one-, two-, and three-dimensional lattices are estimated. 3 refs.; 1 tab

  2. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    A scheme is presented for the calculation of errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given, as described in the sketch after this paragraph. A critical analysis of the estimation of the obtained results has been carried out. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
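
    For instance, for the relative growth rate RGR = (ln W2 - ln W1)/(t2 - t1), standard error propagation with independent dry-mass errors gives σ_RGR = sqrt((σ_W1/W1)² + (σ_W2/W2)²)/(t2 - t1). A minimal sketch of this one characteristic, with made-up numbers rather than the paper's oat or maize data:

```python
import math

def rgr_with_error(w1, s1, w2, s2, t1, t2):
    """Relative growth rate and its propagated absolute error, assuming
    independent errors s1, s2 on the dry masses w1, w2."""
    dt = t2 - t1
    rgr = (math.log(w2) - math.log(w1)) / dt
    err = math.sqrt((s1 / w1) ** 2 + (s2 / w2) ** 2) / dt
    return rgr, err

# Illustrative values: 1.2 +/- 0.1 g at day 10, 3.5 +/- 0.2 g at day 20
print(rgr_with_error(1.2, 0.1, 3.5, 0.2, 10.0, 20.0))
```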

  3. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  4. Analysis of causes and effects errors in calculation of rolling slewing bearings capacity

    Directory of Open Access Journals (Sweden)

    Marek Krynke

    2016-09-01

    In the paper, the basic design features and essential assumptions of the calculation models, as well as the factors influencing quality improvement and improvement of the process of calculating the capacity of rolling slewing bearings, are discussed. The aim of the research is the identification and elimination of sources of errors in determining the characteristics of slewing bearing capacity. The results of the research serve to determine the risk of making mistakes and to specify tips for designers of slewing bearings. It is shown that a numerical method must be applied and that the real working conditions of the bearing, first of all deformations of the carrying structure, must be taken into account.

  5. A dose error evaluation study for 4D dose calculations

    Science.gov (United States)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry, such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformation generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROIs) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH metric and the mean-dose metric for different scenarios (voxel sizes of 8 mm, 4 mm, 2 mm and 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and the most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex

  6. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
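
    The abstract gives only the structure of the method, so the following Python sketch is one plausible reading for an unconstrained least-squares ray problem: the line search along each conjugate direction uses an error evaluated on a random subset of rays (rows), while the constraint handling of the actual invention is omitted.

```python
import numpy as np

def subset_cg(A, b, n_iter=50, subset_frac=0.3, seed=0):
    """Conjugate-gradient-style minimization of 0.5*||Ax - b||^2 in which
    each line search minimizes an approximate error computed on a random
    subset of the rays (rows of A)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    g = A.T @ (A @ x - b)                       # full gradient
    d = -g
    for _ in range(n_iter):
        rows = rng.choice(m, size=max(1, int(subset_frac * m)), replace=False)
        As, bs = A[rows], b[rows]
        Ad = As @ d
        denom = Ad @ Ad
        if denom == 0.0:
            break
        alpha = -((As @ x - bs) @ Ad) / denom   # minimum of the subset error
        x = x + alpha * d
        g_new = A.T @ (A @ x - b)
        beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x

A = np.random.default_rng(1).standard_normal((200, 30))
b = A @ np.ones(30)
x = subset_cg(A, b)
print("residual:", np.linalg.norm(A @ x - b))   # typically well below ||b||
```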

  7. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results at low computational cost compared to other computation methods existing in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
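
    The central idea, conditioning the error probability on the fluctuating chaotic bit energy and then averaging over its distribution, can be illustrated generically. In the sketch below, the spreading map and the lumped noise/interference variances are placeholders, not the paper's multipath RAKE expressions.

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_from_energy(bit_energies, noise_var, mui_var=0.0):
    """BER of antipodal signaling with a fluctuating chaotic bit energy:
    condition on each bit's energy, then average over the empirical
    distribution (noise_var/mui_var lump channel noise and multiuser
    interference in this placeholder model)."""
    return q_func(np.sqrt(bit_energies / (noise_var + mui_var))).mean()

# Hypothetical bit-energy sample: 64 chips/bit from a Chebyshev-type map
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(10000, 64))
for _ in range(50):
    x = 1.0 - 2.0 * x ** 2          # chaotic spreading sequence
E_b = (x ** 2).sum(axis=1)          # energy varies from bit to bit
print(ber_from_energy(E_b, noise_var=16.0))
```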

  8. Relating physician's workload with errors during radiation therapy planning.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

    To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and of the performance errors occurred among the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.

  9. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "Shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "Shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89). Incorporating the uncertainty-related decrease in reliability into the SAINT I recalculation increased the required sample size. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
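
    A schematic of the error-percentage computation just described: combine an outcome distribution with an inter-rater confusion matrix and read off the probability of a discrepant assignment, for the full scale and for each dichotomization cut-point. The mRS distribution and confusion matrix below are hypothetical stand-ins for the published inter-rater data.

```python
import numpy as np

def error_rates(p_true, confusion):
    """Error percentage for the full ordinal scale and for each
    dichotomization, given the outcome distribution p_true and a rater
    confusion matrix C[j, k] = P(observed k | true j)."""
    p_true = np.asarray(p_true, float)
    C = np.asarray(confusion, float)
    m = len(p_true)
    full = 1.0 - (p_true * np.diag(C)).sum()   # any off-diagonal assignment
    cuts = {}
    for c in range(m - 1):                     # dichotomize at score <= c
        cuts[c] = sum(p_true[j] * C[j, k]
                      for j in range(m) for k in range(m)
                      if (j <= c) != (k <= c))
    return full, cuts

# Hypothetical mRS distribution (scores 0-6) and near-diagonal confusion
p = [0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10]
C = 0.8 * np.eye(7) + 0.1 * (np.eye(7, k=1) + np.eye(7, k=-1))
C[0, 0] += 0.1
C[6, 6] += 0.1                                 # rows sum to 1
full, cuts = error_rates(p, C)
print(full, cuts)   # dichotomized errors are smaller than the full-scale one
```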

  10. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Background: Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, along with equivalent formulas for misclassification cost (the percentage increase in the minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large, while the cost of misclassifying an affected as a control approaches 0. Conclusion: Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
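
    The mechanics described above (observed genotype distributions under misclassification, a non-centrality parameter, asymptotic power) can be sketched directly. The simple mixing model and all frequencies below are hypothetical illustrations, not the authors' full parameterization, which also involves disease prevalence.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def chi2_power(p_case, p_ctrl, n_case, n_ctrl,
               frac_ctrl_in_cases=0.0, frac_case_in_ctrls=0.0, alpha=0.05):
    """Asymptotic power of the Pearson chi-square independence test for a
    2 x k case/control genotype table, with phenotype error modeled as a
    fraction of each sample truly belonging to the other group."""
    p_case, p_ctrl = np.asarray(p_case, float), np.asarray(p_ctrl, float)
    # Genotype distributions actually observed in the mislabeled samples
    q_case = (1 - frac_ctrl_in_cases) * p_case + frac_ctrl_in_cases * p_ctrl
    q_ctrl = (1 - frac_case_in_ctrls) * p_ctrl + frac_case_in_ctrls * p_case
    p_bar = (n_case * q_case + n_ctrl * q_ctrl) / (n_case + n_ctrl)
    ncp = (n_case * ((q_case - p_bar) ** 2 / p_bar).sum() +
           n_ctrl * ((q_ctrl - p_bar) ** 2 / p_bar).sum())
    df = len(p_case) - 1
    return ncx2.sf(chi2.ppf(1 - alpha, df), df, ncp)

p_case = [0.30, 0.50, 0.20]     # hypothetical genotype frequencies, cases
p_ctrl = [0.40, 0.46, 0.14]     # ... and controls
print(chi2_power(p_case, p_ctrl, 500, 500))              # no phenotype error
print(chi2_power(p_case, p_ctrl, 500, 500, 0.05, 0.05))  # 5% misclassification
```

    Raising the sample sizes until the second call matches the error-free power would give the misclassification cost discussed in the abstract.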

  11. Refractive error magnitude and variability: Relation to age.

    Science.gov (United States)

    Irving, Elizabeth L; Machan, Carolyn M; Lam, Sharon; Hrynchak, Patricia K; Lillakas, Linda

    2018-03-19

    To investigate mean ocular refraction (MOR) and astigmatism over the human age range and compare the severity of refractive error to earlier studies from clinical populations having large age ranges. For this descriptive study, patient age, refractive error and history of surgery affecting refraction were abstracted from the Waterloo Eye Study database (WatES). Average MOR, standard deviation of MOR and astigmatism were assessed in relation to age. Refractive distributions for developmental age groups were determined. MOR standard deviation relative to average MOR was evaluated. Data from earlier clinically based studies with similar age ranges were compared to WatES. Right eye refractive errors were available for 5933 patients with no history of surgery affecting refraction. Average MOR varied with age. Children <1 yr of age were the most hyperopic (+1.79 D) and the highest magnitude of myopia was found at 27 yrs (-2.86 D). MOR distributions were leptokurtic and negatively skewed. The mode varied with age group. MOR variability increased with increasing myopia. Average astigmatism increased gradually to age 60, after which it increased at a faster rate. By 85+ years it was 1.25 D. The J0 power vector became increasingly negative with age. J45 power vector values remained close to zero, but variability increased at approximately 70 years. In relation to comparable earlier studies, WatES data were the most myopic. Mean ocular refraction and refractive error distribution vary with age. The highest magnitude of myopia is found in young adults. Similar to prevalence, the severity of myopia also appears to have increased since 1931. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  12. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    Science.gov (United States)

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and neurocognitive deficits comparable to those of their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne], related to early error-monitoring processing, and the error-related positivity [Pe], involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  13. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). The antimicrobials was the second most common drug class associated with errors (19.1%) and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    We provide the user with programs to calculate and incorporate errors into sample size estimation.

  15. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  16. How to deal with multiple binding poses in alchemical relative protein-ligand binding free energy calculations.

    Science.gov (United States)

    Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle

    2015-06-09

    Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the
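
    The combination step, merging relative binding free energies computed from independently enumerated poses into one corrected value, is a Boltzmann sum, ΔG = -kT ln Σ_i exp(-ΔG_i/kT), whose largest weight also identifies the dominant binding mode. Below is a small sketch with invented per-pose values; it reproduces only the combination rule, not the paper's full FEP workflow.

```python
import numpy as np
from scipy.special import logsumexp

def combine_poses(dg_per_pose, T=300.0):
    """Boltzmann-combine binding free energies (kcal/mol) from separate
    FEP runs, one per enumerated pose, into a single corrected estimate,
    and return the implied population of each binding mode."""
    kT = 0.0019872041 * T                  # Boltzmann constant (kcal/mol/K) * T
    dg = np.asarray(dg_per_pose, float)
    log_z = logsumexp(-dg / kT)
    dg_total = -kT * log_z                 # combined binding free energy
    weights = np.exp(-dg / kT - log_z)     # predicted pose populations
    return dg_total, weights

# Invented per-pose results (kcal/mol): pose 2 dominates
dg_total, w = combine_poses([-7.1, -8.4, -6.3])
print(dg_total, w)   # combined value slightly more favorable than -8.4
```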

  17. How To Deal with Multiple Binding Poses in Alchemical Relative Protein–Ligand Binding Free Energy Calculations

    Science.gov (United States)

    2016-01-01

    Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the

  18. [Event-related EEG potentials associated with error detection in psychiatric disorder: literature review].

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2010-01-01

    Error-related bioelectric signals constitute a special subgroup of event-related potentials. Researchers have identified two evoked potential components to be closely related to error processing, namely error-related negativity (ERN) and error-positivity (Pe), and they linked these to specific cognitive functions. In our article first we give a brief description of these components, then based on the available literature, we review differences in error-related evoked potentials observed in patients across psychiatric disorders. The PubMed and Medline search engines were used in order to identify all relevant articles, published between 2000 and 2009. For the purpose of the current paper we reviewed publications summarizing results of clinical trials. Patients suffering from schizophrenia, anorexia nervosa or borderline personality disorder exhibited a decrease in the amplitude of error-negativity when compared with healthy controls, while in cases of depression and anxiety an increase in the amplitude has been observed. Some of the articles suggest specific personality variables, such as impulsivity, perfectionism, negative emotions or sensitivity to punishment to underlie these electrophysiological differences. Research in the field of error-related electric activity has come to the focus of psychiatry research only recently, thus the amount of available data is significantly limited. However, since this is a relatively new field of research, the results available at present are noteworthy and promising for future electrophysiological investigations in psychiatric disorders.

  19. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted prescription errors. Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases.

  20. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
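
    A minimal version of the positioning step is a Newton-Raphson iteration on the four sphere equations. Following common GPS practice, a receiver clock-bias term is included as the fourth unknown (an assumption here, since the abstract speaks only of four spheres); the satellite coordinates and ranges below are made up.

```python
import numpy as np

def gps_fix(sats, pseudoranges, n_iter=10):
    """Newton-Raphson solution of ||x - s_i|| + b = rho_i for the receiver
    position x (3 unknowns) and the clock-bias range b, given four
    satellite positions s_i and measured pseudoranges rho_i."""
    state = np.zeros(4)                        # (x, y, z, b) initial guess
    for _ in range(n_iter):
        x, b = state[:3], state[3]
        diff = x - sats                        # shape (4, 3)
        dist = np.linalg.norm(diff, axis=1)
        f = dist + b - pseudoranges            # residuals of the 4 spheres
        J = np.hstack([diff / dist[:, None], np.ones((4, 1))])
        state = state - np.linalg.solve(J, f)  # Newton step on 4x4 system
    return state

# Made-up satellite positions (km) and error-free pseudoranges
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0,   610.0, 18390.0]])
x_true, b_true = np.array([1111.0, 2222.0, 3333.0]), 10.0
rho = np.linalg.norm(x_true - sats, axis=1) + b_true
print(gps_fix(sats, rho))   # should recover x_true and b_true
```

    Perturbing the pseudoranges and re-solving yields the kind of absolute and relative positioning errors analyzed in the paper.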

  1. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  2. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  3. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

    We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)

  4. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

    The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems

  5. A method for local transport analysis in tokamaks with error calculation

    International Nuclear Information System (INIS)

    Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.

    1989-01-01

    Global transport studies have revealed that heat transport in a tokamak is anomalous, but cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity heat diffusion only is considered: ρ = -∇T/q (1), where ρ = κ⁻¹ = (nχ)⁻¹ is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles. The latter cannot be measured directly and is partly determined on the basis of models. This complication will not be considered here. Since in eq. (1) the gradient of T appears, noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied. A criterion is needed to select the optimal smoothing. Too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows (1) the stable and accurate calculation of the ρ-profile, and (2) the analytical calculation of the error in this profile. (author) 5 refs., 3 figs
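
    One possible numerical realization of this idea is sketched below: ρ(r) is expanded in a cosine series, the coefficients are obtained from a linear least-squares fit to the measured profiles, and the error in the reconstructed ρ-profile follows analytically from the coefficient covariance. The profiles, noise level, and fitting details are invented; the record's own formulation works from the Fourier coefficients of T and q directly.

```python
# Hedged sketch: cosine-series reconstruction of rho(r) with an analytic
# error estimate. Profiles and noise below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.01, 1.0, 80)
rho_true = 1.0 + 0.5 * np.cos(np.pi * r)             # "true" resistivity
q = r / (1 + r**2)                                   # invented heat-flux profile
dr = r[1] - r[0]
# Integrate dT/dr = -rho*q to make a synthetic "measured" temperature profile.
T = 3.0 - np.cumsum(rho_true * q) * dr
T_noisy = T + rng.normal(0, 0.005, r.size)

# T(0) - T(r) = sum_k a_k * int_0^r cos(k*pi*s) q(s) ds  -- linear in a_k.
K = 4
X = np.column_stack([np.cumsum(np.cos(k * np.pi * r) * q) * dr for k in range(K)])
y = T_noisy[0] - T_noisy

a, *_ = np.linalg.lstsq(X, y, rcond=None)            # cosine-series coefficients
resid = y - X @ a
sigma2 = resid @ resid / (r.size - K)
cov_a = sigma2 * np.linalg.inv(X.T @ X)              # analytic coefficient covariance

C = np.column_stack([np.cos(k * np.pi * r) for k in range(K)])
rho_fit = C @ a
rho_err = np.sqrt(np.einsum('ij,jk,ik->i', C, cov_a, C))  # pointwise 1-sigma error
print(rho_fit[:5], rho_err[:5])
```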

  6. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  7. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    Science.gov (United States)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. The mode numbers are (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and these field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.

  8. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  9. Calculation and simulation on mid-spatial frequency error in continuous polishing

    International Nuclear Information System (INIS)

    Xie Lei; Zhang Yunfan; You Yunfeng; Ma Ping; Liu Yibin; Yan Dingyao

    2013-01-01

    Based on theoretical model of continuous polishing, the influence of processing parameters on the polishing result was discussed. Possible causes of mid-spatial frequency error in the process were analyzed. The simulation results demonstrated that the low spatial frequency error was mainly caused by large rotating ratio. The mid-spatial frequency error would decrease as the low spatial frequency error became lower. The regular groove shape was the primary reason of the mid-spatial frequency error. When irregular and fitful grooves were adopted, the mid-spatial frequency error could be lessened. Moreover, the workpiece swing could make the polishing process more uniform and reduce the mid-spatial frequency error caused by the fix-eccentric plane polishing. (authors)

  10. Masked and unmasked error-related potentials during continuous control and feedback

    Science.gov (United States)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the

  11. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious....... The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd....

  12. Relating faults in diagnostic reasoning with diagnostic errors and patient harm.

    NARCIS (Netherlands)

    Zwaan, L.; Thijs, A.; Wagner, C.; Wal, G. van der; Timmermans, D.R.M.

    2012-01-01

    Purpose: The relationship between faults in diagnostic reasoning, diagnostic errors, and patient harm has hardly been studied. This study examined suboptimal cognitive acts (SCAs; i.e., faults in diagnostic reasoning), related them to the occurrence of diagnostic errors and patient harm, and studied

  13. Assessing the reliability of calculated catalytic ammonia synthesis rates

    DEFF Research Database (Denmark)

    Medford, Andrew James; Wellendorff, Jess; Vojvodic, Aleksandra

    2014-01-01

    We introduce a general method for estimating the uncertainty in calculated materials properties based on density functional theory calculations. We illustrate the approach for a calculation of the catalytic rate of ammonia synthesis over a range of transition-metal catalysts. The correlation...... between errors in density functional theory calculations is shown to play an important role in reducing the predicted error on calculated rates. Uncertainties depend strongly on reaction conditions and catalyst material, and the relative rates between different catalysts are considerably better described...

  14. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors on the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month (P < 0.05). HCPs reporting high overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors whereas overall stress revealed a 2-fold higher trend.

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
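
    A toy numerical illustration may help fix the comparison being made: the sketch below invents a linear observable with K systematic parameters and compares the spread of unisim-style and multisim-style estimates of the total systematic variance. All numbers are invented and the models are far simpler than those in the record.

```python
# Hedged toy comparison of unisim vs. multisim systematic-variance estimates.
import numpy as np

rng = np.random.default_rng(1)
K = 10                       # number of systematic parameters
c = rng.normal(0, 1, K)      # invented sensitivities of the observable
sigma_mc = 2.0               # statistical noise of a single MC run
true_var = np.sum(c**2)      # true total systematic variance (in unit sigmas)

def unisim():
    # One run per parameter, shifted by +1 sigma, compared to a nominal run.
    nominal = rng.normal(0, sigma_mc)
    shifts = np.array([c[k] + rng.normal(0, sigma_mc) - nominal for k in range(K)])
    return np.sum(shifts**2)

def multisim(n_runs=K + 1):
    # Every run draws all parameters from their priors; the spread of the
    # results estimates the total systematic variance directly.
    results = np.array([c @ rng.normal(0, 1, K) + rng.normal(0, sigma_mc)
                        for _ in range(n_runs)])
    return np.var(results, ddof=1) - sigma_mc**2

uni = [unisim() for _ in range(500)]
multi = [multisim() for _ in range(500)]
print(f"true {true_var:.2f}  unisim {np.mean(uni):.2f}+-{np.std(uni):.2f}"
      f"  multisim {np.mean(multi):.2f}+-{np.std(multi):.2f}")
```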

  16. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  17. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually takes place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included, poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. The statistical error of Green's function Monte Carlo

    International Nuclear Information System (INIS)

    Ceperley, D.M.

    1986-01-01

    The statistical error in the ground state energy as calculated by Green's Function Monte Carlo (GFMC) is analyzed and a simple approximate formula is derived which relates the error to the number of steps of the random walk, the variational energy of the trial function, and the time step of the random walk. Using this formula it is argued that as the thermodynamic limit is approached with N identical molecules, the computer time needed to reach a given error per molecule increases as N^b where 0.5 < b < 1.5, and as the nuclear charge Z of a system is increased the computer time necessary to reach a given error grows as Z^5.5. Thus GFMC simulations will be most useful for calculating the properties of low-Z elements. The implications for choosing the optimal trial function from a series of trial functions are also discussed

  19. Errors in the calculation of sub-soil moisture probe by equivalent moisture content technique

    International Nuclear Information System (INIS)

    Lakshmipathy, A.V.; Gangadharan, P.

    1982-01-01

    The size of the soil sample required to obtain the saturation response with a neutron moisture probe is quite large, and this poses practical problems of handling and mixing large amounts of sample for absolute laboratory calibration. Hydrogenous materials are used as a substitute for water in the equivalent moisture content technique for calibration of soil moisture probes. In this it is assumed that only the hydrogen of the bulk sample is responsible for the slowing down of fast neutrons, and the slow neutron count rate is correlated to equivalent water content by considering the hydrogen density of the sample. It is observed that the higher atomic number elements present in water-equivalent media also affect the response of the soil moisture probe. Hence calculations, as well as experiments, were undertaken to determine the order of error introduced by this technique. The thermal and slow neutron flux distribution around the BF3 counter of a sub-soil moisture probe is calculated using three-group diffusion theory. The response of the probe corresponding to different equivalent moisture contents of hydrogenous media is calculated taking into consideration the effective length of the BF3 counter. Soil with hydrogenous media such as polyethylene, sugar and water is considered for calculation, to verify the suitability of these materials as substitutes for water during calibration of soil moisture probes. Experiments were conducted to verify the theoretically calculated values. (author)

  20. Intelligence and Neurophysiological Markers of Error Monitoring Relate to Children's Intellectual Humility.

    Science.gov (United States)

    Danovitch, Judith H; Fisher, Megan; Schroder, Hans; Hambrick, David Z; Moser, Jason

    2017-09-18

    This study explored developmental and individual differences in intellectual humility (IH) among 127 children ages 6-8. IH was operationalized as children's assessment of their knowledge and willingness to delegate scientific questions to experts. Children completed measures of IH, theory of mind, motivational framework, and intelligence, and neurophysiological measures indexing early (error-related negativity [ERN]) and later (error positivity [Pe]) error-monitoring processes related to cognitive control. Children's knowledge self-assessment correlated with question delegation, and older children showed greater IH than younger children. Greater IH was associated with higher intelligence but not with social cognition or motivational framework. ERN related to self-assessment, whereas Pe related to question delegation. Thus, children show separable epistemic and social components of IH that may differentially contribute to metacognition and learning. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  1. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  2. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization

  3. Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.

    Science.gov (United States)

    Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A

    2013-04-15

    Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relative new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. Linear constraint relations in biochemical reaction systems: I. Classification of the calculability and the balanceability of conversion rates.

    Science.gov (United States)

    van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C

    1994-01-05

    Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. The matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented into a computer program. (c) 1994 John Wiley & Sons
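
    A minimal sketch of the redundancy-matrix idea, under the usual formulation in which the balances split over measured and non-measured rates as E_m x_m + E_c x_c = 0. The matrices and measured rates below are invented; the projection used to build R and the least-squares balancing step are standard linear algebra, not necessarily the paper's exact equations.

```python
# Hedged sketch: redundancy matrix R and least-squares balancing of
# measured conversion rates. All matrices and rates are invented.
import numpy as np

E_m = np.array([[1.0, 1.0, 0.0],     # balance rows restricted to measured rates
                [0.0, 1.0, 1.0],
                [1.0, 0.0, 1.0]])
E_c = np.array([[1.0], [0.0], [1.0]])  # column for the single non-measured rate

# Redundancy matrix: the part of the measured-rate balances that survives
# after projecting out everything attributable to the non-measured rates.
R = (np.eye(3) - E_c @ np.linalg.pinv(E_c)) @ E_m
print("independent redundancy relations:", np.linalg.matrix_rank(R))

# Balancing: the smallest least-squares adjustment of the measurements x_m
# such that the redundancy relations R @ x_hat = 0 hold exactly.
x_m = np.array([1.02, -0.53, 0.49])    # invented noisy measured rates
x_hat = x_m - np.linalg.pinv(R) @ (R @ x_m)
print("balanced rates:", x_hat, "residual:", R @ x_hat)
```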

  5. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the question that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method, and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
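
    For reference, the classical rescaled-range estimator mentioned in this record (the basis of H_R) can be sketched as follows. This is the textbook R/S procedure, not necessarily the exact variant used by the authors, and the test series is synthetic.

```python
# Hedged sketch: classical rescaled-range (R/S) estimate of the Hurst coefficient.
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate H from the slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, float)
    n = x.size
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())          # cumulative departures from the mean
            r = dev.max() - dev.min()              # range of the partial sums
            s = w.std(ddof=1)                      # sample standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(2)
print(hurst_rs(rng.normal(size=4096)))   # white noise: expect H close to 0.5
```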

  6. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  7. The role of hand of error and stimulus orientation in the relationship between worry and error-related brain activity: Implications for theory and practice.

    Science.gov (United States)

    Lin, Yanli; Moran, Tim P; Schroder, Hans S; Moser, Jason S

    2015-10-01

    Anxious apprehension/worry is associated with exaggerated error monitoring; however, the precise mechanisms underlying this relationship remain unclear. The current study tested the hypothesis that the worry-error monitoring relationship involves left-lateralized linguistic brain activity by examining the relationship between worry and error monitoring, indexed by the error-related negativity (ERN), as a function of hand of error (Experiment 1) and stimulus orientation (Experiment 2). Results revealed that worry was exclusively related to the ERN on right-handed errors committed by the linguistically dominant left hemisphere. Moreover, the right-hand ERN-worry relationship emerged only when stimuli were presented horizontally (known to activate verbal processes) but not vertically. Together, these findings suggest that the worry-ERN relationship involves left hemisphere verbal processing, elucidating a potential mechanism to explain error monitoring abnormalities in anxiety. Implications for theory and practice are discussed. © 2015 Society for Psychophysiological Research.

  8. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced by radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error relates to the surface quality created by DEM data, so it matters for geomorphology and hydrologic applications using DEM data. Taking Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets, including Shuttle Radar Terrain Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS world 3D-30m (AW3D) data. Through processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was achieved using slope values based on the vertical error difference and the horizontal distance of the point pairs. Finally, a false slope ratio (FSR) index was computed through analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were categorically compared for the four DEM datasets under different slope classes. Research results show: Overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error; then the SRTM1 data, whose values are a little higher than AW3D data; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas but worse in sloping regions; AW3D has the best performance in all the slope classes, a little better than SRTM1; with slope increasing

  9. Running Records and First Grade English Learners: An Analysis of Language Related Errors

    Science.gov (United States)

    Briceño, Allison; Klein, Adria F.

    2018-01-01

    The purpose of this study was to determine if first-grade English Learners made patterns of language related errors when reading, and if so, to identify those patterns and how teachers coded language related errors when analyzing English Learners' running records. Using research from the fields of both literacy and Second Language Acquisition, we…

  10. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, commonly used accuracy measures are summarised and a special review is made of the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation of the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with a user-selectable benchmark, performs as well as or better than other measures on the selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on the geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
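
    A minimal sketch of the UMBRAE computation as described in the record: each forecast error is bounded against a benchmark error via |e|/(|e| + |e*|), averaged, and then unscaled. The data and the choice of benchmark below are invented; edge cases (e.g. both errors exactly zero) are left out of this sketch.

```python
# Hedged sketch of the Unscaled Mean Bounded Relative Absolute Error (UMBRAE).
import numpy as np

def umbrae(actual, forecast, benchmark):
    e = np.abs(np.asarray(actual) - np.asarray(forecast))        # forecast errors
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))  # benchmark errors
    brae = e / (e + e_star)            # bounded relative absolute error in [0, 1]
    mbrae = brae.mean()                # mean bounded relative absolute error
    return mbrae / (1.0 - mbrae)       # unscale: < 1 means better than benchmark

actual = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
forecast = np.array([10.5, 11.5, 11.2, 13.4, 13.8])
benchmark = np.array([9.5, 10.0, 12.0, 11.0, 13.0])  # e.g. naive forecasts
print(umbrae(actual, forecast, benchmark))
```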

  11. A Neural Circuit Mechanism for the Involvements of Dopamine in Effort-Related Choices: Decay of Learned Values, Secondary Effects of Depletion, and Calculation of Temporal Difference Error

    Science.gov (United States)

    2018-01-01

    Abstract Dopamine has been suggested to be crucially involved in effort-related choices. Key findings are that dopamine depletion (i) changed preference for a high-cost, large-reward option to a low-cost, small-reward option, (ii) but not when the large-reward option was also low-cost or the small-reward option gave no reward, (iii) while increasing the latency in all the cases but only transiently, and (iv) that antagonism of either dopamine D1 or D2 receptors also specifically impaired selection of the high-cost, large-reward option. The underlying neural circuit mechanisms remain unclear. Here we show that findings i–iii can be explained by the dopaminergic representation of temporal-difference reward-prediction error (TD-RPE), whose mechanisms have now become clarified, if (1) the synaptic strengths storing the values of actions mildly decay in time and (2) the obtained-reward-representing excitatory input to dopamine neurons increases after dopamine depletion. The former is potentially caused by background neural activity–induced weak synaptic plasticity, and the latter is assumed to occur through post-depletion increase of neural activity in the pedunculopontine nucleus, where neurons representing obtained reward exist and presumably send excitatory projections to dopamine neurons. We further show that finding iv, which is nontrivial given the suggested distinct functions of the D1 and D2 corticostriatal pathways, can also be explained if we additionally assume a proposed mechanism of TD-RPE calculation, in which the D1 and D2 pathways encode the values of actions with a temporal difference. These results suggest a possible circuit mechanism for the involvements of dopamine in effort-related choices and, simultaneously, provide implications for the mechanisms of TD-RPE calculation. PMID:29468191
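
    A minimal sketch of the two model ingredients named in this record, mild decay of learned action values and a TD-style reward-prediction error, in an invented two-option effort task. This illustrates the general mechanism only, not the circuit model of the paper.

```python
# Hedged sketch: TD-style reward-prediction error with value decay in an
# invented effort-based choice task. All rates and reward sizes are invented.
import numpy as np

rng = np.random.default_rng(3)
alpha, decay = 0.3, 0.995
Q = {"high_cost_large": 0.0, "low_cost_small": 0.0}   # learned action values
rewards = {"high_cost_large": 2.0, "low_cost_small": 1.0}
costs = {"high_cost_large": 0.8, "low_cost_small": 0.1}

def softmax_choice(Q, beta=3.0):
    actions = list(Q)
    p = np.exp(beta * np.array([Q[a] for a in actions]))
    return rng.choice(actions, p=p / p.sum())

for trial in range(500):
    action = softmax_choice(Q)
    r = rewards[action] - costs[action]    # net payoff of the chosen option
    delta = r - Q[action]                  # TD-RPE for a one-step choice
    Q[action] += alpha * delta
    for a in Q:                            # mild decay of all stored values
        Q[a] *= decay

print(Q)
```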

  12. An investigation of Saudi Arabian MR radiographers' knowledge and confidence in relation to MR image-quality-related errors

    International Nuclear Information System (INIS)

    Alsharif, W.; Davis, M.; McGee, A.; Rainford, L.

    2017-01-01

    Objective: To investigate MR radiographers' current knowledge base and confidence level in relation to quality-related errors within MR images. Method: Thirty-five MR radiographers within 16 MRI departments in the Kingdom of Saudi Arabia (KSA) independently reviewed a prepared set of 25 MR images, naming the error, specifying the error-correction strategy, and scoring how confident they were in recognising the error and suggesting a correction strategy on a scale of 1–100. The datasets were obtained from MRI departments in the KSA to represent the range of images which depicted excellent, acceptable and poor image quality. Results: The findings demonstrated a low level of radiographer knowledge in identifying the types of quality errors and in suggesting an appropriate strategy to rectify those errors. The findings show that only 20% of the radiographers (n = 7) could correctly name the quality errors in 70% of the dataset, and none of the radiographers correctly specified the error-correction strategy in more than 68% of the MR datasets. The confidence level of the radiography participants in their ability to state the type of image quality error was significantly different (p < 0.001) for those who work in different hospital types. Conclusion: The findings of this study suggest there is a need to establish a national association for MR radiographers to monitor training and the development of postgraduate MRI education in Saudi Arabia to improve the current status of MR radiographers' knowledge and direct high quality service delivery. - Highlights: • MR radiographers recognised the existence of image quality related errors. • Few MR radiographers were able to correctly identify which image quality errors were being shown. • None of the MR radiographers was able to correctly specify the error-correction strategy for the image quality errors. • A low level of knowledge was demonstrated in identifying and rectifying image quality errors.

  13. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔT_act (the total field anomaly, i.e. the measurement given by total field magnetometers after removal of the main geomagnetic field T_0) can be approximated mathematically by ΔT_pro (the projection of the anomalous field vector onto the direction of the earth's normal field). In order to meet the demand for high-precision processing of magnetic prospecting data, the approximation error E between ΔT_act and ΔT_pro is studied in this research. Generally speaking, the error E is extremely small for anomalies not greater than about 0.2 T_0. However, the error E may be large in highly magnetic environments. This leads to significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made by using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔT_act is usually larger than that of ΔT_pro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is obvious and should not be ignored. It is also shown that the curves of ΔT_pro and the error E had a certain symmetry when the directions of magnetization and the geomagnetic field changed. To be more specific, E_max (the maximum of the error E) appeared above the center of the magnetic body when the magnetic parameters were fixed. Some other characteristics of the error E were discovered. For instance, the curve of E_max with respect to latitude was symmetrical on both sides of the magnetic equator, the extremum of E_max can always be found in the mid-latitudes, and so on. It is also demonstrated that the error E has a great influence on magnetic processing, transformation and inversion results. It is concluded that when the bodies have high magnetic susceptibilities, the error E can
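
    For reference, the quantities in this record are commonly defined as follows; the vector notation (T_0 for the main field, T_a for the anomalous field) is an assumption of this note, not taken from the paper:

```latex
% Standard definitions from the total-field-anomaly literature:
% \mathbf{T}_0 is the main geomagnetic field, \mathbf{T}_a the anomalous
% field, and \hat{\mathbf{t}}_0 the unit vector along \mathbf{T}_0.
\Delta T_{\mathrm{act}} = \lvert \mathbf{T}_0 + \mathbf{T}_a \rvert - \lvert \mathbf{T}_0 \rvert,
\qquad
\Delta T_{\mathrm{pro}} = \mathbf{T}_a \cdot \hat{\mathbf{t}}_0,
\qquad
E = \Delta T_{\mathrm{act}} - \Delta T_{\mathrm{pro}} .
```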

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  15. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
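
    Because the errors of the two methods were found to be uncorrelated, the combined estimate is the inverse-variance weighted average of the two, and a quick check reproduces the quoted 0.79-year figure. The algebra below is the standard weighting formula, not a derivation from the paper:

```latex
% Inverse-variance weighting of two uncorrelated age estimates
% (\sigma_H = 0.97 y for the hand method, \sigma_T = 1.35 y for third molars).
\hat{a} = \frac{\hat{a}_H/\sigma_H^2 + \hat{a}_T/\sigma_T^2}{1/\sigma_H^2 + 1/\sigma_T^2},
\qquad
\sigma_{\hat{a}}
= \left( \frac{1}{\sigma_H^2} + \frac{1}{\sigma_T^2} \right)^{-1/2}
= \left( \frac{1}{0.97^2} + \frac{1}{1.35^2} \right)^{-1/2}
\approx 0.79\ \text{years}.
```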

  16. Error-related negativity and tic history in pediatric obsessive-compulsive disorder.

    Science.gov (United States)

    Hanna, Gregory L; Carrasco, Melisa; Harbin, Shannon M; Nienhuis, Jenna K; LaRosa, Christina E; Chen, Poyu; Fitzgerald, Kate D; Gehring, William J

    2012-09-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. ERN amplitude was significantly increased in patients with OCD compared with healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and those with non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measurement that may serve as a biomarker for non-tic-related OCD. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  17. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  18. Relative power density distribution calculations of the Kori unit 1 pressurized water reactor with full-scope explicit modeling of monte carlo simulation

    International Nuclear Information System (INIS)

    Kim, J. O.; Kim, J. K.

    1997-01-01

    Relative power density distributions of the Kori unit 1 pressurized water reactor were calculated by Monte Carlo modeling with the MCNP code. The Kori unit 1 core is modeled in a three-dimensional representation of one-eighth of the reactor in-vessel components, with reflective boundaries at 0 and 45 degrees. The axial core model is based on half-core symmetry and is divided into four axial segments. The fission reaction density in each rod is calculated by following 100 cycles with 5,000 test neutrons in each cycle, after starting with a localized neutron source and ten noncontributing settle cycles. Relative assembly power distributions are calculated from the fission reaction densities of the rods in each assembly. After the 100-cycle calculation, the system converges to a k value of 1.00039 ± 0.00084. The relative assembly power distribution is nearly the same as that of the Kori unit 1 FSAR. The applicability of the full-scope Monte Carlo simulation to the power distribution calculation is examined by the relative root mean square error of 2.159%. (author)

  19. Age-related changes in error processing in young children: A school-based investigation

    Directory of Open Access Journals (Sweden)

    Jennie K. Grammer

    2014-07-01

    Full Text Available Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N = 96, mean age = 5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition, as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, the findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing.

  20. Improvements in the error calculation of the action of a kicked beam

    CERN Document Server

    Sherman, Alexander Charles

    2013-01-01

    This report details a new calculation of the action performed in the optics measurement and correction software. The action of a kicked beam is used to calculate the dynamic aperture and the detuning with amplitude. The current method of calculation has a large uncertainty due to the use of all BPMs (including those near interaction points and ones which are malfunctioning) and of the model beta function. In the new calculation, only good BPMs are kept and the beta function measured from phase is used; significant decreases are seen in the relative uncertainty of the action.

  1. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available In consideration of the fact that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factor analysis and classification system model. With data from accidents that occurred worldwide between 2008 and 2011, the correlations between human-error factors can be analyzed quantitatively using grey relational analysis. The results show that the main factors affecting pilot human error are, in order, preconditions for unsafe acts, unsafe supervision, organization, and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
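
    For readers unfamiliar with the method, Deng's grey relational grade (the standard formulation behind grey relational analysis) can be sketched as follows; the data and the resolution coefficient rho = 0.5 are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def grey_relational_grades(reference, factors, rho=0.5):
        """Deng's grey relational grade of each factor series against a
        reference series; the series are assumed already normalized."""
        ref = np.asarray(reference, dtype=float)
        X = np.asarray(factors, dtype=float)           # one row per factor
        delta = np.abs(X - ref)                        # absolute difference series
        d_min, d_max = delta.min(), delta.max()        # global extrema over all factors
        xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
        return xi.mean(axis=1)                         # grade = mean coefficient

    # Illustrative normalized data (not the actual accident statistics):
    reference = [1.0, 0.8, 0.9, 0.7]
    factors = [[0.9, 0.7, 0.8, 0.6],
               [0.5, 0.9, 0.4, 0.8]]
    print(grey_relational_grades(reference, factors))
    ```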

  2. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various other mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the X-C linkage model of eccentric shaft grinding is studied. By an inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is verified. The influence of the X-axis feed error, the C-axis feed error, and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.

  3. Working memory capacity and task goals modulate error-related ERPs.

    Science.gov (United States)

    Coleman, James R; Watson, Jason M; Strayer, David L

    2018-03-01

    The present study investigated individual differences in information processing following errant behavior. Participants were initially classified as high or as low working memory capacity using the Operation Span Task. In a subsequent session, they then performed a high congruency version of the flanker task under both speed and accuracy stress. We recorded ERPs and behavioral measures of accuracy and response time in the flanker task with a primary focus on processing following an error. The error-related negativity was larger for the high working memory capacity group than for the low working memory capacity group. The positivity following an error (Pe) was modulated to a greater extent by speed-accuracy instruction for the high working memory capacity group than for the low working memory capacity group. These data help to explicate the neural bases of individual differences in working memory capacity and cognitive control. © 2017 Society for Psychophysiological Research.

  4. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    Science.gov (United States)

    2016-03-01

    Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty. Master's thesis by Nicholas M. Chisler, March 2016.

  5. Effect of error in crack length measurement on maximum load fracture toughness of Zr-2.5Nb pressure tube material

    International Nuclear Information System (INIS)

    Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.

    2016-03-01

    Recently it was found that the maximum load toughness (J_max) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δa. To check the sensitivity of J_max to error in the Δa measurement, J_max was calculated assuming no crack growth up to the maximum load (P_max) for as-received and hydrogen-charged Zr-2.5Nb pressure tube material. For loads up to P_max, the J values calculated assuming no crack growth (J_NC) were slightly higher than those calculated from Δa measured using the DCPD technique (J_DCPD). In general, the error in the J calculation was found to increase exponentially with Δa. The error in the J_max calculation increased with an increase in Δa and a decrease in J_max. Based on the deformation theory of J, an analytic criterion was developed to check the insensitivity of J_max to error in Δa. A very good linear relation was found between the J_max calculated from Δa measured using the DCPD technique and the J_max calculated assuming no crack growth. This relation will be very useful for calculating J_max without measuring crack growth during fracture tests, especially for irradiated material. (author)

  6. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
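
    A hedged sketch of the screening-and-spread step for a single zonal-mean value follows: products within +/- 50% of the GPCP base estimate are retained, and their standard deviation is taken as the bias error. Whether the base estimate itself enters the statistics, and the example values, are assumptions made for illustration:

    ```python
    import numpy as np

    def bias_error_estimate(gpcp, others, tol=0.5):
        """Estimated bias error for one zonal-mean value: the standard deviation
        of the products lying within +/- tol of the GPCP base estimate."""
        included = [gpcp] + [p for p in others if abs(p - gpcp) <= tol * gpcp]
        s = np.std(included, ddof=0)     # systematic (bias) error estimate
        m = np.mean(included)            # mean precipitation
        return s, 100.0 * s / m          # absolute and relative (%) error

    # Illustrative monthly zonal means in mm/day (not real GPCP data);
    # the 9.9 value falls outside +/- 50% of the base and is excluded:
    s, rel = bias_error_estimate(5.0, [4.2, 5.6, 6.1, 9.9])
    print(f"s = {s:.2f} mm/day, s/m = {rel:.1f}%")
    ```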

  7. The orthopaedic error index: development and application of a novel national indicator for assessing the relative safety of hospital care using a cross-sectional approach.

    Science.gov (United States)

    Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz

    2013-11-21

    The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.
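
    As a rough sketch of how such an index might be computed and screened for outliers (the hospital scores and the mean-plus-two-standard-deviations outlier rule are hypothetical; the paper does not state its outlier criterion):

    ```python
    import statistics

    def orthopaedic_error_index(propensity, severity):
        """Index as described in the abstract: sum of error propensity and severity."""
        return propensity + severity

    # Hypothetical hospital index values (not NRLS data):
    indexes = {"A": 6.5, "B": 6.8, "C": 7.0, "D": 7.2,
               "E": 7.4, "F": 7.6, "G": 7.9, "H": 15.3}
    mean = statistics.mean(indexes.values())
    sd = statistics.stdev(indexes.values())
    # Flag hospitals more than two standard deviations above the mean:
    outliers = [h for h, x in indexes.items() if x > mean + 2 * sd]
    print(outliers)   # hospitals that may warrant further investigation
    ```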

  8. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5 °C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (R_s) error can cause a net radiation error as big as 26 W m⁻² when R_s ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ET_o) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ET_o equation. Therefore, the ET_o error varies between 65 and 85% of the R_n error as air temperature increases from about 20° to 40 °C. (author)
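
    The last relationship is straightforward to reproduce. A small sketch using the FAO-56 expression for the slope of the saturation vapour pressure curve and an assumed sea-level psychrometric constant of 0.066 kPa per °C (standard values, not taken from the paper) recovers the quoted 65-85% range:

    ```python
    import math

    def svp_slope(t_c):
        """Slope Delta of the saturation vapour pressure curve (kPa/degC)
        at air temperature t_c, using the FAO-56 formulation."""
        es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
        return 4098.0 * es / (t_c + 237.3) ** 2

    def eto_error_from_rn_error(rn_error, t_c, gamma=0.066):
        """ET_o error as W * (Rn error), with W = Delta / (Delta + gamma)."""
        delta = svp_slope(t_c)
        w = delta / (delta + gamma)
        return w * rn_error

    for t in (20.0, 40.0):
        print(t, eto_error_from_rn_error(26.0, t))  # for a 26 W m-2 Rn error
    ```

    At 20 °C this gives W ≈ 0.69 and at 40 °C W ≈ 0.86, consistent with the range stated in the abstract.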

  9. Errors in the calculation of new salary positions and performance premiums – 2017 MERIT exercise

    CERN Multimedia

    Staff Association

    2017-01-01

    Following the receipt of the letters dated May 12th announcing the qualification of their performance (MERIT 2017), and the notification of their salary slips for the month of May, several colleagues have come to us to enquire about the calculation of salary increases and performance premiums. After verification, the Staff Association informed the Management, in a meeting of the Standing Concertation Committee on June 1st, about errors owing to rounding in the applied formulas. James Purvis, Head of the HR department, published in the CERN Bulletin dated July 18th an article, under the heading “Better precision (rounding)”, that gives a short explanation of these rounding effects. Here, we wish to give more precise explanations. Advancement On the salary slips for the month of May, the calculations of the advancement and new salary positions were done, by the administrative computing services in the FAP department, on the basis of the salary, rounded to the nearest franc...
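
    Although the exact formulas applied by the administration are not reproduced here, the effect of rounding at different stages of a calculation is easy to illustrate; the salary and advancement rate below are hypothetical:

    ```python
    # Hypothetical illustration of the rounding effect: applying a 2.0%
    # advancement to a salary, rounding either before or after the multiplication.
    salary = 7123.45    # hypothetical monthly salary in CHF
    rate = 0.02         # hypothetical advancement rate

    round_then_apply = round(salary) * (1 + rate)   # 7265.46
    apply_then_round = round(salary * (1 + rate))   # 7266
    print(round_then_apply, apply_then_round)       # differ by about half a franc
    ```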

  10. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder (OCD)

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective The error-related negativity (ERN) is a negative deflection in the event-related potential following an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relationship of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. Method The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. Results ERN amplitude was significantly increased in OCD patients compared with healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in either patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. Conclusions The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measure that may serve as a biomarker for non-tic-related OCD. PMID:22917203

  11. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV

  12. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  13. Senior High School Students' Errors on the Use of Relative Words

    Science.gov (United States)

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  14. Dispersion relations in loop calculations

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1996-01-01

    These lecture notes give a pedagogical introduction to the use of dispersion relations in loop calculations. We first derive dispersion relations which allow us to recover the real part of a physical amplitude from the knowledge of its absorptive part along the branch cut. In perturbative calculations, the latter may be constructed by means of Cutkosky's rule, which is briefly discussed. For illustration, we apply this procedure at one loop to the photon vacuum-polarization function induced by leptons as well as to the γf anti-f vertex form factor generated by the exchange of a massive vector boson between the two fermion legs. We also show how the hadronic contribution to the photon vacuum polarization may be extracted from the total cross section of hadron production in e⁺e⁻ annihilation measured as a function of energy. Finally, we outline the application of dispersive techniques at the two-loop level, considering as an example the bosonic decay width of a high-mass Higgs boson. (author)

  15. Calculation error of collective effective dose of external exposure during works at 'Shelter' object

    International Nuclear Information System (INIS)

    Batij, V.G.; Derengovskij, V.V.; Kochnev, N.A.; Sizov, A.A.

    2001-01-01

    Collective effective dose (CED) error assessment is the most important task for the optimal planning of works under 'Shelter' object conditions. The main components of the CED error are as follows: the error in determining the factor for converting exposure dose to equivalent dose; the error in determining working hours under 'Shelter' object conditions; the error in determining the dose rate at workplaces; and the additional CED error introduced by the shielding of workplaces

  16. EPIC: an Error Propagation/Inquiry Code

    International Nuclear Information System (INIS)

    Baker, A.L.

    1985-01-01

    The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
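
    The final combination step is a plain sum of variances. A minimal sketch, with hypothetical variances for the measurement points around one MBA:

    ```python
    def mba_variance(component_variances):
        """Total materials-balance variance as the algebraic sum of the
        individually propagated measurement variances (independent terms)."""
        return sum(component_variances)

    # Illustrative propagated variances (kg^2) for the measurement points
    # around one materials balance area (hypothetical values):
    variances = [0.04, 0.01, 0.09, 0.02]
    total = mba_variance(variances)
    print(f"sigma_MB = {total ** 0.5:.3f} kg")  # standard deviation of the balance
    ```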

  17. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  18. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Science.gov (United States)

    2010-10-01

    Challenges to determinations of an insufficient regulatory fee payment or delinquent fees should be made in writing. A challenge to a determination that a party is delinquent in paying a standard regulatory fee...

  19. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
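
    The probability of undetectable error for a binary symmetric channel follows directly from the weight distribution: an error pattern goes undetected exactly when it is itself a nonzero codeword. A sketch using the weight distribution of the (7,4) Hamming code as a toy stand-in for the shortened codes studied in the paper:

    ```python
    def undetected_error_probability(weights, p):
        """P_ud for a binary symmetric channel with bit-error rate p, given
        the weight distribution A_i (i = 0..n) of a linear code; an error
        goes undetected when the error pattern is a nonzero codeword."""
        n = len(weights) - 1
        return sum(a * p**i * (1 - p)**(n - i)
                   for i, a in enumerate(weights) if i > 0)

    # Weight distribution of the (7,4) Hamming code (A_0=1, A_3=A_4=7, A_7=1),
    # used here only as a small illustration:
    A = [1, 0, 0, 7, 7, 0, 0, 1]
    for p in (1e-5, 1e-2, 0.5):
        print(p, undetected_error_probability(A, p))
    ```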

  20. Agriculture-related radiation dose calculations

    International Nuclear Information System (INIS)

    Furr, J.M.; Mayberry, J.J.; Waite, D.A.

    1987-10-01

    Estimates of radiation dose to the public must be made at each stage in the identification and qualification process leading to the siting of a high-level nuclear waste repository. Specifically considering the ingestion pathway, this paper examines questions of the reliability and adequacy of dose calculations in relation to five stages of data availability (geologic province, region, area, location, and mass balance) and three methods of calculation (population, population/food production, and food production driven). Calculations were done using the model PABLM with data for the Permian and Palo Duro Basins and the Deaf Smith County area. The conclusions are that the extra effort expended in gathering agricultural data at succeeding environmental characterization levels does not appear justified, since dose estimates do not differ greatly; that the effort would be better spent determining the usage of the food types that contribute most to the total dose; and that the consumption rate and the air dispersion factor are critical to the assessment of radiation dose via the ingestion pathway. 17 refs., 9 figs., 32 tabs

  1. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7°). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10°), as typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error

  2. Numerical shoves and countershoves in electron transport calculations

    International Nuclear Information System (INIS)

    Filippone, W.L.

    1986-01-01

    The justification for applying the relatively complex (compared to S_n) streaming ray (SR) algorithm to electron transport problems is its potential for doing rapid and accurate calculations. Because of the Lagrangian treatment of the cell-uncollided electrons, the only significant sources of error are the numerical treatment of the scattering kernel and the spatial differencing scheme used for the cell-collided electrons. Considerable progress has been made in reducing the former source of error. If one is willing to pay the price, the latter source of error can be reduced to any desired level by refining the mesh size or by using high-order differencing schemes. Here the method of numerical shoves and countershoves is introduced, which reduces spatial differencing errors using relatively little additional computational effort

  3. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  4. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  5. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  6. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents whose accident progression involves a particular design inadequacy

  7. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    International Nuclear Information System (INIS)

    Kiyko, V V; Kislov, V I; Ofitserov, E N

    2015-01-01

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  8. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    Energy Technology Data Exchange (ETDEWEB)

    Kiyko, V V; Kislov, V I; Ofitserov, E N [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-08-31

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where the Hartmann sensor is used as a wavefront sensor, the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  9. The refractive index in electron microscopy and the errors of its approximations

    Energy Technology Data Exchange (ETDEWEB)

    Lentzen, M.

    2017-05-15

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  10. The refractive index in electron microscopy and the errors of its approximations

    International Nuclear Information System (INIS)

    Lentzen, M.

    2017-01-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  11. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

    Full Text Available The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized, based on a researcher-developed error-coding scheme, into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent error types committed by IELTS candidates were related to word choice and verb forms. The pedagogical implications highlight the analysis of EFL learners' writing errors as a useful basis for instructional purposes, including the creation of teaching materials that are in line with learners' linguistic strengths and weaknesses.

  12. Evaluation of students' knowledge about paediatric dosage calculations.

    Science.gov (United States)

    Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak

    2018-01-01

    Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations is increased. In paediatric patients, an overdose prescribed regardless of the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay the treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covers a population consisting of all 148 third-year students of the bachelor's degree programme in May 2015. Drug dose calculation questions from exam papers, including 3 open-ended questions on dosage calculation problems addressing 5 variables, were distributed to the students, and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures the right dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it wrong and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to the ml/dzy calculation were correct. Moreover, the students' skills in the four arithmetic operations were assessed, and 68.2% of the students were determined to have found the correct answer. When the relations among the questions on medication were examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimals). This result means that, as dosage calculations are based on decimal values, calculations may be erroneous by a factor of ten when the decimal point is placed wrongly. Moreover, it
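
    The weight-based calculation the students were tested on reduces to a product and a range check; a minimal sketch follows, in which the drug, doses, and weight are hypothetical:

    ```python
    def weight_based_dose(weight_kg, mg_per_kg):
        """Single dose from a weight-based prescription: dose = weight x mg/kg."""
        return weight_kg * mg_per_kg

    def within_safe_range(dose_mg, low_mg_per_kg, high_mg_per_kg, weight_kg):
        """Check a calculated dose against the prescribed safe dose range."""
        return low_mg_per_kg * weight_kg <= dose_mg <= high_mg_per_kg * weight_kg

    # Hypothetical order: 15 mg/kg for a 12 kg child, safe range 10-20 mg/kg.
    dose = weight_based_dose(12, 15)                    # 180 mg
    print(dose, within_safe_range(dose, 10, 20, 12))    # 180 True
    # A misplaced decimal point (1.5 mg/kg) gives a tenfold error of 18 mg,
    # which the range check flags:
    print(within_safe_range(weight_based_dose(12, 1.5), 10, 20, 12))  # False
    ```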

  13. On the Spatial and Temporal Sampling Errors of Remotely Sensed Precipitation Products

    Directory of Open Access Journals (Sweden)

    Ali Behrangi

    2017-11-01

    Full Text Available Observation with coarse spatial and temporal sampling can cause large errors in the quantification of the amount, intensity, and duration of precipitation events. In this study, the errors resulting from temporal and spatial sampling of precipitation events were quantified and examined using the latest version (V4) of the Global Precipitation Measurement (GPM) mission integrated multi-satellite retrievals for GPM (IMERG), which is available since spring of 2014. Relative mean square error was calculated at 0.1° × 0.1° every 0.5 h between the degraded (temporally and spatially) and original IMERG products. The temporal and spatial degradation was performed by producing three-hour (T3), six-hour (T6), 0.5° × 0.5° (S5), and 1.0° × 1.0° (S10) maps. The results show generally larger errors over land than ocean, especially over mountainous regions. The relative error of T6 is almost 20% larger than T3 over tropical land, but is smaller in higher latitudes. Over land the relative error of T6 is larger than S5 across all latitudes, while T6 has larger relative error than S10 poleward of 20°S-20°N. Similarly, the relative error of T3 exceeds S5 poleward of 20°S-20°N, but does not exceed S10, except in very high latitudes. Similar results are also seen over ocean, but the error ratios are generally less sensitive to seasonal changes. The results also show that the spatial and temporal relative errors are not highly correlated. Overall, lower correlations between the spatial and temporal relative errors are observed over ocean than over land. Quantification of such spatiotemporal effects provides additional insights into evaluation studies, especially when different products are cross-compared at a range of spatiotemporal scales.
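
    A hedged sketch of the degradation-and-comparison step: a half-hourly series is block-averaged to three-hourly (the T3 case) and a relative mean square error is formed against the original. Normalizing by the mean squared original value is an assumption; the paper does not spell out its exact normalization:

    ```python
    import numpy as np

    def relative_mse(degraded, original, eps=1e-9):
        """Relative mean square error between a degraded (coarser-sampling)
        field and the original field on the same grid."""
        d = np.asarray(degraded, dtype=float)
        o = np.asarray(original, dtype=float)
        return np.mean((d - o) ** 2) / (np.mean(o ** 2) + eps)

    # Illustrative: degrade 24 h of synthetic half-hourly rain to 3-hourly
    # by block-averaging (6 half-hour steps per 3-hour block):
    rng = np.random.default_rng(0)
    original = rng.gamma(2.0, 1.0, size=48)
    deg3h = np.repeat(original.reshape(-1, 6).mean(axis=1), 6)  # T3 degradation
    print(relative_mse(deg3h, original))
    ```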

  14. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination; IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput becomes under error-free conditions. However, large data frames can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size needed for temporarily storing transmitted or received data can be estimated. In this paper, we present a method that features a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with the measured ones for actual wireless boards based on the IEEE 802.11-based original MAC protocol. We also calculate throughput values for larger values of the burst transmission parameters, outside the assignable values of the wireless boards, and find the appropriate values of the burst transmission parameters.
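
    The paper's algorithm is its own, but the flavour of such an estimate can be sketched as successfully delivered payload bits per burst duration, with the frame error rate derived from the bit-error rate under an independent-bit-errors assumption. All parameter values below are illustrative, and retransmissions are ignored:

    ```python
    def effective_throughput(n_frames, payload_bits, ber,
                             frame_time_s, ack_overhead_s, header_bits=288):
        """Simple effective-throughput estimate for a Burst ACK scheme:
        successfully delivered payload bits per unit burst time. The frame
        error rate is derived from the bit-error rate assuming independent
        bit errors; header_bits is an assumed MAC/PHY overhead per frame."""
        frame_bits = payload_bits + header_bits
        fer = 1.0 - (1.0 - ber) ** frame_bits          # frame error rate
        good_bits = n_frames * payload_bits * (1.0 - fer)
        burst_time = n_frames * frame_time_s + ack_overhead_s
        return good_bits / burst_time                   # bits per second

    # Illustrative parameters (not taken from the paper):
    print(effective_throughput(8, 12000, 1e-5, 1.2e-3, 0.3e-3))
    ```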

  15. Amplitude of Accommodation and its Relation to Refractive Errors

    Directory of Open Access Journals (Sweden)

    Abraham Lekha

    2005-01-01

    Full Text Available Aims: To evaluate the relationship between the amplitude of accommodation and refractive errors in the peri-presbyopic age group. Materials and Methods: Three hundred and sixteen right eyes of 316 consecutive patients in the age group 35-50 years who attended our outpatient clinic were studied. Emmetropes, hypermetropes and myopes with best-corrected visual acuity of 6/6 J1 in both eyes were included. The amplitude of accommodation (AA) was calculated by measuring the near point of accommodation (NPA). In patients with more than ±2 diopter sphere correction for distance, the NPA was also measured using appropriate soft contact lenses. Results: There was a statistically significant difference in AA between myopes and hypermetropes; the remaining comparisons did not reach significance (P > 0.5). Conclusion: Our study showed a higher amplitude of accommodation among myopes between 35 and 44 years compared to emmetropes and hypermetropes

  16. Evaluation of errors set-up and setting margins calculation in treatments 3-D conformal radiotherapy; Evaluacion de errores de set-up y calculo de margenes de configuracion en tratamientos de radioterapia CONFORMADA 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Donis, S.; Robayna, B.; Gonzalez, A.; Hernandez Armas, J.

    2011-07-01

    The use of IGRT techniques provides knowledge of the errors made in patient positioning, enables population studies, and allows the margins for each population to be estimated. In this paper we evaluate set-up errors at 3 different sites and, from these, calculate the set-up margins (SM).
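
    The abstract does not state which margin recipe is used; a common choice in this setting is the van Herk formula, shown here as an illustrative sketch with hypothetical population standard deviations:

    ```python
    def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
        """A common CTV-to-PTV margin recipe (van Herk): SM = 2.5*Sigma + 0.7*sigma,
        with Sigma the SD of systematic set-up errors over the population and
        sigma the SD of random (day-to-day) set-up errors."""
        return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

    # Illustrative population SDs (mm) for one anatomical site:
    print(van_herk_margin(2.0, 3.0))   # 7.1 mm margin
    ```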

  17. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    ALPO is the Average Leaf Pair Opening (the concept of ALPO was previously introduced by us in Med. Phys. 28, 2220-2226 (2001)). Therefore, dose errors associated with RLP errors are larger for fields requiring small leaf gaps. For an N-field IMRT plan, we demonstrate that the total fluence error (neglecting inhomogeneities and scatter) is proportional to 1/√N, where N is the number of fields, which slightly reduces the impact of the RLP errors of individual fields on the total fluence error. We tested and applied the analytical apparatus in the context of the commercial inverse treatment planning systems used in our clinics (Helios™ and BrainScan™). We determined the actual distribution of leaf-positional errors by studying the MLC controller (Varian Mark II and Brainlab Novalis MLCs) log files created by the controller after each field delivery. The analytically derived relationship between fluence error and RLP errors was confirmed by numerical simulations. The equivalence of the relative fluence error to the relative dose error was verified by a direct dose calculation. We also experimentally verified the fluences derived from the log file data by comparing them to film data
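
    The 1/√N scaling is simple to state in code; the per-field relative error below is illustrative:

    ```python
    import math

    def total_relative_fluence_error(per_field_error, n_fields):
        """If each of N fields carries an independent relative fluence error e,
        the relative error of the summed fluence scales as e / sqrt(N)."""
        return per_field_error / math.sqrt(n_fields)

    # Illustrative 3% per-field relative error:
    for n in (1, 5, 9):
        print(n, total_relative_fluence_error(0.03, n))
    ```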

  18. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment on the position measurement of a particle using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually expresses the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement in quantifying the error and disturbance in the quantum measurement. We clarify the estimation process implicitly involved in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  19. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), which was originally only a tool for constructing and representing models of decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, and a method for quantitative assessment of the ID is introduced, which can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach

  20. Task types and error types involved in the human-related unplanned reactor trip events

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2008-01-01

    In this paper, the contributions of the task types and error types involved in the human-related unplanned reactor trip events that occurred between 1986 and 2006 in Korean nuclear power plants are analysed in order to establish a strategy for reducing human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of the event descriptions. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) account for about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating the possible (negative) impacts of planned or erroneous actions, as well as an appropriate human error prediction technique, should be developed

  1. Task types and error types involved in the human-related unplanned reactor trip events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-12-15

    In this paper, the contributions of the task types and error types involved in the human-related unplanned reactor trip events that occurred between 1986 and 2006 in Korean nuclear power plants are analysed in order to establish a strategy for reducing human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of the event descriptions. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) account for about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating the possible (negative) impacts of planned or erroneous actions, as well as an appropriate human error prediction technique, should be developed.

  2. Error-related negativities during spelling judgments expose orthographic knowledge.

    Science.gov (United States)

    Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin

    2014-02-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Human error evaluation for muster in emergency situations applying the human error probability index (HEPI) in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation involving fire and explosion at the oil company warehouse in Hamadan city, applying the human error probability index (HEPI). Material and Method: First, the scenario of the fire-and-explosion emergency at the oil company warehouse was designed, and a manoeuvre against it was performed. The scaled muster questionnaire for the manoeuvre was completed in the next stage. The collected data were analysed to calculate the probability of success for the 18 actions required in an emergency situation, from the starting point of the muster to the final action of reaching a temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase, 32.4%) and the lowest to detecting the alarm (awareness phase, 1.8%). The highest severity of error was in the evaluation phase, and the lowest severity was in the awareness and recovery phases. The maximum risk level was related to evaluating the exit routes, selecting one route, and choosing another exit route if needed, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of errors in the exit phases of an emergency situation, the following actions are recommended on the basis of the findings of this study: periodic evaluation of the exit phases and their modification if necessary, and conducting more manoeuvres and analysing their results along with sufficient feedback to the employees.
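
    To make the arithmetic concrete, the sketch below (not taken from the paper) combines per-action success probabilities into an overall muster error probability, assuming the 18 actions are independent; only the two anchored values come from the abstract, and the remaining probabilities are placeholders.

        # Minimal sketch: overall muster error from per-action success
        # probabilities, assuming independent actions (hypothetical values
        # except the two taken from the abstract).
        import math

        p_success = [0.982, 0.676] + [0.93] * 16   # 0.982 = detecting the alarm
        # (1.8% error); 0.676 = making the workplace safe (32.4% error);
        # the other 16 action probabilities are placeholders.

        p_all_ok = math.prod(p_success)   # probability that every action succeeds
        print(f"overall muster success probability: {p_all_ok:.3f}")
        print(f"overall muster error probability:   {1 - p_all_ok:.3f}")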

  4. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. The paper closes with the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in this work.

  5. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
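
    The per-voxel rms figure can be reproduced in miniature: the sketch below (an assumption-laden toy, not the authors' simulation) repeats a one-dimensional spot-scanned delivery with random spot-position and intensity errors and reports the rms dose variation per 2.5 mm voxel.

        # Toy model: rms dose error per voxel from repeated deliveries with
        # random spot-position and intensity errors (all parameters assumed).
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-40, 40, 33)            # 2.5 mm voxels, 1D for brevity
        spots = np.arange(-35.0, 36.0, 5.0)     # nominal spot centres (mm)
        sigma = 5.0                             # assumed pencil-beam width (mm)

        def deliver(pos_err_mm=0.5, intens_err=0.01):
            dose = np.zeros_like(x)
            for s in spots:
                c = s + rng.normal(0.0, pos_err_mm)    # spot-position error
                w = 1.0 + rng.normal(0.0, intens_err)  # intensity fluctuation
                dose += w * np.exp(-0.5 * ((x - c) / sigma) ** 2)
            return dose

        runs = np.array([deliver() for _ in range(200)])
        rms = runs.std(axis=0)                  # rms variation at each voxel
        print("max rms error as % of central dose:",
              round(100 * rms.max() / runs.mean(axis=0).max(), 2))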

  6. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  7. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

    A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group for which analytical solutions were possible. A computer code SLAB was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms as far as h². It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h⁴. In this case, the critical parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
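
    The quoted convergence order is easy to verify numerically; the sketch below (hypothetical one-group slab constants, not the SLAB code) discretises the one-group diffusion equation with standard finite differences and shows the criticality parameter error falling roughly fourfold each time the mesh size h is halved.

        # One-group, 1D slab: k-eff error of the standard finite-difference
        # scheme varies as h^2 (assumed cross sections, zero-flux boundaries).
        import numpy as np

        D, siga, nusigf, L = 1.0, 0.05, 0.06, 100.0
        B = np.pi / L
        k_exact = nusigf / (siga + D * B**2)    # analytical bare-slab result

        def k_fd(n):                            # n interior mesh points
            h = L / (n + 1)
            lap = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
            A = D * lap + siga * np.eye(n)      # loss operator
            return nusigf / np.linalg.eigvalsh(A).min()

        for n in (20, 40, 80):                  # h roughly halves each step
            print(n, abs(k_fd(n) - k_exact))    # error drops ~4x per halving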

  8. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange-correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  9. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-07-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.
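
    The core propagation step can be written in a few lines; the sketch below (hypothetical figures, not the MAVARIC spreadsheet) propagates measurement error standard deviations through a simple materials balance, including one assumed correlation between transfer terms.

        # First-order error propagation for the materials balance
        # MB = begin + receipts - shipments - end (all numbers hypothetical).
        import numpy as np

        values = np.array([100.0, 40.0, 35.0, 104.0])   # kg SNM per term
        sigmas = np.array([0.5, 0.3, 0.3, 0.5])         # error std deviations
        c = np.array([1.0, 1.0, -1.0, -1.0])            # signs in the balance

        mb = c @ values                                 # materials balance (kg)
        var_mb = np.sum((c * sigmas) ** 2)              # uncorrelated terms
        rho = 0.2                       # assumed receipts/shipments correlation
        var_mb += 2 * c[1] * c[2] * rho * sigmas[1] * sigmas[2]
        print(f"MB = {mb:.2f} kg, sigma(MB) = {np.sqrt(var_mb):.3f} kg")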

  10. Variance and covariance calculations for nuclear materials accounting using 'MAVARIC'

    International Nuclear Information System (INIS)

    Nasseri, K.K.

    1987-01-01

    Determination of the detection sensitivity of a materials accounting system to the loss of special nuclear material (SNM) requires (1) obtaining a relation for the variance of the materials balance by propagation of the instrument errors for the measured quantities that appear in the materials balance equation and (2) substituting measured values and their error standard deviations into this relation and calculating the variance of the materials balance. MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet, designed using the second release of Lotus 1-2-3, that significantly reduces the effort required to make the necessary variance (and covariance) calculations needed to determine the detection sensitivity of a materials accounting system. Predefined macros within the spreadsheet allow the user to carry out long, tedious procedures with only a few keystrokes. MAVARIC requires that the user enter the following data into one of four data tables, depending on the type of the term in the materials balance equation: the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements made during an accounting period. The user can also specify if there are correlations between transfer terms. Based on these data entries, MAVARIC can calculate the variance of the materials balance and the square root of this variance, from which the detection sensitivity of the accounting system can be determined.

  11. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  12. Design and application of location error teaching aids in measuring and visualization

    Directory of Open Access Journals (Sweden)

    Yu Fengning

    2015-01-01

    As an abstract concept, 'location error' is considered an important element that is very difficult to understand and apply. This paper designs and develops an instrument to measure location error. Location error is affected by the positioning method and the selection of the reference, so the positioning element is chosen by rotating a disk. The tiny movement is transferred by a grating ruler and, through PLC programming, the error is shown on a text display, which also helps students understand the positioning principle and the concepts related to location error. After comparing the measurement results with theoretical calculations and analysing the measurement accuracy, the paper concludes that the teaching aid is reliable and well worth promoting.

  13. Uncertainty of decay heat calculations originating from errors in the nuclear data and the yields of individual fission products

    International Nuclear Information System (INIS)

    Rudstam, G.

    1979-01-01

    The calculation of the abundance pattern of the fission products, with due account taken of feeding from the fission of ²³⁵U, ²³⁸U, and ²³⁹Pu, from the decay of parent nuclei, from neutron capture, and from delayed-neutron emission, is described. By means of the abundances and the average beta and gamma energies, the decay heat in nuclear fuel is evaluated along with its error, derived from the uncertainties of the fission yields and the nuclear properties of the individual fission products. (author)
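
    As a worked illustration of the propagation step (hypothetical nuclide data, not the paper's library), the decay heat is the sum over fission products of decay constant times abundance times mean emitted energy, and its variance follows from the relative uncertainties of each factor.

        # Decay heat H = sum(lambda_i * N_i * E_i) with first-order error
        # propagation, assuming independent uncertainties (all data invented).
        import numpy as np

        lam = np.array([1e-2, 3e-3, 5e-4])          # decay constants (1/s)
        N = np.array([1e18, 4e18, 2e19])            # abundances (atoms)
        E = np.array([2.1, 1.4, 0.8]) * 1.602e-13   # mean beta+gamma energy (J)
        relN = np.array([0.05, 0.10, 0.08])         # relative yield errors
        relE = np.array([0.03, 0.04, 0.06])         # relative energy errors

        P = lam * N * E                             # per-nuclide heat (W)
        H = P.sum()
        varH = np.sum(P**2 * (relN**2 + relE**2))   # independent factors
        print(f"decay heat {H:.3e} W +/- {np.sqrt(varH):.2e} W")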

  14. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in a war zone located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using hierarchical task analysis (HTA), and the related possible errors were assessed using ATHEANA. Results: The demining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralizing operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in demining operations can be attributed to various factors such as poor weather and operating conditions (such as outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in demining operations, the aforementioned factors should be managed properly.

  15. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    Abstract. The questions most important in practice for the reliable estimation of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and ways of calculation are offered that allow the best final results to be obtained at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  16. Application of nomograms to calculate radiography parameters

    International Nuclear Information System (INIS)

    Voronin, S.A.; Orlov, K.P.; Petukhov, V.I.; Khomchenkov, Yu.F.; Meshalkin, I.A.; Grachev, A.V.; Akopov, V.S.; Majorov, A.N.

    1979-01-01

    A method for calculating radiography parameters with the help of nomograms, usable for practical application under laboratory and industrial conditions, is proposed. Nomograms are developed for determining the following parameters: relative sensitivity, general image unsharpness, permissible difference of blackening density between the centre and the edge of the picture (ΔD), picture contrast, focal distance, item thickness, radiation-physical parameter, dose build-up factor, groove dimension, and error. An experimental test has been carried out to evaluate the results obtained with the nomograms. Steel items from 25 to 79 mm thick have been subjected to testing, with ¹⁹¹Ir used as a source. Comparison of the calculated and experimental results has shown a discrepancy in the sensitivity values, caused by the a priori ΔD_min index and by the error inherent in graphical plotting on a nomogram.

  17. Errores innatos del metabolismo de las purinas y otras enfermedades relacionadas [Inborn purine metabolism errors and other related diseases]

    Directory of Open Access Journals (Sweden)

    Jiovanna Contreras Roura

    2012-06-01

    Indications for suspecting these disorders include delayed growth, recurrent infections, self-mutilation, immunodeficiencies, unexplained haemolytic anaemia, gout-related arthritis, family history, consanguinity, and adverse reactions to purine-analogue drugs. The study of these diseases generally begins by quantifying uric acid, the final product of purine metabolism in human beings, in serum and in urine. Diet and drug consumption are among the pathological, physiological and clinical conditions capable of changing the level of this compound. This review was intended to disseminate information on the inborn purine metabolism errors as well as to facilitate the interpretation of uric acid levels and of the other biochemical markers that make the diagnosis of these diseases possible. Tables are included relating these diseases to the excretory levels of uric acid and other biochemical markers, the altered enzymes, the clinical symptoms, the mode of inheritance and, in some cases, the suggested treatment. This paper allows us to affirm that variations in uric acid levels and the presence of other biochemical markers in urine are important tools in screening for some inborn purine metabolism errors, as well as for other related pathological conditions.

  18. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  19. Sensitivity of trajectory calculations to the temporal frequency of wind data

    Science.gov (United States)

    Doty, Kevin G.; Perkey, Donald J.

    1993-01-01

    A mesoscale primitive equation model is used to create a 36-h simulation of the three-dimensional wind field of an intense maritime extratropical cyclone. The control experiment uses the simulated wind field every 15 min in a trajectory model to calculate back trajectories from various horizontal and vertical positions of interest relative to synoptic features of the storm. The latter trajectories are compared to trajectories that were calculated with the simulated wind data degraded in time to 30 min, 1 h, 3 h, 6 h, and 12 h. Various error statistics reveal significant deterioration in trajectory accuracy between trajectories calculated with 1- and 3-h data frequencies. Trajectories calculated with 15-min, 30-min, and 1-h data frequencies yielded similar results, while trajectories calculated with data time frequencies of 3 h and greater yielded results with unacceptably large errors.
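
    The degradation with coarser wind data can be illustrated with a toy field (an analytic, time-varying wind, not the mesoscale simulation): the sketch below integrates a trajectory with winds stored every dt_data seconds and linearly interpolated in time, then compares endpoints against the 15-min control run.

        # Toy trajectory sensitivity to the temporal frequency of wind data.
        import numpy as np

        def wind(t):                        # rotating wind, 6-h period (m/s)
            return np.array([10 * np.cos(2 * np.pi * t / 21600),
                             10 * np.sin(2 * np.pi * t / 21600)])

        def trajectory(dt_data, T=36 * 3600, dt=60):
            ts = np.arange(0, T + dt_data, dt_data)   # times with stored wind
            ws = np.array([wind(t) for t in ts])
            x = np.zeros(2)
            for t in np.arange(0, T, dt):
                w = np.array([np.interp(t, ts, ws[:, i]) for i in (0, 1)])
                x = x + w * dt                        # forward Euler step
            return x

        ref = trajectory(15 * 60)                     # 15-min control run
        for hours in (1, 3, 6, 12):
            err = np.linalg.norm(trajectory(hours * 3600) - ref) / 1000.0
            print(f"{hours:2d}-h wind data: endpoint error {err:8.1f} km")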

  20. Undergraduate paramedic students cannot do drug calculations

    Science.gov (United States)

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and finally computational errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and the basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographical data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
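
    For readers unfamiliar with the task content, a typical calculation of the kind tested looks as follows (a generic worked example, not an item from the questionnaire); a conceptual error corresponds to setting the equation up wrongly, while an arithmetical or computational error corresponds to evaluating it wrongly.

        # Volume to administer = ordered dose / stock dose * stock volume.
        ordered_dose_mg = 2.5     # ordered: 2.5 mg
        stock_dose_mg = 10.0      # stock ampoule contains 10 mg ...
        stock_volume_ml = 2.0     # ... in 2 mL of solution
        volume_ml = ordered_dose_mg / stock_dose_mg * stock_volume_ml
        print(volume_ml, "mL")    # 0.5 mL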

  1. Undergraduate paramedic students cannot do drug calculations.

    Science.gov (United States)

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from the information given, arithmetical errors involve an inability to operate a given equation, and finally computational errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and the basic mathematical equations normally required in the workplace. A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographical data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.

  2. Propagation of errors from a null balance terahertz reflectometer to a sample's relative water content

    International Nuclear Information System (INIS)

    Hadjiloucas, S; Walker, G C; Bowen, J W; Zafiropoulos, A

    2009-01-01

    The THz water content index of a sample is defined, and the advantages of using such a metric in estimating a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the errors in estimating the sample water content index.

  3. Error-related negativity varies with the activation of gender stereotypes.

    Science.gov (United States)

    Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin

    2008-09-19

    The error-related negativity (ERN) was suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus is a picture of a male or female face, and the target stimulus is either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials is significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN elicited in this task has two sources: operation errors and the conflict between the gender stereotype activation and non-prejudice beliefs. The gender stereotype activation may be the key factor leading to this difference in the ERN. In other words, the stereotype activation in this experimental paradigm may be indexed by the ERN.

  4. Examination of the program to avoid round-off error

    International Nuclear Information System (INIS)

    Shiota, Y.; Kusunoki, T.; Tabushi, K.; Shimomura, K.; Kitou, S.

    2005-01-01

    The MACRO programs that express simple shapes such as PLANE, SPHERE, CYLINDER and CONE are used to form the geometry in EGS4. Each MACRO calculates the values needed by the main code to recognize the configured geometry. This calculation process may generate errors due to the effect of round-off. The SPHERE, CYLINDER and CONE MACROs include a function to avoid this effect, but the PLANE MACRO does not. The effect of the round-off error is usually small in the case of the PLANE MACRO; however, a slanted plane may amplify it. We have therefore written the DELPLANE program, which includes a function to avoid the effect of the round-off error. In this study, we examine the DELPLANE program using a simple geometry with a slanted plane. As a result, the normal PLANE MACRO generates round-off errors, whereas the DELPLANE program does not. (author)
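
    The failure mode is easy to demonstrate: the sketch below (a generic illustration, not the DELPLANE source) classifies a point against a slanted plane, where a strict sign test can flip because of round-off while a tolerance scaled to the geometry does not.

        # Point-versus-plane test with and without a round-off tolerance.
        import numpy as np

        n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # slanted plane normal
        d = 10.0 / 3.0                                 # offset, inexact in binary

        def side(p, eps=0.0):
            s = np.dot(n, p) - d                 # signed distance to the plane
            if abs(s) <= eps:
                return "on"                      # near-zero treated as on-plane
            return "inside" if s < 0 else "outside"

        p = np.array([10.0, 0.0, 0.0]) / np.sqrt(3.0)  # on the plane, analytically
        print(side(p))              # may report inside/outside due to round-off
        print(side(p, eps=1e-12))   # tolerant test reports "on"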

  5. Approaches to relativistic positioning around Earth and error estimations

    Science.gov (United States)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  6. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for the VC model; 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), which is caused by the errors of the variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on the truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the uncertainty modelling and analysis-related theories of geographic information science.
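
    The discrete error is easy to observe on a synthetic surface with a known volume; the sketch below (assumed surface and resolutions, not the paper's experiment) applies the trapezoidal double rule at three DEM resolutions.

        # Trapezoidal double rule on a synthetic surface with analytic volume.
        import numpy as np

        A, R = 50.0, 400.0                        # paraboloid height and radius
        true_vol = np.pi * A * R**2 / 2           # analytic volume

        for res in (50.0, 25.0, 10.0):            # DEM resolutions (m)
            ax = np.arange(-500.0, 500.0 + res, res)
            X, Y = np.meshgrid(ax, ax)
            Z = np.maximum(0.0, A * (1 - (X**2 + Y**2) / R**2))
            w = np.ones(ax.size)
            w[[0, -1]] = 0.5                      # trapezoidal end weights
            vol = res**2 * np.einsum('i,j,ij->', w, w, Z)
            print(res, abs(vol - true_vol) / true_vol)   # DE shrinks with res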

  7. The content of lexical stimuli and self-reported physiological state modulate error-related negativity amplitude.

    Science.gov (United States)

    Benau, Erik M; Moelter, Stephen T

    2016-09-01

    The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS: wakefulness, hunger, or thirst) corresponds with ERN amplitude and the type of lexical stimuli. Participants completed a SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not impacted by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Future studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful technique for future research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
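
    The degrees-of-freedom check that exposed the discrepancies can be expressed in one line; the sketch below (a minimal illustration in the spirit of the authors' procedure, not their code) flags an independent-samples t test whose reported df do not match the reported group sizes.

        # For an independent-samples t test, df should equal n1 + n2 - 2.
        def df_consistent(reported_df, n1, n2):
            """True when reported df match the reported sample sizes."""
            return reported_df == n1 + n2 - 2

        print(df_consistent(38, 20, 20))   # True: no sign of excluded cases
        print(df_consistent(35, 20, 20))   # False: 3 values unaccounted for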

  9. Software platform for managing the classification of error-related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.
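
    The platform itself is written in C#; the sketch below is a minimal Python analogue of its final stage, training one of the supported classifier types (k-nearest neighbour) on a feature matrix with up to six features per EP recording. All data below are synthetic.

        # k-NN classification of EP feature vectors (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (40, 6)),    # correct-action class
                       rng.normal(1, 1, (40, 6))])   # error-observation class
        y = np.array([0] * 40 + [1] * 40)

        def knn_predict(X_train, y_train, x, k=5):
            d = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
            nearest = y_train[np.argsort(d)[:k]]
            return np.bincount(nearest).argmax()     # majority vote

        x_new = rng.normal(1, 1, 6)                  # unlabelled EP features
        print("predicted class:", knn_predict(X, y, x_new))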

  10. A Relative View on Tracking Error

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried); I. Pouchkarev (Igor)

    2005-01-01

    When delegating investment decisions to a professional manager, investors often anchor their mandate to a specific benchmark. The manager's exposure to risk is controlled by means of a tracking error volatility constraint. It depends on market conditions whether this constraint is

  11. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    Science.gov (United States)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and freedom from temporally accumulated errors. Related research indicates that this method can be an important complement, or even an alternative, to traditional sensors for general accuracy requirements (such as small UAV navigation). The application of the single-antenna attitude determination method to low-dynamic carriers has only just started. Unlike the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the earth. It therefore inevitably contains some deviations compared with the real attitude angle. In low-dynamic applications, these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: the measurement error, the offset error, and the lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies but lack theoretical support. In this paper, we provide a quantitative description of the three types of errors and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: error compensation; single-antenna; GNSS; attitude determination; low-dynamic

  12. Error-related ERP components and individual differences in punishment and reward sensitivity

    NARCIS (Netherlands)

    Boksem, Maarten A. S.; Tops, Mattie; Wester, Anne E.; Meijman, Theo F.; Lorist, Monique M.

    2006-01-01

    Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on this component.

  13. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems, and recent research reveals the strong impact of designing and implementing wireless technologies on the basis of these two performance indicators. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  14. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Directory of Open Access Journals (Sweden)

    Małgorzata Kossowska

    2018-03-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  15. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant of uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  16. Calculation and Analysis of Differential Corrections for BeiDou

    Science.gov (United States)

    Yang, Sainan; Chen, Junping; Zhang, Yize

    2015-04-01

    The BeiDou Satellite Navigation System has been providing service for the Asia-Pacific area. BeiDou uses observations from a regional monitoring network to determine the satellite orbits, which limits the achievable orbit accuracy, and the satellite clock error is produced by the time synchronization system. The time synchronization delay of the antenna device is generally obtained through prior calibration, and the residual calibration error is absorbed into the satellite clock, which affects the prediction accuracy of the satellite clock error. In this paper, we study algorithms for BeiDou differential corrections that improve the accuracy of the satellite signals and thereby the user positioning accuracy. In this algorithm, both pseudo-range and phase observations are used to calculate the differential corrections. Pseudo-range observations are processed to obtain an equivalent satellite clock error, which includes the satellite clock error and the orbit radial error, as well as the average projection of the orbit tangential and normal errors in combination. The epoch differences of the phase observations are processed to eliminate the ambiguity, which simplifies the algorithm and ensures the relative accuracy (the variation of the corrections between epochs). Observations from more than 10 stations in China are processed, and the equivalent clock error results are analysed; they show that the satellite UDRE is significantly reduced and the user positioning accuracy improves when the equivalent clock error corrections are applied. The residuals after deducting the equivalent satellite clock error contain the projection differences of the satellite orbit error at the individual stations (tangential and normal errors being the main part). We utilize the residuals to solve for the tangential and normal orbit errors that cause the projection differences. The same observation data are processed. The results show that after applying three-dimensional corrections, the satellite UDRE does not improve significantly compared to the equivalent satellite clock error corrections and user
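
    The averaging step behind the equivalent clock error can be sketched as follows (hypothetical residuals, not the authors' software): the part of the pseudo-range residuals common to all monitoring stations is taken as the broadcast correction, and what remains per station reflects the projected tangential and normal orbit errors.

        # Equivalent satellite clock correction from multi-station residuals.
        import numpy as np

        residuals_m = np.array([1.92, 2.05, 1.98, 2.10, 1.95,
                                2.01, 1.89, 2.07, 2.00, 1.99])  # O-C (m)
        weights = np.ones_like(residuals_m)   # could be elevation-dependent

        equiv_clock_m = np.average(residuals_m, weights=weights)
        print(f"equivalent clock correction: {equiv_clock_m:.3f} m")
        print("per-station remainder:", residuals_m - equiv_clock_m)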

  17. Evidence for specificity of the impact of punishment on error-related brain activity in high versus low trait anxious individuals.

    Science.gov (United States)

    Meyer, Alexandria; Gawlowska, Magda

    2017-10-01

    A previous study suggests that when participants were punished with a loud noise after committing errors, the error-related negativity (ERN) was enhanced in high trait anxious individuals. The current study sought to extend these findings by examining the ERN in conditions when punishment was related and unrelated to error commission as a function of individual differences in trait anxiety symptoms; further, the current study utilized an electric shock as an aversive unconditioned stimulus. Results confirmed that the ERN was increased when errors were punished among high trait anxious individuals compared to low anxious individuals; this effect was not observed when punishment was unrelated to errors. Findings suggest that the threat-value of errors may underlie the association between certain anxious traits and punishment-related increases in the ERN. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    Science.gov (United States)

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE system, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to prescriptions. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  19. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate, and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of the systems (electronic pharmacopoeia, electronic drug dosage calculation system, computer-based patient safety reporting, and bar-code system), and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeias were constructed in 67.7% of the participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher construction rates of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of system use would add to system construction and more readily promote a positive error management climate.

  1. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
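
    The practical consequence can be sketched with the classical van Herk recipe (margin = 2.5 Sigma + 0.7 sigma) standing in for the 'traditional' recipe; the IG shape and scale below are assumed, not the paper's fits. When each patient's random-error SD sigma is drawn from an inverse gamma distribution, a margin meant to cover 90% of patients must use a high quantile of sigma rather than its mean.

        # Margin widths under patient-to-patient variability of random error.
        import numpy as np

        rng = np.random.default_rng(2)
        Sigma = 2.0                              # systematic-error SD (mm)
        a, scale = 6.0, 10.0                     # assumed IG shape and scale
        sigma = np.sqrt(scale / rng.gamma(a, size=100_000))  # per-patient SD

        def van_herk(Sig, sig):                  # classical margin recipe
            return 2.5 * Sig + 0.7 * sig

        print("margin at the mean sigma:       ", van_herk(Sigma, sigma.mean()))
        print("margin covering 90% of patients:",
              van_herk(Sigma, np.quantile(sigma, 0.90)))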

  2. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error

  3. Ground-Wave Propagation Effects on Transmission Lines through Error Images

    Directory of Open Access Journals (Sweden)

    Uribe-Campos Felipe Alejandro

    2014-07-01

    Full Text Available Electromagnetic transient calculation of overhead transmission lines is strongly influenced by the natural resistivity of the ground, which varies from 1 Ω·m to 10 kΩ·m depending on several media factors and on the physical composition of the ground. The accuracy of the calculated transient response of a system depends in part on the ground-return model, which should consider the line geometry, the electrical resistivity, and the frequency dependence of the power source. To date, there are only a few reports in the specialized literature analyzing the effects produced by the presence of an imperfectly conducting ground on transmission lines in a transient state. A broad-range analysis of three of the most often used ground-return models for calculating electromagnetic transients of overhead transmission lines is performed in this paper. The behavior of modal propagation in the ground is analyzed in terms of first- and second-order effects. Finally, a numerical tool based on relative error images is proposed as an aid for the analyst engineer in estimating the error incurred by using approximate ground-return models when calculating transients of overhead transmission lines.
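
    The paper's three ground-return models are not reproduced here, but the idea of a relative error image can be sketched using the well-known complex-penetration-depth (Deri) approximation as a stand-in reference and a deliberately crude approximation evaluated against it; the conductor height and the grid ranges are illustrative:

        import numpy as np

        MU0 = 4e-7 * np.pi

        def z_ground_deri(f_hz, rho, h=20.0):
            """Ground-return impedance (ohm/m) via the complex-depth approximation."""
            w = 2.0 * np.pi * f_hz
            p = np.sqrt(rho / (1j * w * MU0))            # complex penetration depth
            return 1j * w * MU0 / (2.0 * np.pi) * np.log((h + p) / h)

        def z_ground_fixed_rho(f_hz, rho_assumed=100.0, h=20.0):
            """Same model, but with the ground resistivity frozen at a guess."""
            return z_ground_deri(f_hz, rho_assumed, h)

        freqs = np.logspace(1, 6, 200)                    # 10 Hz .. 1 MHz
        rhos = np.logspace(0, 4, 200)                     # 1 .. 10 000 ohm-m
        F, R = np.meshgrid(freqs, rhos)

        z_ref = z_ground_deri(F, R)
        z_apx = z_ground_fixed_rho(F)
        err_image = np.abs(z_apx - z_ref) / np.abs(z_ref)  # relative error image
        print(f"max relative error on the grid: {err_image.max():.2%}")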

  4. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Science.gov (United States)

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that, when predicting individual responses to errors, underlying personality traits should be taken into account; they also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  5. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Directory of Open Access Journals (Sweden)

    Zrinka Sosic-Vasic

    Full Text Available The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that, when predicting individual responses to errors, underlying personality traits should be taken into account; they also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  6. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of treating the joint clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the joint clearance on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  7. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report intends to clarify the base data for the looking-up tables of calculation errors cited in the 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made by the JACS code system, and there are two kinds: one kind is for fuel systems in general geometry with a reflector, and another kind is for fuel systems specific to simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration (homogeneous or heterogeneous) and the fuel kind (uranium, plutonium and their mixtures, etc.). The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987. However, the data in a group named homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes the selection. As a project has been organized by OECD/NEA for evaluation of criticality safety benchmark experiments, the results are also described. (author)

  8. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    Science.gov (United States)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  9. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types of errors (measurement error and error of regulation of reactor core distributed parameters), often met during high-power-density reactor operation, are analyzed. Consideration is given to errors in determining the irregularity factor of the radial power distribution for a hot channel, both under conditions of its minimization and under conditions where regulation of the relative power distribution is absent. The first regime is investigated by the method of statistical experiment using a neutron-physical calculation optimization program, taking a large channel-type water-cooled graphite-moderated reactor as an example. It is concluded that it is necessary to take into account the complex interaction of the measurement error with the error of parameter profiling over the core, both for conditions of continuous manual or automatic parameter regulation (optimization) and for conditions without regulation, namely with an a priori equalized distribution. When evaluating the error of distributed parameter control

  10. Social Errors in Four Cultures: Evidence about Universal Forms of Social Relations.

    Science.gov (United States)

    Fiske, Alan Page

    1993-01-01

    To test the cross-cultural generality of relational-models theory, 4 studies with 70 adults examined social errors of substitution of persons for Bengali, Korean, Chinese, and Vai (Liberia and Sierra Leone) subjects. In all four cultures, people tend to substitute someone with whom they have the same basic relationship. (SLD)

  11. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 and Department of Physics, Oklahoma State University, Stillwater, Oklahoma 74078-3072 (United States); Johnson, Randall; Larson, Gary [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 (United States)

    2016-06-15

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their
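
    The RPN arithmetic used in FMEA is simple to reproduce. The sketch below scores a few hypothetical planning failure modes (the names and scores are invented, not the study's actual modes) and ranks them by RPN = occurrence × severity × detectability:

        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            name: str
            occurrence: int     # 1 (rare) .. 10 (frequent)
            severity: int       # 1 (negligible) .. 10 (catastrophic)
            detectability: int  # 1 (always caught) .. 10 (never caught)

            @property
            def rpn(self) -> int:
                """Risk priority number: product of the three FMEA scores."""
                return self.occurrence * self.severity * self.detectability

        # Hypothetical planning failure modes, scored for illustration only
        modes = [
            FailureMode("wrong CT-MR image fusion", 2, 8, 5),
            FailureMode("beam arrangement transcription error", 3, 6, 3),
            FailureMode("plan exported to wrong patient", 1, 10, 4),
        ]

        for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"RPN {fm.rpn:4d}  {fm.name}")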

  12. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    International Nuclear Information System (INIS)

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-01-01

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their

  13. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper, an error evaluation method was developed that focuses on (1) clearly specifying the error calculation model, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
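
    The paper's exact error model is not reproduced here, but the basic idea of separating random from systematic components can be sketched with a standard one-way variance-components estimate on simulated accountancy data; the truncation at zero echoes requirement (2) above. All numbers are invented:

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated relative measurement errors, grouped by calibration period:
        # each period shares one systematic error; random error varies per item.
        true_sys_sd, true_rand_sd = 0.002, 0.005
        periods = [rng.normal(rng.normal(0, true_sys_sd), true_rand_sd, size=25)
                   for _ in range(12)]

        # One-way variance-components estimate (method of moments)
        n = len(periods[0])
        means = np.array([p.mean() for p in periods])
        var_within = np.mean([p.var(ddof=1) for p in periods])   # random component
        var_between = means.var(ddof=1)
        var_sys = max(var_between - var_within / n, 0.0)         # kept non-negative

        print(f"random SD     ~ {np.sqrt(var_within):.4f}")
        print(f"systematic SD ~ {np.sqrt(var_sys):.4f}")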

  14. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor affecting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is setting global control points in the measured field and attaching an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is setting control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. It is concluded that the algorithm of the single-camera method needs to be improved for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.
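
    The record does not specify the algorithm used to compute the sensor-to-global transformation from the measured control points, but a common choice for this step is a least-squares rigid fit (Kabsch/SVD method). A self-contained sketch with invented control-point coordinates:

        import numpy as np

        def rigid_transform(src, dst):
            """Best-fit rotation R and translation t mapping src points onto dst
            (least squares, Kabsch/SVD method)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            # Guard against reflections by fixing the determinant sign
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t

        # Control points measured in the sensor frame (src) and known in the
        # global frame (dst); values are made up for illustration.
        src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
        theta = np.deg2rad(5.0)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        dst = src @ R_true.T + np.array([0.10, -0.05, 0.02])

        R, t = rigid_transform(src, dst)
        print("max residual:", np.abs(dst - (src @ R.T + t)).max())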

  16. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Full Text Available Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  17. Analysis of the methodical component of core power density field calculation error on the basis of Mochovce-1 commissioning tests

    International Nuclear Information System (INIS)

    Brik, A.

    2009-01-01

    In the first decade of June 2008, during the power commissioning of the reactor at the Mochovce NPP unit 1, the experiment with reducing the thermal power of core almost to the balance-of-plant (BOP) needs was performed. After the reactor had operated for seven hours at low power (about 200–220 MW (thermal)), its power was increased (at a rate of about 0.25% of N_nom/min) to the initial level, close to 107% (1471 MW). During the experiment, core parameters, which were subsequently used for comparing the measured data with the results of experiment simulation calculations, were recorded in the reactor in-core monitoring system database. Calculated and measured levels of critical concentrations of boric acid were compared, along with power density distributions by fuel elements and assemblies obtained both by the KRUIZ in-core monitoring system and on the basis of calculations simulating reactor operation in accordance with the given core power variation schedule. The final stage consisted of assessing the methodical component of power density micro- and macro-fields calculation error in the core of Mochovce-1 reactor operating with varying load. (author)

  18. Analysis of the methodical component of core power density field calculation error on the basis of Mochovce-1 commissioning tests

    International Nuclear Information System (INIS)

    Brik, A.

    2009-01-01

    In the first decade of June 2008, during the power commissioning of the reactor at Mochovce NPP unit 1, the experiment with reducing the thermal power of core almost to the balance-of-plant needs was performed. After the reactor had operated for seven hours at low power (about 200–220 MW (thermal)), its power was increased (at a rate of about 0.25% of N_nom/min) to the initial level, close to 107% (1471 MW). During the experiment, core parameters, which were subsequently used for comparing the measured data with the results of experiment simulation calculations, were recorded in the reactor in-core monitoring system's database. Calculated and measured levels of critical concentrations of boric acid were compared, along with power density distributions by fuel elements and assemblies obtained both by the KRUIZ in-core monitoring system and on the basis of calculations simulating reactor operation in accordance with the given core power variation schedule. The final stage consisted of assessing the methodical component of power density micro- and macro-fields' calculation error in the core of Mochovce-1 reactor operating with varying load. (Authors)

  19. Calculation and analysis of thermodynamic relations for superconductors

    International Nuclear Information System (INIS)

    Nazarenko, A.B.

    1989-01-01

    The absorption coefficients of high-frequency and low-frequency sound have been calculated on the basis of the Ginzburg-Landau theory. This sound is a wave of periodic adiabatic bulk compressions and rarefactions of frequency ω in an isotropic superconductor near the transition temperature. Thermodynamic relations have been obtained for abrupt changes in the physical quantities produced as a result of a transition from the normal state to the superconducting state. These relations are similar to the Ehrenfest relations. The above-mentioned thermodynamic quantities are compared with the published experimental results on YBa2Cu3O7-δ. Experiments on the absorption of ultrasound in recently discovered superconductors may give information on the phase transition type and the thermodynamic relations for these superconductors, in particular the dependence of T_c on pressure. Similar calculations have been carried out for He-transition experiments and for ferromagnetic materials. The order parameter in the thermodynamic potential was assumed to be isotropic.

  20. A Paleolatitude Calculator for Paleoclimate Studies.

    Directory of Open Access Journals (Sweden)

    Douwe J J van Hinsbergen

    Full Text Available Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for, the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high

  1. Statistical evaluation of design-error related nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1981-01-01

    In this paper, a general methodology for the statistical evaluation of design-error related accidents is proposed that can be applied to a variety of systems evolving during the development of large-scale technologies. The evaluation aims at an estimate of the combined ''residual'' frequency of yet unknown types of accidents ''lurking'' in a certain technological system. A special categorization into incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of U.S. nuclear power reactor technology, considering serious accidents (category 2 events) that involved, in the accident progression, a particular design inadequacy. 9 refs

  2. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now only approximate expressions have been derived for the probability of error. We revisit these derivations and find that, by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed: in the limit as the number of qubits in the quantum computer approaches infinity, and in the limit as the number of extra qubits added to improve reliability goes to infinity. This formula is useful in validating computer simulations of the phase estimation procedure and in avoiding overestimation of the number of qubits required to achieve a given reliability. The formula thus brings improved precision to the design of quantum computers.

  3. User interface for MAWST limit of error program

    International Nuclear Information System (INIS)

    Crain, B. Jr.

    1991-01-01

    This paper reports on a user-friendly interface which is being developed to aid in preparation of input data for the Los Alamos National Laboratory software module MAWST (Materials Accounting With Sequential Testing) used at Savannah River Site to propagate limits of error for facility material balances. The forms-based interface is being designed using traditional software project management tools and using the Ingres family of database management and application development products (products of Relational Technology, Inc.). The software will run on VAX computers (products of Digital Equipment Corporation) on which the VMS operating system and Ingres database management software are installed. Use of the interface software will reduce time required to prepare input data for calculations and also reduce errors associated with data preparation

  4. A framework to estimate probability of diagnosis error in NPP advanced MCR

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Kim, Jong Hyun; Jang, Inseok; Seong, Poong Hyun

    2018-01-01

    Highlights: •As a new type of MCR has been installed in NPPs, the work environment has changed considerably. •A new framework to estimate operators' diagnosis error probabilities is proposed. •Diagnosis error data were extracted from the full-scope simulator of the advanced MCR. •Using Bayesian inference, a TRC model was updated for use in the advanced MCR. -- Abstract: Recently, a new type of main control room (MCR) has been adopted in nuclear power plants (NPPs). The new MCR, known as the advanced MCR, consists of digitalized human-system interfaces (HSIs), computer-based procedures (CPs), and soft controls, while the conventional MCR includes many alarm tiles, analog indicators, hard-wired control devices, and paper-based procedures. These changes significantly affect the generic activities of the MCR operators, particularly diagnostic activities. The aim of this paper is to suggest a framework to estimate the probabilities of diagnosis errors in the advanced MCR by updating a time reliability correlation (TRC) model. Using Bayesian inference, the TRC model was updated with the probabilities of diagnosis errors. Here, the diagnosis error data were collected from a full-scope simulator of the advanced MCR. To do this, diagnosis errors were determined based on an information processing model and their probabilities were calculated. However, these calculated probabilities of diagnosis errors were largely affected by context factors such as procedures, HSI, training, and others, known as PSFs (Performance Shaping Factors). In order to obtain the nominal diagnosis error probabilities, the weightings of the PSFs were also evaluated. Then, with the nominal diagnosis error probabilities, the TRC model was updated. This led to the proposal of a framework to estimate the nominal probabilities of diagnosis errors in the advanced MCR.
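
    The TRC model update itself is not reproduced here; as a simplified stand-in, the sketch below shows conjugate Bayesian updating of a diagnosis error probability from hypothetical simulator counts, using an assumed Beta prior (the counts and prior are invented, not the paper's data):

        from scipy import stats

        # Simulator observations (hypothetical): diagnosis errors in n trials
        n_trials, n_errors = 120, 7

        # Weakly informative Beta prior on the diagnosis error probability
        a0, b0 = 1.0, 19.0          # prior mean 0.05 (assumed, not from the paper)

        # Conjugate Bayesian update: Beta prior + binomial likelihood
        a_post, b_post = a0 + n_errors, b0 + (n_trials - n_errors)
        post = stats.beta(a_post, b_post)

        print(f"posterior mean  : {post.mean():.4f}")
        print(f"95% credible int: {post.ppf(0.025):.4f} .. {post.ppf(0.975):.4f}")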

  5. 45 CFR 98.100 - Error Rate Report.

    Science.gov (United States)

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... the total dollar amount of payments made in the sample); the average amount of improper payment; and... not received. (e) Costs of Preparing the Error Rate Report—Provided the error rate calculations and...

  6. The effect of rock electrical parameters on the calculation of reservoir saturation

    International Nuclear Information System (INIS)

    Li, Xiongyan; Qin, Ruibao; Liu, Chuncheng; Mao, Zhiqiang

    2013-01-01

    The error in calculating a reservoir saturation caused by the error in the cementation exponent, m, and the saturation exponent, n, should be analysed. In addition, the influence of m and n on the reservoir saturation should be discussed. Based on the Archie formula, the effect of variables m and n on the reservoir saturation is analysed, while the formula for the error in calculating the reservoir saturation, caused by the error in m and n, is deduced, and the main factors affecting the error in reservoir saturation are illustrated. According to the physical meaning of m and n, it can be interpreted that they are two independent parameters, i.e., there is no connection between m and n. When m and n have the same error, the impact of the variables on the calculation of the reservoir saturation should be compared. Therefore, when the errors of m and n are respectively equal to 0.2, 0.4 and 0.6, the distribution range of the errors in calculating the reservoir saturation is analysed. However, in most cases, the error of m and n is about 0.2. When the error of m is 0.2, the error in calculating the reservoir saturation ranges from 0% to 35%. Meanwhile, when the error in n is 0.2, the error in calculating the reservoir saturation is almost always below 5%. On the basis of loose sandstone, medium sandstone, tight sandstone, conglomerate, tuff, breccia, basalt, andesite, dacite and rhyolite, this paper first analyses the distribution range and change amplitude of m and n. Second, the impact of m and n on the calculation of reservoir saturation is elaborated upon. With regard to each lithology, the distribution range and change amplitude of m are greater than those of n. Therefore, compared with n, the effect of m on the reservoir saturation is stronger. The influence of m and n on the reservoir saturation is determined, and the error in calculating the reservoir saturation caused by the error of m and n is calculated. This is theoretically and practically significant for
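
    The sensitivity described above can be reproduced with a short numerical check. The sketch below evaluates the Archie water-saturation formula and perturbs m and n by the quoted error magnitudes; the reservoir values are invented for illustration and chosen so that, as in the record, the effect of m dominates that of n:

        import numpy as np

        def archie_sw(rw, rt, phi, m, n, a=1.0):
            """Water saturation from the Archie equation."""
            return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

        # Illustrative reservoir values (not from the paper)
        rw, rt, phi = 0.05, 2.0, 0.20

        base = archie_sw(rw, rt, phi, m=2.0, n=2.0)
        for d in (0.2, 0.4, 0.6):
            sw_m = archie_sw(rw, rt, phi, m=2.0 + d, n=2.0)   # error in m only
            sw_n = archie_sw(rw, rt, phi, m=2.0, n=2.0 + d)   # error in n only
            print(f"d={d}: dSw from m: {100*abs(sw_m-base)/base:5.1f}%, "
                  f"from n: {100*abs(sw_n-base)/base:5.1f}%")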

  7. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convolved with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and, by extensive repetitions, reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (a set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal-processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
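
    A toy version of this stepwise propagation is sketched below: a noise signature passes through two gain stages with assumed gain uncertainty and additive electronic noise, and the power spectral density of the product signature is then computed with an FFT. All stage parameters are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        fs, n = 1000.0, 4096                     # sample rate (Hz), samples

        # A 'signature': a few noise frequencies buried in background noise
        t = np.arange(n) / fs
        x = (np.sin(2*np.pi*60*t) + 0.5*np.sin(2*np.pi*180*t)
             + 0.2*rng.standard_normal(n))

        # Propagate stepwise through sensor / signal-processing stages, each
        # contributing gain uncertainty and additive electronic noise.
        stages = [(2.0, 0.02, 0.05), (1.5, 0.01, 0.02)]   # (gain, gain SD, noise SD)
        for g, g_sd, add_sd in stages:
            x = rng.normal(g, g_sd) * x + add_sd * rng.standard_normal(n)

        # Power spectral density of the product signature (periodogram)
        psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        print("dominant line at", freqs[np.argmax(psd[1:]) + 1], "Hz")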

  8. Empirical study of the GARCH model with rational errors

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2013-01-01

    We use the GARCH model with a fat-tailed error distribution described by a rational function and apply it to stock price data from the Tokyo Stock Exchange. To determine the model parameters we apply Bayesian inference, implemented by the Metropolis-Hastings algorithm with an adaptive multi-dimensional Student's t-proposal density. In order to compare our model with the GARCH model with standard normal errors, we calculate the information criteria AIC and DIC, and find that both criteria favor the GARCH model with a rational error distribution. We also assess the accuracy of the volatility by using the realized volatility and find that good accuracy is obtained for the GARCH model with a rational error distribution. We thus conclude that the GARCH model with a rational error distribution is superior to the GARCH model with normal errors and can be used as an alternative to GARCH models with other fat-tailed distributions
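
    The rational error density of the paper is not reproduced here; the sketch below shows the GARCH(1,1) conditional variance recursion and an AIC comparison between normal errors and a Student-t stand-in for a fat-tailed alternative, on simulated returns with fixed (not estimated) parameters:

        import numpy as np
        from scipy import stats

        def garch_vol(returns, omega, alpha, beta):
            """GARCH(1,1) conditional variance recursion:
            sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1]."""
            sigma2 = np.empty_like(returns)
            sigma2[0] = returns.var()
            for t in range(1, len(returns)):
                sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1]
            return sigma2

        rng = np.random.default_rng(3)
        r = 0.01 * rng.standard_t(df=5, size=2000)      # toy fat-tailed returns

        sigma2 = garch_vol(r, omega=1e-6, alpha=0.08, beta=0.90)
        z = r / np.sqrt(sigma2)                          # standardized residuals

        # Log-likelihood of r under each error density (change of variables)
        ll_norm = stats.norm.logpdf(z).sum() - 0.5 * np.log(sigma2).sum()
        ll_t = stats.t.logpdf(z, df=5).sum() - 0.5 * np.log(sigma2).sum()
        print("AIC normal   :", 2*3 - 2*ll_norm)         # 3 GARCH parameters
        print("AIC Student-t:", 2*4 - 2*ll_t)            # plus degrees of freedom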

  9. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    The purpose of this study was to develop and validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder system. The ability of the algorithm to reconstruct dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose

  10. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
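
    For a materials balance, first-order error propagation reduces to adding variances when the measurement errors are independent. A minimal sketch with invented inventory values (the balance definition is the standard one; the numbers are illustrative only):

        import numpy as np

        # First-order (linear) error propagation for a materials balance
        # MUF = beginning inventory + receipts - shipments - ending inventory.
        # With independent measurement errors, the variances simply add.
        values = np.array([1000.0, 250.0, 240.0, 1005.0])   # kg, illustrative
        sds = np.array([1.2, 0.4, 0.4, 1.2])                # measurement SDs (kg)

        muf = values[0] + values[1] - values[2] - values[3]
        muf_sd = np.sqrt(np.sum(sds ** 2))

        # A common 'limit of error' style interval (approximately 95% coverage)
        print(f"MUF = {muf:.1f} kg, LEMUF = +/- {1.96 * muf_sd:.1f} kg")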

  11. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....

  12. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    Science.gov (United States)

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.

  13. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  14. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to

  15. Differences among Job Positions Related to Communication Errors at Construction Sites

    Science.gov (United States)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in the perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  16. A Python tool to set up relative free energy calculations in GROMACS.

    Science.gov (United States)

    Klimovich, Pavel V; Mobley, David L

    2015-11-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper (LOMAP; Liu et al. in J Comput Aided Mol Des 27(9):755-770, 2013), recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge (Mobley et al. in J Comput Aided Mol Des 28(4):135-150, 2014). Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations.

  17. Yaw Angle Error Compensation for Airborne 3-D SAR Based on Wavenumber-domain Subblock

    Directory of Open Access Journals (Sweden)

    Ding Zhen-yu

    2015-08-01

    Full Text Available Airborne array antenna SAR is used to obtain three-dimensional imaging; however, it is impaired by motion errors. In particular, rotation error changes the relative position among the different antenna units and strongly affects the image quality. Unfortunately, the presently available algorithms cannot compensate for the rotation error. In this study, an airborne array antenna SAR three-dimensional imaging model is discussed along with the effect of rotation errors, more specifically the yaw angle error. The analysis reveals that along- and cross-track wavenumbers can be obtained from the echo phase, and when used to calculate the range error, these wavenumbers lead to a result that is independent of the target position, which eliminates the spatial variance of the error. Therefore, a wavenumber-domain subblock compensation method is proposed: the range error is computed in subblocks of the along- and cross-track 2-D wavenumber domain and precisely compensated in the space domain. Simulations show that the algorithm can compensate for the effect of yaw angle error.

  18. Calculating excess lifetime risk in relative risk models

    International Nuclear Information System (INIS)

    Vaeth, M.; Pierce, D.A.

    1990-01-01

    When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate
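
    A toy life-table version of the excess lifetime risk computation described above is sketched below, assuming an age-constant excess relative risk after a single instantaneous exposure; all baseline rates and the ERR coefficient are invented placeholders:

        import numpy as np

        # Toy life-table computation of excess lifetime risk (ELR) under an
        # age-constant excess relative risk (ERR) model; rates are per year.
        ages = np.arange(0, 100)
        base_cancer = 1e-4 * np.exp(0.08 * ages)      # baseline cancer mortality
        base_total = 5e-4 * np.exp(0.085 * ages)      # all-cause mortality

        def lifetime_risk(err_per_dose=0.0, dose=0.0, age_exposed=30):
            excess = np.where(ages >= age_exposed, err_per_dose * dose, 0.0)
            cancer = base_cancer * (1.0 + excess)     # relative risk model
            total = base_total + base_cancer * excess
            survival = np.exp(-np.cumsum(total))      # survival to each age
            return np.sum(cancer * survival)          # lifetime cancer risk

        elr = lifetime_risk(err_per_dose=0.5, dose=1.0) - lifetime_risk()
        print(f"excess lifetime risk ~ {elr:.4f}")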

  19. Calculation of atomic integrals using commutation relations

    International Nuclear Information System (INIS)

    Zamastil, J.; Vinette, F.; Simanek, M.

    2007-01-01

    In this paper, a numerically stable method of calculating atomic integrals is suggested. The commutation relations among the components of the angular momentum and the Runge-Lenz vector are used to deduce recurrence relations for the Sturmian radial functions. The radial part of the one- and two-electron integrals is evaluated by means of these recurrence relations. The product of two radial functions is written as a linear combination of the radial functions. This enables us to write the integrals over four radial functions as a linear combination of the integrals over two radial functions. The recurrence relations for the functions are used to derive the recursion relations for the coefficients of the linear combination and for the integrals over two functions

  20. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity

  1. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    Science.gov (United States)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting both through constant and energy-dependent cross-sections, although in the latter case this is only true for sufficiently dilute systems. We find that the optimal fit technique consists in simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
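
    A minimal reproduction of the fitting procedure on synthetic data: an Ornstein-Uhlenbeck surrogate stands in for the off-diagonal stress, the autocorrelation is fitted with its intercept fixed to C(0) as recommended above, and the Green-Kubo integral is evaluated analytically from the fitted exponential. The physical prefactor is a placeholder:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(4)

        # Toy off-diagonal stress time series with a known relaxation time
        dt, n, tau = 0.1, 200000, 2.0
        noise = rng.standard_normal(n)
        sigma_xy = np.empty(n)
        sigma_xy[0] = 0.0
        for i in range(1, n):                      # Ornstein-Uhlenbeck process
            sigma_xy[i] = sigma_xy[i-1] * (1 - dt/tau) + np.sqrt(dt) * noise[i]

        def acf(x, nlags):
            x = x - x.mean()
            return np.array([np.mean(x[:len(x)-k] * x[k:]) for k in range(nlags)])

        c = acf(sigma_xy, nlags=400)

        # Fit a decaying exponential with the intercept fixed to C(0)
        fit = lambda t, tau_fit: c[0] * np.exp(-t / tau_fit)
        tlags = dt * np.arange(len(c))
        (tau_hat,), _ = curve_fit(fit, tlags, c, p0=[1.0])

        # Green-Kubo: eta = (V / k_B T) * integral of the autocorrelation;
        # the integral of the fitted exponential is simply C(0) * tau.
        V_over_kT = 1.0                            # placeholder prefactor
        eta = V_over_kT * c[0] * tau_hat
        print(f"tau ~ {tau_hat:.2f}, eta ~ {eta:.3f}")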

  2. User Performance Evaluation of Four Blood Glucose Monitoring Systems Applying ISO 15197:2013 Accuracy Criteria and Calculation of Insulin Dosing Errors.

    Science.gov (United States)

    Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia

    2018-04-01

    The international standard ISO 15197:2013 requires a user performance evaluation to assess whether intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek ® Performa Connect (A), Contour ® plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select ® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. The number and percentage of SMBG measurements within ±15 mg/dl and ±15% of the comparison measurements at glucose concentrations <100 mg/dl and ≥100 mg/dl, respectively, were calculated for measurements performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Ascensia Diabetes Care Deutschland GmbH.

  3. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  4. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  5. Convergence of highly parallel stray field calculation using the fast multipole method on irregular meshes

    Science.gov (United States)

    Palmesi, P.; Abert, C.; Bruckner, F.; Suess, D.

    2018-05-01

    Fast stray field calculation is commonly considered of great importance for micromagnetic simulations, since it is the most time-consuming part of the simulation. The Fast Multipole Method (FMM) has displayed linear O(N) parallelization behavior on many cores. This article investigates the error of a recent FMM approach that approximates sources using linear (instead of constant) finite elements in the singular integral for calculating the stray field and the corresponding potential. Performance having been measured in an earlier manuscript, the present work investigates the convergence of the relative L2 error for several FMM simulation parameters. Various scenarios, calculating the stray field either directly or via the potential, are discussed.
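
    The convergence quantity investigated here, the relative L2 error, is straightforward to compute once an approximate and a reference field are available at the same evaluation points; a minimal sketch with placeholder field values:

```python
import numpy as np

def relative_l2_error(h_approx, h_ref):
    """Relative L2 error between an approximated field and a reference
    field evaluated at the same points: ||h - h_ref||_2 / ||h_ref||_2."""
    h_approx, h_ref = np.asarray(h_approx), np.asarray(h_ref)
    return np.linalg.norm(h_approx - h_ref) / np.linalg.norm(h_ref)

# Hypothetical stray-field samples at three evaluation points (A/m)
h_ref = np.array([[1.0, 0.0, 0.2], [0.8, 0.1, 0.0], [0.5, 0.3, 0.1]])
h_fmm = h_ref + 1e-3 * np.ones_like(h_ref)   # stand-in for the FMM result
print(relative_l2_error(h_fmm, h_ref))
```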

  6. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.

  7. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
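
    A toy delta-rule learner illustrates the adaptive mechanism described above: dividing the prediction error by a running estimate of reward variability keeps the effective step size comparable across distributions with different standard deviations. The parameter values and the variability tracker are illustrative assumptions, not the authors' fitted reinforcement learning model.

```python
import numpy as np

# Delta-rule prediction of reward magnitude, with the prediction error
# divided by a running estimate of reward variability (adaptive scaling).
rng = np.random.default_rng(1)
alpha, mean_true, sd_true = 0.2, 50.0, 15.0
V, sd_est = 0.0, 1.0
for trial in range(200):
    r = rng.normal(mean_true, sd_true)       # sampled reward
    delta = r - V                            # prediction error
    sd_est += alpha * (abs(delta) - sd_est)  # track reward variability
    V += alpha * delta / sd_est              # variance-scaled update
print(V, sd_est)   # V approaches the distribution mean
```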

  8. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS), based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria yield new methods for conjugation of optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the interrogation speed of the CCD array. It is shown that the maximum achievable dynamic accuracy characteristics of the optoelectronic gauge are thus determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; linearity of characteristics; and error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  9. Did I Do That? Expectancy Effects of Brain Stimulation on Error-related Negativity and Sense of Agency.

    Science.gov (United States)

    Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel

    2018-06-19

    This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to themselves or to the brain stimulation device): expected improvement increased the ERN in response to errors compared with both the impairment and control conditions, while expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation affect users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential of placebo brain stimulation as a promising tool for research.

  10. Test of Mie-based single-scattering properties of non-spherical dust aerosols in radiative flux calculations

    International Nuclear Information System (INIS)

    Fu, Q.; Thorsen, T.J.; Su, J.; Ge, J.M.; Huang, J.P.

    2009-01-01

    We simulate the single-scattering properties (SSPs) of dust aerosols with both spheroidal and spherical shapes at a wavelength of 0.55 μm for two refractive indices and four effective radii. Herein spheres are defined by preserving both the projected area and volume of a non-spherical particle. It is shown that the relative errors of the spheres in approximating the spheroids are less than 1% in the extinction efficiency and single-scattering albedo, and less than 2% in the asymmetry factor. It is found that the scattering phase function of spheres agrees with the spheroids better than the Henyey-Greenstein (HG) function for the scattering angle range of 0°-90°. In the range of ~90°-180°, the HG function is systematically smaller than the spheroidal scattering phase function, while the spherical scattering phase function is smaller from ~90° to 145° but larger from ~145° to 180°. We examine the errors in reflectivity and absorptivity due to the use of SSPs of equivalent spheres and HG functions for dust aerosols. The reference calculation is based on the delta-DISORT-256-stream scheme using the SSPs of the spheroids. It is found that the errors are mainly caused by the use of the HG function instead of the SSPs for spheres. By examining the errors associated with the delta-four- and delta-two-stream schemes using various approximate SSPs of dust aerosols, we find that the errors related to the HG function dominate in the delta-four-stream results, while the errors related to the radiative transfer scheme dominate in the delta-two-stream calculations. We show that the relative errors in the global reflectivity due to the use of sphere SSPs are always less than 5%. We conclude that Mie-based SSPs of non-spherical dust aerosols are well suited to radiative flux calculations.

  11. An error-related negativity potential investigation of response monitoring function in individuals with Internet addiction disorder

    Directory of Open Access Journals (Sweden)

    Zhenhe eZhou

    2013-09-01

    Internet addiction disorder (IAD) is an impulse disorder, or at least related to impulse control disorder. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse control disorders. The error-related negativity (ERN) reflects an individual's ability to monitor behavior. Since IAD belongs to a compulsive-impulsive spectrum disorder, it should theoretically present the response monitoring functional deficit characteristics found in disorders such as substance dependence, ADHD or alcohol abuse when tested with an Eriksen flanker task. Up to now, no studies on response monitoring functional deficit in IAD have been reported. The purpose of the present study was to examine whether IAD displays response monitoring functional deficit characteristics in a modified Eriksen flanker task. 23 subjects were recruited as the IAD group, and 23 healthy persons matched for age, gender and education were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials (ERPs) were recorded. The IAD group made more total errors than controls (P < 0.01), and reaction times for total error responses in the IAD group were shorter than those of controls (P < 0.01). The mean ERN amplitudes for total error responses at frontal and central electrode sites in the IAD group were reduced compared with the control group (all P < 0.01). These results reveal that IAD displays response monitoring functional deficit characteristics and shares the ERN characteristics of compulsive-impulsive spectrum disorders.

  12. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
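
    The exponential decline is easy to reproduce for the simplest case, a one-sided z-test with known variance, where the Type II error is β(n) = Φ(z_α − √n·δ/σ); a short sketch with an illustrative effect size:

```python
import numpy as np
from scipy.stats import norm

# Type II error of a one-sided z-test for H0: mu = 0 vs. H1: mu = delta,
# with known sigma: beta(n) = Phi(z_alpha - sqrt(n) * delta / sigma).
alpha, delta, sigma = 0.05, 0.5, 1.0
z_alpha = norm.ppf(1.0 - alpha)
for n in (10, 20, 40, 80, 160):
    beta = norm.cdf(z_alpha - np.sqrt(n) * delta / sigma)
    print(n, beta)   # beta shrinks roughly exponentially in n
```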

  13. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
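
    The kind of approximation error discussed here can be shown on the smallest possible example: first-order (Taylor) variance propagation through a two-input AND gate, P = p1·p2, drops the σ1²σ2² term that Monte Carlo sampling retains. The input means, variances, and lognormal input model below are placeholders, not the paper's sample trees.

```python
import numpy as np

# Top event of a two-input AND gate: P = p1 * p2. First-order (Taylor)
# variance propagation vs. Monte Carlo, for lognormally distributed inputs.
rng = np.random.default_rng(2)
mu1, s1 = 1e-3, 5e-4        # mean and standard deviation of p1
mu2, s2 = 2e-3, 1e-3        # mean and standard deviation of p2

var_first_order = (mu2 * s1) ** 2 + (mu1 * s2) ** 2

def lognorm_params(m, s):   # lognormal with a given mean and std dev
    v = np.log(1.0 + (s / m) ** 2)
    return np.log(m) - v / 2.0, np.sqrt(v)

m1, sg1 = lognorm_params(mu1, s1)
m2, sg2 = lognorm_params(mu2, s2)
p1 = rng.lognormal(m1, sg1, 200_000)
p2 = rng.lognormal(m2, sg2, 200_000)
var_mc = np.var(p1 * p2)
print(var_first_order, var_mc)   # the gap is the approximation error
```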

  14. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  15. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject consid...

  16. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  17. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
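
    The snippet above is truncated before the formula; for reference, the standard definition is WER = (S + D + I)/N, computed from the word-level Levenshtein alignment between reference and hypothesis. A self-contained sketch (not necessarily the exact metric variant used in the paper):

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: (substitutions + deletions + insertions) / N,
    computed with Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat mat"))  # 0.333...
```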

  18. [Evaluation of administration errors of injectable drugs in neonatology].

    Science.gov (United States)

    Cherif, A; Sayadi, M; Ben Hmida, H; Ben Ameur, K; Mestiri, K

    2015-11-01

    The use of injectable drugs in newborns represents more than 90% of prescriptions and requires special precautions in order to ensure greater safety and efficiency. The aim of this study was to gather errors relating to the administration of injectable drugs and to suggest corrective actions. This descriptive, cross-sectional study evaluated 300 injectable drug administrations in a neonatology unit; 261 administrations contained an error. Data were collected by direct observation of the administration act. The errors observed were: an inappropriate mixture (2.6% of cases); an incorrect delivery rate (33.7% of cases); incorrect dilutions (26.7% of cases); errors in calculating the dose to be injected (16.7% of cases); errors while sampling small volumes (6.3% of cases); and errors or omissions in the administration schedule (1% of cases). These data enabled us to evaluate the administration of injectable drugs in neonatology. The different types of errors observed could be a source of therapeutic inefficiency, prolonged lengths of stay or drug-related iatrogenic harm. Following these observations, corrective actions were undertaken by pharmacists, consisting of organizing training sessions for the nursing staff and developing an explanatory guide for the dilution and administration of injectable medicines, which was made available to the clinical service. Collaborative doctor-nurse-pharmacist strategies can help to reduce errors in the medication process, especially during administration. They improve the use of injectable drugs, offering more safety and better efficiency, and contribute to guaranteeing ideal therapy for patients. Copyright © 2015. Published by Elsevier Masson SAS.
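
    The most frequent errors reported above involve rate, dilution, and dose arithmetic; as an illustration of the underlying unit conversion, here is a sketch with hypothetical prescription values (not drawn from the study):

```python
def infusion_rate_ml_per_h(dose_ug_kg_min, weight_kg, conc_mg_ml):
    """Convert a prescribed dose in micrograms/kg/min into a syringe-pump
    rate in ml/h for a solution of the given concentration (mg/ml)."""
    ug_per_h = dose_ug_kg_min * weight_kg * 60.0   # micrograms per hour
    return ug_per_h / (conc_mg_ml * 1000.0)        # mg/ml -> micrograms/ml

# Hypothetical example: 5 micrograms/kg/min of a drug for a 3.2 kg neonate,
# with the solution prepared at 1 mg/ml
print(infusion_rate_ml_per_h(5.0, 3.2, 1.0))   # 0.96 ml/h
```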

  19. An individual differences approach to multiple-target visual search errors: How search errors relate to different characteristics of attention.

    Science.gov (United States)

    Adamo, Stephen H; Cain, Matthew S; Mitroff, Stephen R

    2017-12-01

    A persistent problem in visual search is that searchers are more likely to miss a target if they have already found another in the same display. This phenomenon, the Subsequent Search Miss (SSM) effect, has remained despite being a known issue for decades. Increasingly, evidence supports a resource depletion account of SSM errors-a previously detected target consumes attentional resources leaving fewer resources available for the processing of a second target. However, "attention" is broadly defined and is composed of many different characteristics, leaving considerable uncertainty about how attention affects second-target detection. The goal of the current study was to identify which attentional characteristics (i.e., selection, limited capacity, modulation, and vigilance) related to second-target misses. The current study compared second-target misses to an attentional blink task and a vigilance task, which both have established measures that were used to operationally define each of four attentional characteristics. Second-target misses in the multiple-target search were correlated with (1) a measure of the time it took for the second target to recover from the blink in the attentional blink task (i.e., modulation), and (2) target sensitivity (d') in the vigilance task (i.e., vigilance). Participants with longer recovery and poorer vigilance had more second-target misses in the multiple-target visual search task. The results add further support to a resource depletion account of SSM errors and highlight that worse modulation and poor vigilance reflect a deficit in attentional resources that can account for SSM errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Online adaptation of a c-VEP Brain-Computer Interface (BCI) based on error-related potentials and unsupervised learning.

    Science.gov (United States)

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  1. RP-10 commissioning: reproduction of physical experiments by calculation

    International Nuclear Information System (INIS)

    Higa, Manabu; Madariaga, M.R.

    1990-01-01

    This work presents the results of neutronic calculations, most of which were carried out after the commissioning experiments, to verify the calculation methodology developed at the Analysis and Calculation Department of the National Atomic Energy Commission (CNEA). The results obtained were satisfactory, proving that the calculation methodology used is adequate for the design of this type of reactor. The only important disagreement concerns the evaluation of the excess reactivity and shutdown reactivity, but this stems from a difference in criteria and/or definitions for these parameters. The critical positions were predicted with errors lower than 100 pcm. The differential and integral reactivities for the calibration of rods, as well as the flux distributions, are reproduced to a reasonable degree, with differences below 10%. (Author) [es

  2. Electrophysiological Endophenotypes and the Error-Related Negativity (ERN) in Autism Spectrum Disorder: A Family Study

    Science.gov (United States)

    Clawson, Ann; South, Mikle; Baldwin, Scott A.; Larson, Michael J.

    2017-01-01

    We examined the error-related negativity (ERN) as an endophenotype of ASD by comparing the ERN in families of ASD probands to control families. We hypothesized that ASD probands and families would display reduced-amplitude ERN relative to controls. Participants included 148 individuals within 39 families consisting of a mother, father, sibling,…

  3. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors; a bibliography is included. Appendices address significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems.

  4. The value of pulmonary vessel CT measuring and calculating of relative ratio

    International Nuclear Information System (INIS)

    Ji Jiansong; Xu Xiaoxiong; Lv Suzhen; Zhao Zhongwei; Wang Zufei; Xu Min; Gong Jianping

    2004-01-01

    Objective: To evaluate the value of CT measurement and calculation of the vessels of isolated pig lungs, by comparison with measurement and calculation on resin casts of the same lungs. Methods: Four isolated pig lungs, whose vessels were filled with ABS liquid or self-solidifying resin liquid, were CT scanned and measured, and the relative ratios of superior/inferior order and of vein/artery of the same order were calculated. After the resin casts were made, the same measurements and calculations were performed on them. Results: The second-order vein/artery ratio calculated by the two methods showed a statistically significant difference (P < 0.05). Conclusion: CT has high value in calculating the relative ratio of superior/inferior order.

  5. Enhanced error related negativity amplitude in medication-naïve, comorbidity-free obsessive compulsive disorder.

    Science.gov (United States)

    Nawani, Hema; Narayanaswamy, Janardhanan C; Basavaraju, Shrinivasa; Bose, Anushree; Mahavir Agarwal, Sri; Venkatasubramanian, Ganesan; Janardhan Reddy, Y C

    2018-04-01

    Impaired error monitoring and response inhibition is a key cognitive deficit in obsessive-compulsive disorder (OCD). Frontal midline regions such as the cingulate cortex and pre-supplementary motor area are considered critical brain substrates of this deficit. The electrophysiological equivalent of the above dysfunction is a fronto-central event-related potential (ERP) which occurs after an error, called the error-related negativity (ERN). In this study, we sought to compare ERN parameters between medication-naïve, comorbidity-free subjects with OCD and healthy controls (HC). Age-, sex- and handedness-matched subjects with medication-naïve, comorbidity-free OCD (N = 16) and healthy controls (N = 17) performed a modified version of the flanker task while EEG was acquired for the ERN. EEG signals were recorded from the electrodes FCz and Cz. Clinical severity of OCD was assessed using the Yale-Brown Obsessive Compulsive Scale. The subjects with OCD had significantly greater ERN amplitudes at Cz and FCz. There were no significant correlations between ERN measures and illness severity measures. Overactive performance monitoring, as evidenced by enhanced ERN amplitude, could be considered a biomarker for OCD. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  7. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  8. On the effect of numerical errors in large eddy simulations of turbulent flows

    International Nuclear Information System (INIS)

    Kravchenko, A.G.; Moin, P.

    1997-01-01

    Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondealiased finite-difference methods are due to both aliasing and truncation errors, with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab
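
    The destructiveness of aliasing in spectral evaluation of nonlinear terms can be demonstrated on a one-dimensional toy problem: differentiating u² computed with and without 3/2-rule zero-padding, against a Galerkin-truncated reference obtained on a fine grid. This illustrates the general mechanism only and is not the channel-flow setup of the paper.

```python
import numpy as np

# Spectral derivative of u^2 on a coarse grid, with and without 3/2-rule
# dealiasing, compared against a Galerkin-truncated exact reference.
N = 32
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, 1.0 / N)                 # integer wavenumbers
u = np.sin(7.0 * x) + 0.5 * np.cos(9.0 * x)    # u^2 has modes up to 18 > N/2

def pad(uh, M):                                # zero-pad a spectrum N -> M
    n = uh.size
    vh = np.zeros(M, dtype=complex)
    vh[:n // 2] = uh[:n // 2]
    vh[-n // 2:] = uh[-n // 2:]
    return vh * (M / n)

def truncate(vh, n):                           # truncate a spectrum M -> n
    m = vh.size
    return np.concatenate([vh[:n // 2], vh[-n // 2:]]) * (n / m)

# Aliased: square in physical space directly on the N-point grid
aliased = np.real(np.fft.ifft(1j * k * np.fft.fft(u * u)))

# Dealiased: square on a 3N/2 grid, then truncate back to N modes
M = 3 * N // 2
up = np.real(np.fft.ifft(pad(np.fft.fft(u), M)))
dealiased = np.real(np.fft.ifft(1j * k * truncate(np.fft.fft(up * up), N)))

# Reference: exact product on a fine grid, Galerkin-truncated to N modes
Nf = 256
xf = 2.0 * np.pi * np.arange(Nf) / Nf
uf = np.sin(7.0 * xf) + 0.5 * np.cos(9.0 * xf)
ref = np.real(np.fft.ifft(1j * k * truncate(np.fft.fft(uf * uf), N)))

# The aliased error is O(1); the dealiased result matches to round-off
print(np.max(np.abs(aliased - ref)), np.max(np.abs(dealiased - ref)))
```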

  9. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error analysis routine of the program FFADA, which has been written to aid the design and analysis of final focus systems for future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-d Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to sextupolar order. It calculates their effects on the orbit and the transfer matrix to second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot size growth and luminosity loss. We have run the routine for the following set of beam IP parameters: σ*x = 2.1 μm; σ*x′ = 300 μrad; σ*y = 0.55 μm; σ*y′ = 200 μrad; σz = 1 mm; σb = 2 × 10⁻³. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  10. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...

  11. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  12. (AJST) RELATIVE EFFICIENCY OF NON-PARAMETRIC ERROR ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    on 100 bootstrap samples, a sample of size n being taken with replacement in each initial sample of size n. .... the overlap (or optimal error rate) of the populations. However, the expression (2.3) for the computation of ..... Analysis and Machine Intelligence, 9, 628-633. Lachenbruch P. A. (1967). An almost unbiased method ...

  13. The experimental viscosity and calculated relative viscosity of liquid In-Sn alloys

    International Nuclear Information System (INIS)

    Wu, A.Q.; Guo, L.J.; Liu, C.S.; Jia, E.G.; Zhu, Z.G.

    2007-01-01

    The experimentally measured viscosities of liquid pure Sn and of In20Sn80 and In80Sn20 alloys were studied and, for comparison, the relative viscosity calculated from the pair distribution functions, g(r), was also studied. There is one peak in each of the experimental viscosity and calculated relative-viscosity curves of liquid pure Sn near 1000 °C. One valley appears in each of the experimental and calculated viscosity curves of the liquid In20Sn80 alloy near 700 °C. There is no abnormal behavior for the In80Sn20 alloy. The behavior of the experimental viscosity and the calculated relative viscosity is consistent. These results confirm that the temperature-induced structural anomalies reported previously did take place.

  14. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for quantifying the risk due to seismic-related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and common industry practice. The actual reduction in the safety margins caused by the error is called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is realized as a specific instance, called an originating error. As originating errors may occur in actions that are applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies and request their disposition. However, the quality assurance program is not perfect, and some operating plant deficiencies may persist, causing different levels of impact on the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.

  15. Validation of Metrics as Error Predictors

    Science.gov (United States)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.

  16. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  17. Use of Balance Calibration Certificate to Calculate the Errors of Indication and Measurement Uncertainty in Mass Determinations Performed in Medical Laboratories

    Directory of Open Access Journals (Sweden)

    Adriana VÂLCU

    2011-09-01

    Based on the reference document, the article proposes a way to calculate the errors of indication and the associated measurement uncertainties, using the general information provided by the calibration certificate of a balance (non-automatic weighing instrument, NAWI for short) used in the medical field. The paper may also be considered a useful guideline for: operators working in laboratories accredited in medical (or other) fields where weighing operations are part of their testing activities; test houses, laboratories, or manufacturers using calibrated non-automatic weighing instruments for measurements relevant to the quality of production subject to QM requirements (e.g., the ISO 9000 series, ISO 10012, ISO/IEC 17025); bodies accrediting laboratories; and laboratories accredited for the calibration of NAWI. The article refers only to electronic weighing instruments with a maximum capacity of up to 30 kg. Starting from the results provided by a calibration certificate, an example of the calculation is presented.

  18. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  19. Iatrogenic medication errors in a paediatric intensive care unit in ...

    African Journals Online (AJOL)

    Errors most frequently encountered included failure to calculate rates of infusion and the conversion of mL to mEq or mL to mg for potassium, phenobarbitone and digoxin. Of the 117 children admitted, 111 (94.9%) were exposed to at least one medication error. Two or more medication errors occurred in 34.1% of cases.

  20. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    Science.gov (United States)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated by broadband sources dramatically degrades the angular-random-walk performance of a fiber-optic gyroscope. Many methods have been proposed and have managed to suppress the excess RIN. However, the properties of the excess RIN under the influence of different optical errors in the fiber-optic gyroscope have not been systematically investigated, so it is difficult for the existing RIN-suppression methods to achieve optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.

  1. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  2. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths for the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of the meteorological data received from the meteorological stations in those cities. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90°. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.

  3. Lower limb immobilization device induced small setup errors in the radiotherapy.

    Science.gov (United States)

    Lu, Yuting; Ni, Xinye; Yu, Jingping; Ni, Xinchu; Sun, Zhiqiang; Wang, Jianlin; Sun, Suping; Wang, Jian

    2018-04-01

    The aim of this study was to design a lower limb immobilization device and investigate its clinical application in the radiotherapy of the lower limbs. Thirty-eight patients who underwent lower limb radiotherapy using the designed immobilization device were included in this study. The setup errors were calculated by comparison of the portal images with the simulator films or digitally reconstructed radiographs (DRRs). For the 38 patients who completed radiotherapy using this device, 178 anteroposterior portal images and 178 lateral portal images were used for the analysis of positional accuracy. Significant differences were observed between the setup error in the head-foot direction and those in the left-right direction (t = 3.404, P = .002) and the anterior-posterior direction (t = 3.188, P = .003). No statistical difference was identified between the setup errors in the left-right and anterior-posterior directions (t = 0.497, P = .622). The use of the in-house designed lower limb immobilization device allowed for relatively small setup errors. Furthermore, it showed satisfactory accuracy and repeatability.

  4. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    OpenAIRE

    Hoda Divsar; Robab Heydari

    2017-01-01

    The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learne...

  5. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table on the workpiece side, A′) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. These error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process consists of detecting the present tool path and analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created components into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
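
    The rigid-body/homogeneous-transformation machinery referred to above can be sketched in a few lines: small angular and linear error transforms inserted into the nominal kinematic chain shift the tool center point, and the difference is the volumetric error. The chain, dimensions, and error values below are illustrative assumptions, not the paper's 43-component model.

```python
import numpy as np

# Minimal homogeneous-transformation sketch: the TCP position error caused
# by one small angular error (about Y) and one linear error inserted into
# a stacked kinematic chain. All values are illustrative, not measured.
def trans(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]; return T

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

tool = np.array([0.0, 0.0, -120.0, 1.0])       # tool vector in head frame, mm

# Nominal chain: X carriage -> Y carriage -> Z ram -> tilting head B
T_nominal = trans(300, 0, 0) @ trans(0, 150, 0) @ trans(0, 0, 400) @ rot_y(0.3)

# Same chain with a 20 urad angular error and a 5 um positioning error
# inserted after the X carriage
T_actual = (trans(300, 0, 0) @ rot_y(20e-6) @ trans(0.005, 0, 0)
            @ trans(0, 150, 0) @ trans(0, 0, 400) @ rot_y(0.3))

tcp_error = (T_actual @ tool) - (T_nominal @ tool)
print(tcp_error[:3])   # volumetric error at the TCP, mm
```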

  6. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  7. Comparing different error-conditions in film dosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2007-01-01

    In the evaluation of a film used as a personal dosemeter, it may be necessary to mark the dosemeters when possible error-conditions are recognised, i.e., errors that influence the ability to make a correct evaluation of the dose value. In this project, a comparison was carried out to examine how two individual monitoring services (IMS) from two different EU countries [National Inst. of Radiation Hygiene (Denmark) (NIRH) and National Research Centre for Environment and Health (Germany) (GSF)] mark their dosemeters. The IMS differ in size, type of customers and issuing period, but both use films as their primary dosemeters. The error-conditions examined are dosemeters exposed to moisture or light, contaminated dosemeters, films exposed outside the badge, missing filters in the badge, films inserted incorrectly in the badge, and dosemeters not returned or returned too late to the IMS. The data were collected for the year 2003, in which NIRH evaluated ~50,000 and GSF ~1.4 million film dosemeters. The percentage of film dosemeters is calculated for each error-condition, as well as the distribution among eight different employee categories: medicine, nuclear medicine, nuclear industry, industry, radiography, laboratories, veterinary and others. It turned out that incorrect insertion of the film in the badge was the most common error-condition observed at both IMS, and that veterinarians, as an employee category, generally have the highest number of errors. NIRH has a significantly higher relative number of dosemeters in most error-conditions than GSF, which perhaps reflects systemic and methodological differences between the IMS and countries, e.g., regulations and monitoring programs. The non-existence of a common categorisation method for employee categories also makes a comparison like this difficult. (authors)

  8. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    Science.gov (United States)

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
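
    A bare-bones numerical experiment in the spirit of the paper: quantize a synthetic fringe to a given bit depth and read the carrier phase from the FFT bin of an integer carrier frequency. This strips out the band-pass filtering of full Fourier fringe analysis, so the numbers indicate the trend with bit depth rather than the paper's exact error figures; all signal parameters are assumptions.

```python
import numpy as np

# RMS phase error introduced by quantizing a fringe pattern to B bits.
N, f0 = 1024, 32                       # samples and carrier cycles per frame
n = np.arange(N)
rng = np.random.default_rng(3)

def carrier_phase(signal, f0):
    # For cos(2*pi*f0*n/N + phi), the FFT bin at f0 has angle phi
    return np.angle(np.fft.fft(signal)[f0])

for bits in (4, 6, 8, 10, 12):
    levels = 2 ** bits
    errs = []
    for _ in range(100):
        phi = rng.uniform(-np.pi, np.pi)
        fringe = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * n / N + phi)  # in [0, 1]
        quantized = np.round(fringe * (levels - 1)) / (levels - 1)
        errs.append(np.angle(np.exp(1j * (carrier_phase(quantized, f0) - phi))))
    print(bits, np.sqrt(np.mean(np.square(errs))))   # RMS phase error, rad
```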

  9. Minimization of the effect of errors in approximate radiation view factors

    International Nuclear Information System (INIS)

    Clarksean, R.; Solbrig, C.

    1993-01-01

    The maximum temperature of irradiated fuel rods in storage containers was investigated, taking credit only for radiation heat transfer. Estimating view factors is often easy, yet many references place the emphasis on calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent error in the view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivity uncertainty. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux result (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculational accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors.
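
    One simple way to honor the two principles the study identifies as critical is to post-process an approximate view factor matrix: alternately symmetrize A_i·F_ij (reciprocity, A_i F_ij = A_j F_ji) and renormalize rows to unity (enclosure rule). The scheme below is a plausible sketch, not the authors' procedure, and its convergence is only checked empirically.

```python
import numpy as np

# Enforce reciprocity (A_i * F_ij = A_j * F_ji) and the enclosure rule
# (rows of F sum to 1) on an approximate view factor matrix by alternating
# a reciprocity-symmetrization step with a row normalization.
def enforce_view_factor_rules(F, A, sweeps=50):
    F = np.array(F, dtype=float)
    A = np.asarray(A, dtype=float)
    for _ in range(sweeps):
        G = A[:, None] * F                     # G_ij = A_i F_ij
        G = 0.5 * (G + G.T)                    # reciprocity
        F = G / A[:, None]
        F /= F.sum(axis=1, keepdims=True)      # enclosure (summation) rule
    return F

A = np.array([1.0, 2.0, 1.5])                  # surface areas
F_approx = np.array([[0.10, 0.55, 0.40],       # roughly estimated view factors
                     [0.30, 0.05, 0.60],
                     [0.25, 0.80, 0.05]])
F = enforce_view_factor_rules(F_approx, A)
print(F.sum(axis=1))                           # rows sum to 1
print(np.max(np.abs(A[:, None] * F - (A[:, None] * F).T)))  # small residual
```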

  10. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  11. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
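
    First-order propagation of the wind speed error through a power curve, σ_P ≈ |dP/dv|·σ_v, gives the flavor of the method. The cubic curve below is a stand-in for the 28 Lagrange-fitted manufacturer curves, and pointwise propagation at a single speed need not reproduce the study's aggregate 10%-to-5% figure; all turbine parameters are made up.

```python
import numpy as np

# Propagate a wind speed measurement error through a turbine power curve.
cut_in, rated_v, rated_p = 3.0, 12.0, 2000.0   # m/s, m/s, kW (illustrative)

def power(v):
    v = np.asarray(v, dtype=float)
    p = rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)
    return np.clip(np.where(v < cut_in, 0.0, p), 0.0, rated_p)

def power_error(v, sigma_v, h=1e-3):
    dpdv = (power(v + h) - power(v - h)) / (2.0 * h)   # numerical derivative
    return dpdv * sigma_v

v = 8.0                       # measured wind speed, m/s
sigma_v = 0.10 * v            # 10% wind speed error, as in the study
sp = power_error(v, sigma_v)
print(power(v), sp, sp / power(v))   # output, absolute and relative error
```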

  12. EEG-based decoding of error-related brain activity in a real-world driving task

    Science.gov (United States)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict a driver's intended turning direction before reaching road intersections. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of error-related potentials in the EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimate of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level in all cases. Online experiments led to equivalent performances in both simulated and real car driving. These results support the feasibility of decoding these signals to help estimate whether the driver's intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding a driver's error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.

  13. Soft error evaluation in SRAM using α sources

    International Nuclear Information System (INIS)

    He Chaohui; Chu Jun; Ren Xueming; Xia Chunmei; Yang Xiupei; Zhang Weiwei; Wang Hongquan; Xiao Jiangbo; Li Xiaolin

    2006-01-01

    Soft errors in memories directly influence the reliability of products. To compare the resistance of three different memories to soft errors, alpha-particle irradiation experiments were performed: the numbers of soft errors were measured for three different SRAMs, and the cross sections of single event upset (SEU) and the failures in time (FIT) were calculated. According to the SEU cross sections, A166M is the most resistant to soft errors, followed by B166M and then B200M. The average FIT of B166M is smaller than that of B200M, and that of A166M is the largest among them. (authors)
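
    The record does not give the formulas; the sketch below uses the standard definitions, which I am assuming apply here: the per-bit SEU cross section is the upset count divided by particle fluence and bit count, and FIT expresses an error rate per 10^9 device-hours. All numbers are hypothetical.

```python
def seu_cross_section(n_upsets, fluence_cm2, n_bits):
    """Per-bit SEU cross section in cm^2/bit: observed upsets divided
    by the particle fluence and the number of bits exposed."""
    return n_upsets / (fluence_cm2 * n_bits)

def failures_in_time(error_rate_per_hour):
    """FIT = expected errors per 10^9 device-hours."""
    return error_rate_per_hour * 1e9

# Hypothetical alpha irradiation run on a 4 Mbit SRAM
sigma = seu_cross_section(n_upsets=120, fluence_cm2=1e7, n_bits=4 * 2**20)
print(f"sigma_SEU = {sigma:.2e} cm^2/bit")
print(f"FIT = {failures_in_time(3e-7):.0f}")
```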

  14. Diagnostic errors related to acute abdominal pain in the emergency department.

    Science.gov (United States)

    Medford-Davis, Laura; Park, Elizabeth; Shlamovitz, Gil; Suliburk, James; Meyer, Ashley N D; Singh, Hardeep

    2016-04-01

    Diagnostic errors in the emergency department (ED) are harmful and costly. We reviewed a selected high-risk cohort of patients presenting to the ED with abdominal pain to evaluate for possible diagnostic errors and associated process breakdowns. We conducted a retrospective chart review of ED patients >18 years at an urban academic hospital. A computerised 'trigger' algorithm identified patients possibly at high risk for diagnostic errors to facilitate selective record reviews. The trigger determined patients to be at high risk because they: (1) presented to the ED with abdominal pain, and were discharged home and (2) had a return ED visit within 10 days that led to a hospitalisation. Diagnostic errors were defined as missed opportunities to make a correct or timely diagnosis based on the evidence available during the first ED visit, regardless of patient harm, and included errors that involved both ED and non-ED providers. Errors were determined by two independent record reviewers followed by team consensus in cases of disagreement. Diagnostic errors occurred in 35 of 100 high-risk cases. Over two-thirds had breakdowns involving the patient-provider encounter (most commonly history-taking or ordering additional tests) and/or follow-up and tracking of diagnostic information (most commonly follow-up of abnormal test results). The most frequently missed diagnoses were gallbladder pathology (n=10) and urinary infections (n=5). Diagnostic process breakdowns in ED patients with abdominal pain most commonly involved history-taking, ordering insufficient tests in the patient-provider encounter and problems with follow-up of abnormal test results. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  15. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single-input single-output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Simpler expressions of these error rates are then deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
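
    A standardized symmetric α-stable variable has characteristic function exp(-|t|^α), so its PDF follows from the inverse Fourier transform the record describes; since the characteristic function is real and even, f(x) = (1/π) ∫₀^∞ cos(tx) exp(-t^α) dt. A numerical sketch of that integral (not the paper's closed-form Fox H expression):

```python
import numpy as np
from scipy.integrate import quad

def sas_pdf(x, alpha):
    """PDF of a standardized symmetric alpha-stable variable via the
    inverse Fourier transform of exp(-|t|^alpha); the integrand is
    even, so integrate cos(t x) exp(-t^alpha) over t >= 0 and
    divide by pi."""
    integrand = lambda t: np.cos(t * x) * np.exp(-t**alpha)
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return val / np.pi

# Sanity checks against known special cases
print(sas_pdf(0.0, 2.0))  # alpha=2: Gaussian, variance 2 -> 1/sqrt(4*pi) ~ 0.2821
print(sas_pdf(0.0, 1.0))  # alpha=1: Cauchy -> 1/pi ~ 0.3183
```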

  16. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, so extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background; methods are needed to extract medical error factors and to reduce the difficulty of the extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and the error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of error-related items, their levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.

  17. An estimate and evaluation of design error effects on nuclear power plant design adequacy

    International Nuclear Information System (INIS)

    Stevenson, J.D.

    1984-01-01

    An area of considerable concern in evaluating Design Control Quality Assurance procedures applied to the design and analysis of nuclear power plants is the level of design error expected or encountered. There are very little published data [1] on the level of error typically found in nuclear power plant design calculations, and even less on the impact such errors would be expected to have on the overall design adequacy of the plant. This paper is concerned with design error associated with civil and mechanical structural design and analysis found in calculations which form part of the Design or Stress reports. These reports are meant to document the design basis and adequacy of the plant. The estimates contained in this paper are based on the personal experience of the author. Table 1 gives a partial listing of the design documentation reviews performed by the author, on which the observations contained in this paper are based. In the preparation of any design calculations, it is a utopian dream to presume that such calculations can be made error free. The intent of this paper is to define the error levels that might be expected in competent engineering organizations employing technically qualified engineers and accepted methods of Design Control. In addition, the effects of these errors on the probability of failure to meet applicable design code requirements are also estimated.

  18. Resistive wall modes and error field amplification

    International Nuclear Information System (INIS)

    Boozer, Allen H.

    2003-01-01

    Resistive wall modes and the rapid damping of plasma rotation by the amplification of magnetic field errors are related physical phenomena that affect the performance of the advanced tokamak and spherical torus plasma confinement devices. Elements of our understanding of these phenomena, and the code that is used to design the major experimental facilities, are based on the electrical circuit representation of the response of the plasma to perturbations. Although the circuit representation of the plasma may seem heuristic, it can be rigorously obtained using Maxwell's equations and linearity for plasmas that evolve on a time scale disparate from that of the external currents. These and related results are derived. In addition, methods are given for finding the plasma information that the circuit representation requires, using post-processors for codes that calculate perturbed plasma equilibria.

  19. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  20. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  1. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    Science.gov (United States)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays the manufacturing industry plays a vital role in production sectors. Fabricating a component requires many design calculations, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thus avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter and length, runner length and diameter, and gate diameter and pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The dimensions output by the module drive the design of the injection molding components, such as the mold cavity and core and the ejector plate.

  2. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    Science.gov (United States)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement technology is being applied more effectively by employing low-frequency heterodyne acousto-optical modulators instead of complex electro-mechanical scanning devices. The optical element surface can be acquired directly by synchronously detecting the received signal phase at each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequency to produce interference, which introduces a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two unavoidable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between phase extraction accuracy and these errors is calculated. The tolerance of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light are given. The error of phase extraction by Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window is used to correct the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
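
    The core per-pixel operation the record describes, extracting the phase of a beat signal by Fourier analysis with a Hanning window, can be sketched as follows. The sampling rate, beat frequency, and phase are hypothetical, and the beat frequency is chosen to fall exactly on an FFT bin so the angle at the spectral peak equals the signal phase:

```python
import numpy as np

fs, f_beat, n = 10_000.0, 250.0, 1000   # sample rate (Hz), beat freq (Hz), samples
t = np.arange(n) / fs
true_phase = 0.7                         # rad, the quantity to recover
signal = 1.0 + 0.5 * np.cos(2 * np.pi * f_beat * t + true_phase)

window = np.hanning(n)                   # reduces spectral leakage
spectrum = np.fft.rfft((signal - signal.mean()) * window)
freqs = np.fft.rfftfreq(n, 1 / fs)
k = np.argmax(np.abs(spectrum))          # bin at the beat frequency
phase = np.angle(spectrum[k])
print(f"peak at {freqs[k]:.1f} Hz, extracted phase {phase:.3f} rad")
```

    When the beat frequency does not sit on a bin, leakage biases the extracted phase; that is the regime where a spectrum-correction step, as proposed in the record, becomes necessary.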

  3. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  4. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    Science.gov (United States)

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  5. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm criteria (threshold = 10%) at 90% pixels passing may miss errors as large as 15% in MU and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose […]
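
    The gamma index underlying these comparisons is standard: for each reference point, take the minimum over evaluated points of the quadrature sum of the normalized dose difference and the normalized distance. A 1D sketch with hypothetical Gaussian profiles (global normalization assumed; the study's ArcCHECK geometry is not modeled):

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_crit=0.03, dta_crit=3.0):
    """Global 1D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dose diff / dose criterion)^2 +
    (distance / DTA criterion)^2). The dose criterion is a fraction of
    the reference maximum (global normalization); distances in mm."""
    d_norm = dose_crit * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xr, dr) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - dr) / d_norm
        dx = (x - xr) / dta_crit
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

x = np.arange(0.0, 100.0, 1.0)                 # positions in mm
ref = np.exp(-((x - 50) / 15)**2)              # reference profile
ev = 1.02 * np.exp(-((x - 51) / 15)**2)        # 2% scaled, 1 mm shifted
g = gamma_1d(ref, ev, x)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```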

  6. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available Theoretical background for the calculation of k-h-v type cycloid reducers was developed relatively long ago. Recently, however, cycloid reducer design has again attracted heightened attention, because such devices are used in many complex engineering systems, particularly in mechatronic and robotic systems. The development of advanced technological capabilities for manufacturing such reducers today makes it possible to realize the essential features of these devices: high efficiency, high gear ratio, kinematic accuracy, and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and to automate the design process for cycloid reducers with account taken of various factors, including technological ones. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account taken of multiple factors, including manufacturing errors. The errors are considered in a way convenient for predicting kinematic accuracy early at the manufacturing stage, according to the results of measuring the reducer parts on coordinate measuring machines. In the modeling, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin tooth centers circle, and by the deviation between the pin tooth axis positions and the centers circle. The satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on the satellite rotation angle error and

  7. Image defects from surface and alignment errors in grazing incidence telescopes

    Science.gov (United States)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  8. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance and dampening of zitterbewegung.

  9. On nonstationarity-related errors in modal combination rules of the response spectrum method

    Science.gov (United States)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra, and the estimation of the linear peak response of a given structure from this characterization, continue to form the basis of the earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is the preferred option for practicing engineers, modal combination rules play a central role in peak response estimation. Most of the available modal combination rules are, however, based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extents to which nonstationarity affects the modal and total system responses when the ground acceleration process is assumed to be stationary. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
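
    For reference, the CQC rule mentioned in the record combines peak modal responses r_i as r = sqrt(sum_i sum_j rho_ij r_i r_j). The sketch below assumes the widely used Der Kiureghian correlation coefficient for equal modal damping ratios; the frequencies, damping, and modal peaks are hypothetical:

```python
import numpy as np

def cqc_peak(peak_modal, omegas, zeta=0.05):
    """Complete-quadratic-combination estimate of the total peak response
    from peak modal responses, using the Der Kiureghian correlation
    coefficient for equal modal damping ratios zeta."""
    r = np.asarray(peak_modal, dtype=float)
    w = np.asarray(omegas, dtype=float)
    b = w[:, None] / w[None, :]          # frequency ratios beta_ij
    rho = (8 * zeta**2 * (1 + b) * b**1.5) / \
          ((1 - b**2)**2 + 4 * zeta**2 * b * (1 + b)**2)
    return float(np.sqrt(r @ rho @ r))

# Hypothetical 3-mode system with closely spaced lower modes (rad/s)
print(cqc_peak(peak_modal=[10.0, 6.0, 2.0], omegas=[10.0, 11.0, 25.0]))
```

    With well-separated frequencies rho is nearly the identity and CQC reduces to the square-root-of-sum-of-squares rule; the cross terms matter precisely for closely spaced modes, where the nonstationarity effects the record discusses are also strongest.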

  10. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors

  11. A Novel TRM Calculation Method by Probabilistic Concept

    Science.gov (United States)

    Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki

    In a new competitive environment, it becomes possible for a third party to access a transmission facility. In this structure, to manage the utilization of the transmission network efficiently, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC) definition, ATC depends on several parameters, i.e., Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper is focused on the calculation of TRM, which is one of the security margins reserved for uncertainty in system conditions. A TRM calculation by a probabilistic method is proposed in this paper. Based on the modeling of the load forecast error and the error in the transmission line limitation, various cases of transmission transfer capability and its related probabilistic nature can be calculated. By consideration of the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators to make appropriate decisions in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.

  12. Aniseikonia quantification: error rate of rule of thumb estimation.

    Science.gov (United States)

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measured by the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: for Group 1 (24 cases), 5 (or 21%) were equal (i.e., 1% or less difference), 16 (or 67%) were greater (more than 1% different), and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases), 45 (or 73%) were equal (1% or less), 10 (or 16%) were greater, and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: in Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  13. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  14. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  15. Refractive error, ocular biometry, and lens opalescence in an adult population: the Los Angeles Latino Eye Study.

    Science.gov (United States)

    Shufelt, Chrisandra; Fraser-Bell, Samantha; Ying-Lai, Mei; Torres, Mina; Varma, Rohit

    2005-12-01

    To characterize age- and gender-related differences in refractive error, ocular biometry, and lens opalescence (NOP) in a population-based sample of adult Latinos, and to assess the determinants of age-related refractive differences. Participants in the Los Angeles Latino Eye Study (LALES), a population-based study of Latinos aged 40 years and older, underwent an ophthalmic examination, including ultrasonic measurements of axial length (AL), vitreous chamber depth (VCD), anterior chamber depth (ACD), and lens thickness (LT), and noncycloplegic automated and subjective refraction. Corneal curvature/power (CP) was measured using an autorefractor. NOP was graded at the slit lamp by an ophthalmologist using the Lens Opacity Classification System II. Age- and gender-related differences were calculated, and multiple regression models were used to identify the determinants of age-related refractive differences. Of the 6357 LALES participants, 5588 phakic individuals with biometric data were included in this analysis. Older individuals had shallower ACDs, thicker lenses, more NOP, and more hyperopia compared to younger individuals (P ≤ 0.05). Women had significantly shorter AL and shallower ACD and VCD than men (P ≤ 0.01). The strongest determinants of refractive error were AL (primarily VCD) and CP. NOP was a small but significant determinant of refractive error in older individuals. Age- and gender-related differences in ocular biometric, refractive error, and NOP measurements are present in adult Latinos. While the relative contribution of NOP in determining refractive error is small, it is greater in older persons than in younger individuals.

  16. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the problem needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse, and dilution error), incorrect administration route, underdosing, and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration, or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes, and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture, and organizational support, can together help prevent these errors.

  17. Substep methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Bateman-solution-based depletion requires constant microscopic reaction rates. → Traditionally, a constant approximation is used for each depletion step. → Here, depletion steps are divided into substeps which are solved sequentially. → This allows a piecewise constant, rather than constant, approximation for each step. → Discretization errors are almost completely removed with only minor slowdown. - Abstract: When material changes in burnup calculations are solved by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates, one has to first predict the development of the reaction rates during the step and then further approximate these predictions with their averages in the depletion calculation. Representing the continuously changing reaction rates with their averages results in some error regardless of how accurately their development was predicted. Since neutronics solutions tend to be computationally expensive, steps in typical calculations are long and the resulting discretization errors significant. In this paper we present a simple solution to reducing these errors: the depletion steps are divided into substeps that are solved sequentially, allowing finer discretization of the reaction rates without additional neutronics solutions. This greatly reduces the discretization errors and, at least when combined with Monte Carlo neutronics, causes only minor slowdown as neutronics dominates the total running time.
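
    A toy illustration of the substep idea, under assumptions of this sketch rather than the paper's implementation: two nuclides with linearly ramping reaction rates are depleted over one step, either with a single average-rate matrix exponential or with piecewise constant rates over substeps, and compared with a finely resolved reference:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-nuclide chain: N1 --a(t)--> N2 --b(t)--> (removed).
# The rates ramp linearly over the step, mimicking predicted reaction rates.
a = lambda t: 1.0 + 1.5 * t     # per unit time, hypothetical
b = lambda t: 2.0 - 1.0 * t
T = 1.0                         # depletion step length

def deplete(n_sub):
    """Split the step into n_sub substeps; within each, use the substep-
    midpoint (average) rates in a constant-coefficient matrix exponential
    solve, i.e. a piecewise constant approximation of the rates."""
    N = np.array([1.0, 0.0])
    dt = T / n_sub
    for k in range(n_sub):
        tm = (k + 0.5) * dt                         # substep midpoint
        A = np.array([[-a(tm), 0.0], [a(tm), -b(tm)]])
        N = expm(A * dt) @ N
    return N

ref = deplete(1000)                                 # finely resolved reference
for n_sub in (1, 2, 5, 10):
    err = np.abs(deplete(n_sub) - ref).max()
    print(f"{n_sub:4d} substeps: max abs error {err:.2e}")
```

    The single-step result corresponds to the traditional constant-average approximation; adding substeps shrinks the discretization error without any analogue of an extra neutronics solve.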

  18. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM […] Forensic Skeletal Material, 3rd edition). Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
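
    Relative TEM has a standard definition for paired repeats, which I am assuming matches the record's usage: TEM = sqrt(Σd²/2N) over the differences d between two trials, divided by the grand mean and expressed as a percentage. A sketch with hypothetical repeated measurements:

```python
import numpy as np

def relative_tem(trial1, trial2):
    """Relative technical error of measurement for paired repeats:
    TEM = sqrt(sum(d^2) / (2N)) for the differences d between trials,
    expressed as a percentage of the grand mean (%TEM)."""
    x1 = np.asarray(trial1, dtype=float)
    x2 = np.asarray(trial2, dtype=float)
    d = x1 - x2
    tem = np.sqrt(np.sum(d**2) / (2 * len(d)))
    return 100 * tem / np.concatenate([x1, x2]).mean()

# Hypothetical repeated femur maximum lengths (mm) by one observer
t1 = [452.0, 431.5, 468.0, 445.5, 459.0]
t2 = [451.5, 432.0, 467.0, 446.0, 458.5]
print(f"%TEM = {relative_tem(t1, t2):.2f}")
```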

  19. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.

  20. CO2 production in animals: analysis of potential errors in the doubly labeled water method

    International Nuclear Information System (INIS)

    Nagy, K.A.

    1979-03-01

    Laboratory validation studies indicate that doubly labeled water (³HH¹⁸O and ²HH¹⁸O) measurements of CO2 production are accurate to within ±9% in nine species of mammals and reptiles, a bird, and an insect. However, in field studies, errors can be much larger under certain circumstances. Isotopic fractionation of labeled water can cause large errors in animals whose evaporative water loss comprises a major proportion of total water efflux. Input of CO2 across the lungs and skin caused errors exceeding +80% in kangaroo rats exposed to air containing 3.4% unlabeled CO2. Analytical errors of ±1% in isotope concentrations can cause calculated rates of CO2 production to contain errors exceeding ±70% in some circumstances. These occur: 1) when little decline in isotope concentrations has occurred during the measurement period; 2) when final isotope concentrations closely approach background levels; and 3) when the rate of water flux in an animal is high relative to its rate of CO2 production. The following sources of error are probably negligible in most situations: 1) use of an inappropriate equation for calculating CO2 production, 2) variations in rates of water or CO2 flux through time, 3) use of the H2O-18 dilution space as a measure of body water volume, 4) exchange of O-18 between water and nonaqueous compounds in animals (including excrement), 5) incomplete mixing of isotopes in the animal, and 6) input of unlabeled water via the lungs and skin. Errors in field measurements of CO2 production can be reduced to acceptable levels (< 10%) by appropriate selection of study subjects and recapture intervals.
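
    The amplification mechanism in points 1-3 can be illustrated numerically. CO2 production in the doubly labeled water method is proportional to the difference between the oxygen and hydrogen isotope turnover rates, each obtained as k = ln(c0/c1)/t; when that difference is small, or the decline in enrichment is small, a 1% analytical error in one concentration is strongly amplified. All enrichments below are hypothetical:

```python
import numpy as np

def turnover_rate(c0, c1, days):
    """Isotope turnover rate from initial and final enrichments
    (assumed background-corrected): k = ln(c0/c1) / t."""
    return np.log(c0 / c1) / days

# Hypothetical enrichments (arbitrary units above background)
c0_O, c1_O = 200.0, 120.0     # oxygen-18
c0_H, c1_H = 150.0, 100.0     # labeled hydrogen
days = 7.0

kO = turnover_rate(c0_O, c1_O, days)
kH = turnover_rate(c0_H, c1_H, days)
co2_term = kO - kH            # proportional to CO2 production

# Perturb the final O-18 reading by +1% (an analytical error)
kO_err = turnover_rate(c0_O, 1.01 * c1_O, days)
print(f"1% analytical error -> "
      f"{100 * abs((kO_err - kH) - co2_term) / co2_term:.0f}% "
      f"error in the CO2 production term")
```

    Here a 1% concentration error already produces roughly a 10% error in the CO2 term; as the two turnover rates approach each other (high water flux relative to CO2 production), the amplification grows toward the ±70% figure quoted in the record.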

  1. Task engagement and the relationships between the error-related negativity, agreeableness, behavioral shame proneness and cortisol

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.; Wester, Anne E.; Lorist, Monicque M.; Meijman, Theo F.

    Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both with high cortisol levels

  2. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessments of change detection accuracy: (1) single-date error matrices and (2) binary change/no-change error matrices. The third, less common form of reporting is the transition error matrix. This paper discusses the relation between these matrices.

  3. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy ...

    Indian Academy of Sciences (India)

    2007), we find that 3.796% of the data are outliers beyond 2.6σ, based on the average total observational error of the distance modulus of SNIa, 0.31 mag. Obviously, the distance modulus error deviates seriously from a Gaussian distribution, and it is not suitable to calculate the systematic error σsys of SNIa by the χ² check test method.

  4. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
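
    The record's example, error bounds on computed xenograft tumor volumes, can be made concrete. A common caliper formula is V = L·W²/2 (an assumption here; the column may use a different variant), for which a first-order worst-case bound counts the width error twice:

```python
def tumor_volume(length_mm, width_mm):
    """A common caliper formula for xenograft volume: V = L * W^2 / 2."""
    return length_mm * width_mm**2 / 2.0

def volume_relative_error(rel_err_L, rel_err_W):
    """First-order worst-case bound: since V ~ L * W^2, the relative
    error in the width counts twice."""
    return rel_err_L + 2 * rel_err_W

L, W = 12.0, 8.0                     # mm, hypothetical caliper readings
print(f"V = {tumor_volume(L, W):.0f} mm^3")
print(f"5% per caliper reading -> up to "
      f"{100 * volume_relative_error(0.05, 0.05):.0f}% in volume")
```

    This is exactly the column's point: modest 5% measurement errors compound into a 15% bound on the computed quantity of interest.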

  5. An Analysis on the Characteristic of Multi-response CADIS Method for the Monte Carlo Radiation Shielding Calculation

    International Nuclear Information System (INIS)

    Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun

    2014-01-01

    The CADIS method uses a deterministic calculation of adjoint fluxes to decide the parameters used in the variance reduction; this is called a hybrid Monte Carlo method. The CADIS method, however, has a limitation in reducing the stochastic errors of all responses. The Forward-Weighted CADIS (FW-CADIS) method was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with the FW-CADIS method, analyzing how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors in each tally region to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. With the FW-CADIS method, the average of the relative errors was minimized; however, the MR-CADIS method gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of a multi-response problem more efficiently and uniformly than the FW-CADIS method.

  6. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients) and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  7. Performance of a glucose meter with a built-in automated bolus calculator versus manual bolus calculation in insulin-using subjects.

    Science.gov (United States)

    Sussman, Allen; Taylor, Elizabeth J; Patel, Mona; Ward, Jeanne; Alva, Shridhara; Lawrence, Andrew; Ng, Ronald

    2012-03-01

    Patients consider multiple parameters in adjusting prandial insulin doses for optimal glycemic control. Difficulties in calculations can lead to incorrect doses or induce patients to administer fixed doses, rely on empirical estimates, or skip boluses. A multicenter study was conducted with 205 subjects with diabetes who were on multiple daily injections of rapid-/short-acting insulin. Using the formula provided, the subjects manually calculated two prandial insulin doses, based on one high and one normal glucose test result, respectively. They also determined the two doses using the FreeStyle InsuLinx Blood Glucose Monitoring System, which has a built-in automated bolus calculator. After the dose determinations, the subjects completed opinion surveys. Of the 409 insulin doses manually calculated by the subjects, 256 (63%) were incorrect. Only 23 (6%) of the same 409 dose determinations were incorrect using the meter, and these errors were due to either confirmed or potential deviations from the study instructions when determining the dose with the meter. In the survey, 83% of the subjects expressed more confidence in the meter-calculated doses than in the manually calculated doses. Furthermore, 87% of the subjects preferred using the meter to manual calculation for determining prandial insulin doses. Insulin-using patients made errors in more than half of the manually calculated insulin doses. Use of the automated bolus calculator in the FreeStyle InsuLinx meter minimized errors in dose determination. The patients also expressed confidence in, and preference for, using the meter. This may increase adherence and help optimize the use of mealtime insulin. © 2012 Diabetes Technology Society.
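
    The study's exact formula is not reproduced in the record; a textbook-style prandial bolus calculation (carbohydrate dose plus glucose correction, ignoring insulin on board) looks like the sketch below. All parameter names and settings are hypothetical, for illustration only:

```python
def prandial_bolus(carbs_g, carb_ratio_g_per_u,
                   glucose_mg_dl, target_mg_dl, correction_mg_dl_per_u):
    """Generic prandial bolus sketch: a meal dose from the carbohydrate
    ratio plus a correction dose for glucose above target. Not the
    study's formula, and no insulin-on-board adjustment."""
    meal = carbs_g / carb_ratio_g_per_u
    correction = max(0.0, (glucose_mg_dl - target_mg_dl) / correction_mg_dl_per_u)
    return round(meal + correction, 1)

# Hypothetical settings: 1 U per 10 g carb; 1 U per 40 mg/dL above 120 mg/dL
print(prandial_bolus(60, 10, 220, 120, 40))   # 6.0 + 2.5 = 8.5 U
```

    Even this simple two-term arithmetic offers several chances for error (unit slips, sign errors, skipped correction terms), which is consistent with the 63% manual error rate the study reports.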

  8. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    Directory of Open Access Journals (Sweden)

    Greg A. Breed

    2015-08-01

    Full Text Available Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m ostensibly precludes their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm, this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/gird approaches.

  9. MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis

    Science.gov (United States)

    McCarthy, Catherine E.; Banks, Bruce A.; deGroh, Kim, K.

    2010-01-01

    Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
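
    The record does not spell out the formulas; the sketch below assumes the erosion yield definition standard in atomic oxygen studies, Ey = ΔM/(A·ρ·F) (mass loss over exposed area, density, and fluence), with independent fractional uncertainties combined in quadrature. All sample values are hypothetical:

```python
import math

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Atomic oxygen erosion yield Ey = dM / (A * rho * F), in cm^3/atom."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

def fractional_uncertainty(*fractional_errors):
    """Combine independent fractional errors in quadrature."""
    return math.sqrt(sum(e**2 for e in fractional_errors))

# Hypothetical Kapton-like sample after a long LEO exposure
Ey = erosion_yield(mass_loss_g=0.0850, area_cm2=3.46,
                   density_g_cm3=1.42, fluence_atoms_cm2=8.0e21)
err = fractional_uncertainty(0.01, 0.005, 0.002, 0.03)   # dM, A, rho, F
print(f"Ey = {Ey:.2e} cm^3/atom +/- {100 * err:.1f}%")
```

    Because the yield is a simple product of measured quantities, its fractional error is dominated by the least certain factor, typically the fluence, which is why accurate in-space mass loss measurements alone keep the overall error small.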

  10. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  11. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for a tracking collector. • The adequacy of the concentrator optics to the tracking was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite having good quality. This study is focused on the estimation of the long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator, even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency has been determined. Finally, the calculation of the long-term optical error due to tracking errors, using the design angular tracking error declared by the manufacturer, is carried out. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water.

  12. 29Si NMR Chemical Shift Calculation for Silicate Species by Gaussian Software

    Science.gov (United States)

    Azizi, S. N.; Rostami, A. A.; Godarzian, A.

    2005-05-01

    Hartree-Fock self-consistent-field (HF-SCF) theory and the gauge-including atomic orbital (GIAO) method are used in the calculation of 29Si NMR chemical shifts for about 90 units of 19 compounds of various silicate species that are precursors for zeolites. Calculations have been performed at geometries optimized with the AM1 semi-empirical method. The GIAO-HF-SCF calculations were carried out using three different basis sets: 6-31G*, 6-31+G** and 6-311+G(2d,p). To demonstrate the quality of the calculations, the calculated chemical shifts, δ, were compared with the corresponding experimental values for the compounds under study. The results, especially with 6-31+G**, are in excellent agreement with experimental values. From a practical point of view, the calculated chemical shifts appear to be accurate enough to aid in experimental peak assignments. The difference between the experimental and calculated 29Si chemical shift values depends not only on the Qn units; basis-set effects and the level of theory appear to be more important. For the series of molecules studied here, the standard deviations and mean absolute errors for 29Si chemical shifts relative to TMS determined using the Hartree-Fock 6-31+G** basis are in nearly all cases smaller than the errors for shifts determined using HF/6-311+G(2d,p).

  13. Performance monitoring in the anterior cingulate is not all error related: expectancy deviation and the representation of action-outcome associations.

    Science.gov (United States)

    Oliveira, Flavio T P; McDonald, John J; Goodman, David

    2007-12-01

    Several converging lines of evidence suggest that the anterior cingulate cortex (ACC) is selectively involved in error detection or evaluation of poor performance. Here we challenge this notion by presenting event-related potential (ERP) evidence that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback, and vice versa. These results suggest that performance monitoring in the ACC is not limited to error processing. We propose that the ACC acts as part of a more general performance-monitoring system that is activated by violations in expectancy. Further, we propose that the common observation of increased ACC activity elicited by negative events could be explained by an overoptimistic bias in generating expectations of performance. These results could shed light on neurobehavioral disorders, such as depression and mania, that are associated with alterations in performance monitoring and also in judgments of self-related events.

  14. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  15. Analyse des erreurs dans les calculs sur ordinateurs Error Analysis in Computing

    Directory of Open Access Journals (Sweden)

    Vignes J.

    2006-11-01

    Full Text Available This paper describes a new method for evaluating the error in the results of computation of an algorithm, error due to the limited-precision arithmetic of the machine. The basic idea underlying the method is that while in algebra a given algorithm provides a single result r, the same algorithm carried out on a computer provides a set R of numerical results that are all representative of the exact algebraic result r. The permutation-perturbation method described here can be used to obtain the elements of R. The perturbation acts on the data and results of each elementary operation, and the permutation acts on the order in which operations are carried out. A statistical analysis of the elements of R is performed to determine the error committed. In practice, 2 to 4 elements of R are sufficient for determining the error.

  16. Air and smear sample calculational tool for Fluor Hanford Radiological control

    International Nuclear Information System (INIS)

    BAUMANN, B.L.

    2003-01-01

    A spreadsheet calculation tool was developed to automate the calculations performed for determining the concentration of airborne radioactivity and smear counting as outlined in HNF-13536, Section 5.2.7, ''Analyzing Air and Smear Samples''. This document reports on the design and testing of the calculation tool. Radiological Control Technicians (RCTs) will save time and reduce handwriting and calculation errors by using an electronic form for documenting and calculating workplace air samples. Currently, an RCT collects an air sample filter or performs a smear for surface contamination, then surveys the filter for gross alpha and beta/gamma radioactivity and, with the gross counts, uses either a hand-calculation method or a calculator to determine the activity on the filter. The electronic form allows the RCT, with a few keystrokes, to document the individual's name, payroll number, gross counts and instrument identifiers, producing an error-free record. This productivity gain is realized by the enhanced ability to perform mathematical calculations electronically (reducing errors) and, at the same time, document the air sample.
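
    A minimal sketch of the kind of air-sample arithmetic such a form automates. HNF-13536 prescribes the exact procedure, which is not reproduced in the abstract, so the counting efficiency, flow rate, and unit choices below are illustrative assumptions.

    ```python
    def airborne_concentration(gross_cpm, bkg_cpm, eff, flow_lpm, minutes):
        """Airborne activity concentration in uCi/mL from a filter count."""
        net_cpm = max(0.0, gross_cpm - bkg_cpm)   # background-corrected counts
        dpm = net_cpm / eff                       # disintegrations per minute
        activity_uCi = dpm / 2.22e6               # 1 uCi = 2.22e6 dpm
        volume_mL = flow_lpm * minutes * 1000.0   # sampled air volume
        return activity_uCi / volume_mL

    print(f"{airborne_concentration(250, 50, 0.30, 56.6, 480):.2e} uCi/mL")
    ```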

  17. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  18. Quantification of human errors in level-1 PSA studies in NUPEC/JINS

    International Nuclear Information System (INIS)

    Hirano, M.; Hirose, M.; Sugawara, M.; Hashiba, T.

    1991-01-01

    THERP (Technique for Human Error Rate Prediction) method is mainly adopted to evaluate the pre-accident and post-accident human error rates. Performance shaping factors are derived by taking Japanese operational practice into account. Several examples of human error rates with calculational procedures are presented. The important human interventions of typical Japanese NPPs are also presented. (orig./HP)

  19. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method ... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...
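
    A toy, one-parameter version of the ensemble idea under stated assumptions: for a quadratic (least-squares) cost, a parameter ensemble drawn with probability proportional to exp(-C/2T) is Gaussian, and error bars are read off the spread of the ensemble predictions. The temperature choice below is a simple residual-variance estimate, not the paper's exact prescription.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    x = np.linspace(0.1, 1.0, 20)
    y = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # synthetic data, true slope 2

    a_best = (x @ y) / (x @ x)                   # least-squares fit of y = a*x
    T = np.sum((y - a_best * x) ** 2) / (x.size - 1)   # cost-based temperature
    sigma_a = np.sqrt(T / (x @ x))               # Gaussian width of exp(-C/2T)

    ensemble = rng.normal(a_best, sigma_a, 1000) # sampled parameter ensemble
    preds = np.outer(ensemble, x)                # ensemble predictions
    print("prediction +/- error bar at x=1:",
          preds[:, -1].mean(), "+/-", preds[:, -1].std())
    ```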

  20. Algorithm for Calculating the Dissociation Constants of Ampholytes in Nonbuffer Systems

    Science.gov (United States)

    Lysova, S. S.; Skripnikova, T. A.; Zevatskii, Yu. E.

    2018-05-01

    An algorithm for calculating the dissociation constants of ampholytes in aqueous solutions is developed on the basis of spectrophotometric data in the UV and visible ranges without pH measurements of a medium and without buffer solutions. The proposed algorithm has been experimentally tested for five ampholytes of different strengths. The relative error of measuring dissociation constants is less than 5%.

  1. Can medical students calculate drug doses? | Harries | Southern ...

    African Journals Online (AJOL)

    ... with calculations when the drug concentration was expressed either as a ratio or a percentage. Conclusion: Our findings support calls for the standardised labelling of drugs in solution and for dosage calculation training in the medical curriculum. Keywords: drug dosage calculations, clinical competence, medication errors

  2. Changes of the calculation equation for σMUF

    International Nuclear Information System (INIS)

    Yoshida, Hideki; Niiyama, Toshitaka; Sonobe, Kentaro

    2002-01-01

    The error variance (σ²MUF) of the material balance is used for evaluating the MUF in both conventional material accountancy and near-real-time material accountancy (NRTA). The σ²MUF is calculated by error propagation using the material accounting data and the measurement errors. The error propagation equation for σ²MUF is given in the text 'The statistical concepts and techniques for IAEA safeguards (IAEA/SG/SCT5)'. Several assumptions are made in order to simplify the equation. These assumptions are acceptable for the assessment of a facility design. However, when the σ²MUF of an actual MUF is calculated, it is necessary to drop some assumptions and modify the adopted equation. Furthermore, because the material balance is taken more frequently for NRTA, the whole inventory cannot always be re-measured each time. To resolve this, the error propagation equation has to be modified. For a reprocessing plant, which holds material in solution, the equation has been improved to obtain a more exact form. In this paper we present the changes to the error propagation equation for σ²MUF and explain their features. (author)
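
    In its simplest form, the propagated variance is just a sum of independent variances over the balance components MUF = BI + R - S - EI; the paper's subject is precisely the extra covariance terms that appear when these independence assumptions are dropped. A minimal sketch of the simplified case only:

    ```python
    def sigma_muf(var_bi, var_receipts, var_shipments, var_ei):
        """Standard deviation of MUF = BI + R - S - EI, with all measurement
        errors assumed independent (covariance terms omitted)."""
        return (var_bi + var_receipts + var_shipments + var_ei) ** 0.5

    # Illustrative variances in kg^2 of nuclear material:
    print(f"sigma_MUF = {sigma_muf(0.04, 0.09, 0.09, 0.04):.2f} kg")
    ```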

  3. Determination of global positioning system (GPS) receiver clock errors: impact on positioning accuracy

    International Nuclear Information System (INIS)

    Yeh, Ta-Kang; Hwang, Cheinway; Xu, Guochang; Wang, Chuan-Sheng; Lee, Chien-Chih

    2009-01-01

    Enhancing the positioning precision is the primary pursuit of global positioning system (GPS) users. To achieve this goal, most studies have focused on the relationship between GPS receiver clock errors and GPS positioning precision. This study utilizes undifferenced phase data to calculate GPS clock errors and compares them directly with the frequency of a cesium clock, in order to verify the clock errors estimated by the method used in this paper. The frequency stability calculated in this paper (the indirect method) and that measured by the National Standard Time and Frequency Laboratory (NSTFL) of Taiwan (the direct method) match to within 1.5 × 10^-12 (the value from this study was smaller than that from NSTFL), suggesting that the proposed technique has reached a certain level of quality. The built-in quartz clocks in the GPS receivers yield relative frequency offsets that are 3-4 orders of magnitude higher than those of rubidium clocks. The frequency stability of the quartz clocks is on average two orders of magnitude worse than that of the rubidium clock. Using the rubidium clock instead of the quartz clock, the horizontal and vertical positioning accuracies were improved by 26-78% (0.6-3.6 mm) and 20-34% (1.3-3.0 mm), respectively, for a short baseline. These improvements are 7-25% (0.3-1.7 mm) and 11% (1.7 mm) for a long baseline. Our experiments show that the frequency stability of the clock, rather than the relative frequency offset, is the governing factor of positioning accuracy.

  4. Exact error estimation for solutions of nuclide chain equations

    International Nuclear Information System (INIS)

    Tachihara, Hidekazu; Sekimoto, Hiroshi

    1999-01-01

    The exact solution of nuclide chain equations is obtained to an arbitrary number of significant figures for a linear chain by employing the Bateman method in multiple-precision arithmetic. An exact error estimation of the major calculation methods for nuclide chain equations is done by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-difference methods, namely the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time and afterwards by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
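
    A sketch of the reference approach described above: the Bateman solution for a linear chain evaluated in multiple-precision arithmetic (here with mpmath), which suppresses the large-scale cancellations that occur in double precision when decay constants are nearly equal. The decay constants and initial amount below are arbitrary examples.

    ```python
    from mpmath import mp, mpf, exp, fprod

    mp.dps = 50  # work with 50 significant digits

    def bateman(n, lambdas, n1_0, t):
        """Amount of the n-th nuclide (1-indexed) of a linear chain at time t,
        for initial amount n1_0 of nuclide 1 and distinct decay constants."""
        lam = [mpf(x) for x in lambdas[:n]]
        total = mpf(0)
        for i in range(n):
            denom = fprod(lam[j] - lam[i] for j in range(n) if j != i)
            total += exp(-lam[i] * t) / denom
        return n1_0 * fprod(lam[:-1]) * total

    # Nearly equal decay constants provoke cancellation in double precision:
    print(bateman(3, [1.0e-3, 9.9e-4, 1.0e-6], 1.0e6, 1000.0))
    ```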

  5. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  6. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  7. Bayesian error estimation in density-functional theory

    DEFF Research Database (Denmark)

    Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund

    2005-01-01

    We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies...

  8. Patient safety incident reports related to traditional Japanese Kampo medicines: medication errors and adverse drug events in a university hospital for a ten-year period.

    Science.gov (United States)

    Shimada, Yutaka; Fujimoto, Makoto; Nogami, Tatsuya; Watari, Hidetoshi; Kitahara, Hideyuki; Misawa, Hiroki; Kimbara, Yoshiyuki

    2017-12-21

    Kampo medicine is traditional Japanese medicine, which originated in ancient traditional Chinese medicine, but was introduced and developed uniquely in Japan. Today, Kampo medicines are integrated into the Japanese national health care system. Incident reporting systems are currently being widely used to collect information about patient safety incidents that occur in hospitals. However, no investigations have been conducted regarding patient safety incident reports related to Kampo medicines. The aim of this study was to survey and analyse incident reports related to Kampo medicines in a Japanese university hospital to improve future patient safety. We selected incident reports related to Kampo medicines filed in Toyama University Hospital from May 2007 to April 2017, and investigated them in terms of medication errors and adverse drug events. Out of 21,324 total incident reports filed in the 10-year survey period, we discovered 108 Kampo medicine-related incident reports. However, five cases were redundantly reported; thus, the number of actual incidents was 103. Of those, 99 incidents were classified as medication errors (77 administration errors, 15 dispensing errors, and 7 prescribing errors), and four were adverse drug events, namely Kampo medicine-induced interstitial pneumonia. The Kampo medicine (crude drug) that was thought to induce interstitial pneumonia in all four cases was Scutellariae Radix, which is consistent with past reports. According to the incident severity classification system recommended by the National University Hospital Council of Japan, of the 99 medication errors, 10 incidents were classified as level 0 (an error occurred, but the patient was not affected) and 89 incidents were level 1 (an error occurred that affected the patient, but did not cause harm). Of the four adverse drug events, two incidents were classified as level 2 (patient was transiently harmed, but required no treatment), and two incidents were level 3b (patient was

  9. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  10. The relative impact of sizing errors on steam generator tube failure probability

    International Nuclear Information System (INIS)

    Cizelj, L.; Dvorsek, T.

    1998-01-01

    The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting steam generator tubes made of Inconel 600. This caused the development and licensing of degradation-specific maintenance approaches, which addressed the two main failure modes of the degraded tubing: tube rupture and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out the better performance of degradation-specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) fewer tubes plugged. A sensitivity analysis was also performed, pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent in the regression models used to correlate defect size and tube burst pressure. The uncertainties, which can be estimated from in-service inspections, are further analysed in this paper. The defect growth was found to have a significant and, to some extent, unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on past inspection records, they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used were obtained from a series of inspection results from Krsko NPP, with two Westinghouse D-4 steam generators. The results obtained are considered useful in the safety assessment and maintenance of affected steam generators. (author)

  11. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and the adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed, in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalizations are applied to the eigenvector. Two such normalizations are considered; the first of these is a fission-source type of normalization and the second is an eigenvector normalization. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretizations. It is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves the fidelity of the calculation, it allows an assessment of the reliability of numerical schemes to be made and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
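
    A dense-matrix toy illustration of the adjoint-weighted-residual idea (not the paper's finite-element setting): the eigenvalue error of an approximate eigenpair is estimated from the inner product of the adjoint (left) eigenvector with the equation residual, and applied as a defect correction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Nonsymmetric test matrix with a well-separated dominant eigenvalue:
    A = np.diag(np.arange(1.0, 51.0)) + 0.1 * rng.normal(size=(50, 50))

    def power_iter(M, steps):
        v = np.ones(M.shape[0])
        for _ in range(steps):
            v = M @ v
            v /= np.linalg.norm(v)
        return v

    x = power_iter(A, 300)               # crude primal eigenvector
    lam = x @ A @ x                      # crude eigenvalue (x has unit norm)
    y = power_iter(A.T, 300)             # crude adjoint (left) eigenvector

    residual = A @ x - lam * x
    lam_corr = lam + (y @ residual) / (y @ x)   # defect-corrected eigenvalue

    exact = np.linalg.eigvals(A).real.max()
    print(f"error before: {abs(lam - exact):.2e}")
    print(f"error after:  {abs(lam_corr - exact):.2e}")
    ```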

  12. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Full Text Available Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails spending one's own resources to bring others resources) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through the relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  13. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  14. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  15. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. [Study on spectrum analysis of X-ray based on rotational mass effect in special relativity].

    Science.gov (United States)

    Yu, Zhi-Qiang; Xie, Quan; Xiao, Qing-Quan

    2010-04-01

    Based on special relativity, the formation mechanism of characteristic X-rays has been studied, and the influence of the rotational mass effect on the X-ray spectrum is given. A formula for calculating the X-ray wavelength based upon special relativity was derived. Error analysis was carried out systematically for the calculated values of the characteristic wavelength, and the rules governing the relative error were obtained. It is shown that the calculated values are very close to the experimental values, and that the influence of the rotational mass effect on the characteristic wavelength becomes more evident as the atomic number increases. The results of the study provide a useful reference for the spectrum analysis of characteristic X-rays in applications.

  17. Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance

    International Nuclear Information System (INIS)

    Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin

    2008-01-01

    Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s^-1) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm x 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s^-1) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s^-1) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm^-1 of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is. For typical US guidance systems in use today, this will
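
    A worked check of the quoted artifact magnitude: the system converts echo time to depth assuming the soft-tissue speed of sound, so a fat layer with a lower speed biases the apparent depth deeper. Using the two speeds above:

    ```python
    c_cal = 1540.0   # m/s, soft-tissue calibration speed
    c_fat = 1435.0   # m/s, fat-simulating material

    # True thickness d takes echo time t = 2*d/c_fat; the displayed depth is
    # c_cal*t/2, so the bias per cm of fat traversed is:
    error_mm_per_cm = (c_cal / c_fat - 1.0) * 10.0
    print(f"{error_mm_per_cm:.2f} mm/cm")   # ~0.73, matching the ~0.7 mm/cm
    ```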

  19. Validation of Calculations in a Digital Thermometer Firmware

    Science.gov (United States)

    Batagelj, V.; Miklavec, A.; Bojkovski, J.

    2014-04-01

    State-of-the-art digital thermometers are arguably remarkable measurement instruments, measuring outputs from resistance thermometers and/or thermocouples. Not only can they readily achieve measuring accuracies in the parts-per-million range, they also incorporate sophisticated algorithms for converting the measured resistance or voltage to temperature. These algorithms often include high-order polynomials, exponentials and logarithms, and must be evaluated using both standard coefficients and particular calibration coefficients. The numerical accuracy of these calculations and the associated uncertainty component must be much better than the accuracy of the raw measurement in order to be negligible in the total measurement uncertainty. In order for the end-user to gain confidence in these calculations, as well as to conform to the formal requirements of ISO/IEC 17025 and other standards, a way of validating the numerical procedures performed in the firmware of the instrument is required. A software architecture that allows simple validation of internal measuring-instrument calculations is suggested. The digital thermometer should be able to expose all its internal calculation functions to the communication interface, so the end-user can compare the results of the internal calculation with reference results. The method can be regarded as a variation of black-box software validation. Validation results on a thermometer prototype with implemented validation ability show that the calculation error of basic arithmetic operations is within the expected rounding error. For conversion functions, the calculation error is at least ten times smaller than the thermometer's effective resolution for the particular probe type.
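
    A sketch of the black-box validation idea: drive the instrument's exposed conversion function and an independent reference implementation with the same inputs and compare. The Callendar-Van Dusen Pt100 conversion below (IEC 60751 coefficients, t >= 0 °C branch) stands in for whatever function the firmware exposes, and `meter.convert` is a hypothetical interface, not a real instrument API.

    ```python
    A, B, R0 = 3.9083e-3, -5.775e-7, 100.0   # IEC 60751 Pt100 constants

    def reference_r_to_t(r_ohm):
        """Temperature (deg C) from Pt100 resistance on the 0..850 C branch,
        inverting R = R0*(1 + A*t + B*t**2) with the quadratic formula."""
        return (-A + (A * A - 4.0 * B * (1.0 - r_ohm / R0)) ** 0.5) / (2.0 * B)

    for r in (100.00, 119.40, 138.51):       # roughly 0, 50 and 100 deg C
        t_ref = reference_r_to_t(r)
        # t_dut = meter.convert("PT100", r)  # hypothetical firmware call
        # assert abs(t_dut - t_ref) < tolerance
        print(f"R = {r:7.2f} ohm -> t_ref = {t_ref:8.4f} C")
    ```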

  20. Longitudinal Changes in Young Children’s 0-100 to 0-1000 Number-Line Error Signatures

    Directory of Open Access Journals (Sweden)

    Robert A. Reeve

    2015-05-01

    Full Text Available We use a latent difference score (LDS) model to examine changes in young children's number-line (NL) error signatures (errors marking numbers on a NL) over 18 months. A LDS model (1) overcomes some of the inference limitations of analytic models used in previous research, and in particular (2) provides a more reliable test of hypotheses about the meaning and significance of changes in NL error signatures over time and task. The NL error signatures of 217 six-year-olds (at test occasion one) were assessed three times over 18 months, along with their math ability on two occasions. On the first occasion (T1) children completed a 0-100 NL task; on the second (T2) a 0-100 NL and a 0-1000 NL task; on the third (T3) a 0-1000 NL task. On the third and fourth occasions (T3 and T4), children completed mental calculation tasks. Although NL error signatures changed over time, they were predictable from other NL task error signatures, and predicted calculation accuracy at T3, as well as changes in calculation between T3 and T4. Multiple indirect effects (change parameters) showed that associations between initial NL error signatures (0-100 NL) and later mental calculation ability were mediated by error signatures on the 0-1000 NL task. The pattern of findings from the LDS model highlights the value of identifying direct and indirect effects in characterizing changing relationships in cognitive representations over task and time. Substantively, they support the claim that children's NL error signatures generalize over task and time and thus can be used to predict math ability.

  1. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was <70%), and others were overpowered (i.e., the real power was >90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
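
    A sketch of the retro-fitting step for the continuous-outcome case under a normal approximation: compute n per arm from an assumed SD, then evaluate the power that same n delivers when the true SD differs by a relative error. The effect size, SD, and error values are illustrative.

    ```python
    from math import sqrt
    from statistics import NormalDist

    nd = NormalDist()
    z_a = nd.inv_cdf(1 - 0.05 / 2)        # two-sided alpha = 0.05
    z_b = nd.inv_cdf(0.80)                # nominal power 80%

    delta, sd_assumed = 5.0, 10.0
    n_per_arm = 2 * ((z_a + z_b) * sd_assumed / delta) ** 2   # ~63 per arm

    for rel_err in (-0.2, 0.0, 0.2):      # SD mis-specified by -20%, 0, +20%
        sd_true = sd_assumed * (1 + rel_err)
        real_power = nd.cdf(delta / (sd_true * sqrt(2 / n_per_arm)) - z_a)
        print(f"SD off by {rel_err:+.0%}: real power = {real_power:.1%}")
    ```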

  2. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77%). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
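
    The headline per-patient rate and its confidence interval can be reproduced directly from the reported counts with a normal approximation to the binomial:

    ```python
    from math import sqrt

    errors, patients = 555, 28136
    p = errors / patients
    half = 1.96 * sqrt(p * (1 - p) / patients)     # normal-approximation CI
    print(f"{100*p:.2f}% (95% CI, {100*(p-half):.2f}-{100*(p+half):.2f}%)")
    # -> 1.97% (95% CI, 1.81-2.14%), matching the abstract
    ```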

  3. The effects of field errors on low-gain free-electron lasers

    International Nuclear Information System (INIS)

    Esarey, E.; Tang, C.M.; Marable, W.P.

    1991-01-01

    The effects of random wiggler magnetic field errors on low-gain free-electron lasers are examined analytically and numerically through the use of ensemble-averaging techniques. Wiggler field errors perturb the electron beam as it propagates and lead to a random walk of the beam centroid δx, variations in the axial beam energy δγ_z and deviations in the relative phase of the electrons in the ponderomotive wave δψ. In principle, the random walk may be kept as small as desired through the use of transverse focusing and beam steering. Transverse focusing of the electron beam is shown to be ineffective in reducing the phase deviation. Furthermore, it is shown that beam steering at the wiggler entrance reduces the average phase deviation at the end of the wiggler by 1/3. The effect of the field errors (via the phase deviation) on the gain in the low-gain regime is calculated. To avoid a significant reduction in gain, it is necessary for the phase deviation to be small compared to 2π. The detrimental effects of wiggler errors on low-gain free-electron lasers may be reduced by arranging the magnet poles in an optimal ordering such that the magnitude of the phase deviation is minimized

  4. Excited state electron affinity calculations for aluminum

    Science.gov (United States)

    Hussein, Adnan Yousif

    2017-08-01

    Excited states of the negative aluminum ion are reviewed, and calculations of the electron affinities of the states (3s^23p^2)^1D and (3s3p^3)^5S° relative to the (3s^23p)^2P° and (3s3p^2)^4P states, respectively, of the neutral aluminum atom are reported in the framework of the nonrelativistic configuration interaction (CI) method. A priori selected CI (SCI) with truncation energy error (Bunge in J Chem Phys 125:014107, 2006) and CI by parts (Bunge and Carbó-Dorca in J Chem Phys 125:014108, 2006) are used to approximate the valence nonrelativistic energy. Systematic studies of the convergence of the electron affinity with respect to the CI excitation level are reported. The calculated value of the electron affinity for the ^1D state is 78.675(3) meV. Detailed calculations on the ^5S° state reveal that it is 1216.8166(3) meV below the ^4P state.

  5. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Full Text Available Background and objectives : Medication errors can be a threat to the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted with the purpose of investigating medication errors in educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study and is done as a survey. Error Report of Ministry of Health and Medical Education was used for data collection. The population of the study included all the personnel (nurses, doctors, paramedics of educational health centers of Kermanshah. Among them, those who reported the committed errors were selected as the sample of the study. The data analysis was done using descriptive statistics and Chi 2 Test using SPSS version 18. Results: The findings of the study showed that most errors were related to not using medication properly, the least number of errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was the lack of knowledge. Conclusion: The health care system should create an environment for detecting and reporting errors by the personnel, recognizing related factors causing errors, training the personnel and create a good working environment and standard workload.

  6. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    A Galerkin FEM is developed for nonlinear, incompressible (hyper)elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary where Dirichlet boundary conditions are applied. A second, higher-order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented, together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  7. Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients

    Science.gov (United States)

    Bruno, Mattia; Lehner, Christoph; Soni, Amarjit; RBC and UKQCD Collaborations

    2018-04-01

    We propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2 , related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.

  8. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensating for the error of the diamond tool's cutting edge is a bottleneck that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was done according to measurement results from a profilometer, which required long measurement times and lowered processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is derived. Then, the effect of the compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then given a corrective turning pass on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirms that the new compensation method agrees with the predictive analysis and offers high accuracy and a fast speed of error convergence.

  9. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  11. Subroutine library for error estimation of matrix computation (Ver. 1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Shizawa, Yoshihisa; Kishida, Norio

    1999-03-01

    'Subroutine Library for Error Estimation of Matrix Computation' is a subroutine library which aids users in obtaining the error ranges of linear-system solutions or Hermitian-matrix eigenvalues. The library contains routines for both sequential and parallel computers. The subroutines for linear-system error estimation calculate norms of residual vectors, condition numbers of matrices, error bounds of solutions, and so on. The subroutines for error estimation of Hermitian-matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. The test matrix generators supply matrices that appear in mathematical research, randomly generated matrices, and matrices that arise in application programs. This user's manual contains a brief mathematical background of error analysis in linear algebra and the usage of the subroutines. (author)
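
    The kind of bound such routines report can be illustrated with the classical residual-based estimate for a linear system Ax = b: to first order, the relative error of a computed solution is bounded by the condition number times the relative residual. A minimal sketch in Python/NumPy follows (illustrative only; the library's actual interfaces are not given in this record):

      import numpy as np

      def linear_system_error_bound(A, b, x_hat):
          """Classical residual-based bound: ||x - x_hat|| / ||x|| <= cond(A) * ||r|| / ||b||."""
          r = b - A @ x_hat                    # residual vector
          cond = np.linalg.cond(A)             # 2-norm condition number
          rel_residual = np.linalg.norm(r) / np.linalg.norm(b)
          return cond * rel_residual           # first-order relative error bound

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      x_hat = np.linalg.solve(A, b)
      print(linear_system_error_bound(A, b, x_hat))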

  12. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded from the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error lies along the line-of-sight (LOS) direction. The error also depends on the geometric relationship between the observer and the satellite constellation at a given time.
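
    The LOS effect described above is a simple projection: to first order, only the component of the satellite position error along the receiver-to-satellite line of sight enters the pseudorange, and hence the position solution. A minimal sketch (geometry and values are illustrative, not from the paper):

      import numpy as np

      def los_range_error(sat_pos, sat_pos_err, user_pos):
          """First-order pseudorange error caused by an ephemeris error vector."""
          los = sat_pos - user_pos
          u = los / np.linalg.norm(los)   # unit line-of-sight vector
          return np.dot(sat_pos_err, u)   # only the LOS component matters

      sat_pos = np.array([15600e3, 7540e3, 20140e3])   # m, ECEF (illustrative)
      user_pos = np.array([6371e3, 0.0, 0.0])
      err = np.array([2.0, -1.0, 0.5])                 # m, ephemeris error vector
      print(los_range_error(sat_pos, err, user_pos))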

  13. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the accuracy of intraocular lens (IOL) power calculation can be improved with optical biometry using partial coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. IOL power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method, and the results were compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method. In conclusion, the accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest-generation ACD prediction algorithms.

  14. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
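
    For reference, the classical attenuation equation that the article modifies is the standard result of classical test theory (the partial-correction variant itself is not reproduced in this truncated record):

      \[
        r_{XY} \;=\; \frac{r_{X'Y'}}{\sqrt{r_{XX'}\, r_{YY'}}}
      \]

    where r_{X'Y'} is the observed correlation between the error-contaminated measures and r_{XX'}, r_{YY'} are their reliabilities; a partial correction, as the title suggests, removes only part of this attenuation.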

  15. Evaluation of DNBR calculation methods for advanced digital core protection system

    International Nuclear Information System (INIS)

    Ihn, W. K.; Hwang, D. H.; Pak, Y. H.; Yoon, T. Y.

    2003-01-01

    This study evaluated on-line DNBR calculation methods for an advanced digital core protection system in PWRs, i.e., subchannel analysis and group-channel analysis. The subchannel code MATRA and the four-channel codes CETOP-D and CETOP2 were used; CETOP2 is the most simplified DNBR analysis code and is implemented in the core protection calculators of the Korean standard nuclear power plants. The detailed subchannel code TORC was used as the reference DNBR calculation. The DNBR uncertainty and margin were compared using allowable operating conditions at Yonggwang nuclear units 3-4. The MATRA code, using a nine-lumped-channel model, resulted in a smaller mean and a larger standard deviation of the DNBR error distribution. CETOP-D and CETOP2 showed a conservatively biased mean and a relatively smaller standard deviation of the DNBR error distribution. Relative to CETOP2, MATRA and CETOP-D showed a significant increase in the available DNBR margin at normal operating conditions. Taking the DNBR uncertainty into account, MATRA and CETOP-D were estimated to increase the net DNBR margin over CETOP2 by 2.5%-9.8% and 2.5%-3.3%, respectively

  16. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  17. Failures without errors: quantification of context in HRA

    International Nuclear Information System (INIS)

    Fujita, Yushi; Hollnagel, Erik

    2004-01-01

    PSA-cum-human reliability analysis (HRA) has traditionally used individual human actions, hence individual 'human errors', as a meaningful unit of analysis. This is inconsistent with the current understanding of accidents, which points out that the notion of 'human error' is ill defined and that adverse events are more often due to the working conditions than to people. Several HRA approaches, such as ATHEANA and CREAM, have recognised this conflict and proposed ways to deal with it. This paper describes an improvement of the basic screening method in CREAM, whereby a rating of the performance conditions can be used to calculate a Mean Failure Rate directly, without invoking the notion of human error

  18. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. To first order, Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
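
    Putting the pieces together numerically, a minimal sketch of this error budget might read as follows (the input values are illustrative, not the report's measurements):

      import numpy as np

      def opacity_error_budget(T, rhoL, dB_over_B, dB0_over_B0, d_rhoL_over_rhoL):
          """Fractional opacity error from backlighter-signal and areal-density errors."""
          k = -np.log(T) / rhoL                          # opacity, k = -ln(T)/(rho*L)
          dlnT = dB_over_B + dB0_over_B0                 # fractional transmission error
          dk_over_k = dlnT / abs(np.log(T)) + d_rhoL_over_rhoL
          return k, dk_over_k

      k, frac_err = opacity_error_budget(T=0.3, rhoL=1e-3, dB_over_B=0.02,
                                         dB0_over_B0=0.02, d_rhoL_over_rhoL=0.03)
      print(k, frac_err)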

  19. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  20. Invariance and variability in interaction error-related potentials and their consequences for classification

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Kapeller, Christoph; Hintermüller, Christoph; Guger, Christoph; Peer, Angelika

    2017-12-01

    Objective. This paper discusses the invariance and variability in interaction error-related potentials (ErrPs), with a special focus on the factors of (1) the human mental processing required to assess interface actions, (2) time, and (3) subjects. Approach. Three different experiments were designed so as to vary primarily with respect to the mental processes that are necessary to assess whether an interface error has occurred or not. The three experiments were carried out with 11 subjects in a repeated-measures experimental design. To study the effect of time, a subset of the recruited subjects additionally performed the same experiments on different days. Main results. The ErrP variability across the different experiments for the same subjects was found to be largely attributable to the different mental processing required to assess interface actions. Nonetheless, we found that interaction ErrPs are empirically invariant over time (for the same subject and same interface) and, to a lesser extent, across subjects (for the same interface). Significance. The obtained results may be used to explain across-study variability of ErrPs, as well as to define guidelines for approaches to the ErrP classifier transferability problem.

  1. SOLGAS refined: A computerized thermodynamic equilibrium calculation tool

    International Nuclear Information System (INIS)

    Trowbridge, L.D.; Leitnaker, J.M.

    1993-11-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several 'bells and whistles' have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised format for entering data simplifies and reduces chances for error. Calculated errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed 'on line'. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384K bytes of low RAM, are available from the authors

  2. Non-invasive mapping of calculation function by repetitive navigated transcranial magnetic stimulation.

    Science.gov (United States)

    Maurer, Stefanie; Tanigawa, Noriko; Sollmann, Nico; Hauck, Theresa; Ille, Sebastian; Boeckh-Behrens, Tobias; Meyer, Bernhard; Krieg, Sandro M

    2016-11-01

    Studies have already reported on localizing calculation function in patients and volunteers by functional magnetic resonance imaging and transcranial magnetic stimulation (TMS). However, the development of accurate repetitive navigated TMS (rTMS) with considerably higher spatial resolution opens a new field in cognitive neuroscience. This study was therefore designed to evaluate the feasibility of rTMS for locating cortical calculation function in healthy volunteers, and to establish this technique for future scientific applications as well as preoperative mapping in brain tumor patients. Twenty healthy subjects underwent rTMS calculation mapping using 5 Hz/10 pulses. Fifty-two previously determined cortical spots covering the whole of both hemispheres were stimulated. The subjects were instructed to perform a calculation task composed of 80 simple arithmetic operations while rTMS pulses were applied. The highest error rate across all errors and all subjects (80%) was observed in the right ventral precentral gyrus. For the division task, a 45% error rate was observed in the left middle frontal gyrus. The subtraction task showed its highest error rate (40%) in the right angular gyrus (anG). In the addition task, a 35% error rate was observed in the left anterior superior temporal gyrus. Lastly, the multiplication task induced a maximum error rate of 30% in the left anG. rTMS seems feasible as a way to locate cortical calculation function. Besides language function, the cortical localizations are well in accordance with the current literature for other modalities and lesion studies.

  3. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root-causes and error-modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication, and training practices are the primary root-causes, while omission, transposition, and quantitative mistakes are the most frequent error-modes. Recommendations for domestic research on human performance problems in NPPs are given

  4. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing the clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. The Diabetes Technology Society, together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, has developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors in BG levels measured by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  5. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won [Dept. of Radiation Oncology, Seoul (Korea, Republic of)

    2012-03-15

    There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm: MU errors arose when such programs were applied to radiation treatment plans around the lung calculated with the Analytical Anisotropic Algorithm (AAA). In this study, we investigated methods that can verify treatment plans calculated using AAA. Using the Eclipse treatment planning system (Version 8.9, Varian, USA), each of 57 fields from 7 cases of lung Stereotactic Body Radiation Therapy (SBRT) was calculated with both the PBC and AAA dose calculation algorithms. The MUs of the established plans were compared and analyzed against the MUs from manual calculation programs. We analyzed the relationship between the errors and 4 variables that can affect the errors arising from PBC and AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. Errors for the PBC algorithm were 0.2±1.0% and errors for AAA were 3.5±2.8%. Among the 4 variables, the correlation between lung path distance and MU error was significant (correlation coefficient 0.648, P=0.000), and we derived an MU correction factor, A.E = L.P × 0.00903 + 0.02048; after applying it in the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. From this study, we learned that errors from manual calculation programs increase as the lung path distance of the beam increases, and that the MUs of AAA plans can be verified with a simple method, the MU correction factor.
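
    Read literally, the fitted relation gives the expected fractional MU error A.E as a linear function of the lung path distance L.P, which can then be used to correct a manually calculated MU. A possible reading in code (the units of L.P and the exact sense in which the correction is applied are not specified in the record, so both are assumptions here):

      def mu_correction_factor(lung_path_distance):
          """Expected fractional MU error A.E as a linear function of lung path L.P
          (coefficients from the abstract; units of L.P assumed, e.g. cm)."""
          return lung_path_distance * 0.00903 + 0.02048

      def corrected_mu(manual_mu, lung_path_distance):
          # Assumed reading: inflate the manual MU by the predicted fractional error.
          return manual_mu * (1.0 + mu_correction_factor(lung_path_distance))

      print(corrected_mu(100.0, 5.0))   # illustrative values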

  6. Examinations on Applications of Manual Calculation Programs on Lung Cancer Radiation Therapy Using Analytical Anisotropic Algorithm

    International Nuclear Information System (INIS)

    Kim, Jung Min; Kim, Dae Sup; Hong, Dong Ki; Back, Geum Mun; Kwak, Jung Won

    2012-01-01

    There was a problem in using MU verification programs based on the Pencil Beam Convolution (PBC) algorithm: MU errors arose when such programs were applied to radiation treatment plans around the lung calculated with the Analytical Anisotropic Algorithm (AAA). In this study, we investigated methods that can verify treatment plans calculated using AAA. Using the Eclipse treatment planning system (Version 8.9, Varian, USA), each of 57 fields from 7 cases of lung Stereotactic Body Radiation Therapy (SBRT) was calculated with both the PBC and AAA dose calculation algorithms. The MUs of the established plans were compared and analyzed against the MUs from manual calculation programs. We analyzed the relationship between the errors and 4 variables that can affect the errors arising from PBC and AAA in commonly used programs: field size, lung path distance of the beam, tumor path distance of the beam, and effective depth. Errors for the PBC algorithm were 0.2±1.0% and errors for AAA were 3.5±2.8%. Among the 4 variables, the correlation between lung path distance and MU error was significant (correlation coefficient 0.648, P=0.000), and we derived an MU correction factor, A.E = L.P × 0.00903 + 0.02048; after applying it in the manual calculation program, the errors decreased from 3.5±2.8% to within 0.4±2.0%. From this study, we learned that errors from manual calculation programs increase as the lung path distance of the beam increases, and that the MUs of AAA plans can be verified with a simple method, the MU correction factor.

  7. Practical Insights from Initial Studies Related to Human Error Analysis Project (HEAP)

    International Nuclear Information System (INIS)

    Follesoe, Knut; Kaarstad, Magnhild; Droeivoldsmo, Asgeir; Hollnagel, Erik; Kirwan, Barry

    1996-01-01

    This report presents practical insights made from an analysis of the three initial studies in the Human Error Analysis Project (HEAP), and the first study in the US NRC Staffing Project. These practical insights relate to our understanding of diagnosis in Nuclear Power Plant (NPP) emergency scenarios and, in particular, the factors that influence whether a diagnosis will succeed or fail. The insights reported here focus on three inter-related areas: (1) the diagnostic strategies and styles that have been observed in single operator and team-based studies; (2) the qualitative aspects of the key operator support systems, namely VDU interfaces, alarms, training and procedures, that have affected the outcome of diagnosis; and (3) the overall success rates of diagnosis and the error types that have been observed in the various studies. With respect to diagnosis, certain patterns have emerged from the various studies, depending on whether operators were alone or in teams, and on their familiarity with the process. Some aspects of the interface and alarm systems were found to contribute to diagnostic failures while others supported performance and recovery. Similar results were found for training and experience. Furthermore, the availability of procedures did not preclude the need for some diagnosis. With respect to HRA and PSA, it was possible to record the failure types seen in the studies, and in some cases to give crude estimates of the failure likelihood for certain scenarios. Although these insights are interim in nature, they do show the type of information that can be derived from these studies. More importantly, they clarify aspects of our understanding of diagnosis in NPP emergencies, including implications for risk assessment, operator support systems development, and for research into diagnosis in a broader range of fields than the nuclear power industry. (author)

  8. Error estimation for CFD aeroheating prediction under rarefied flow condition

    Science.gov (United States)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction as reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculations as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ε is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ε, compared with two other parameters, Knρ and Ma·Knρ.

  9. Assessment of the pseudo-tracking approach for the calculation of material acceleration and pressure fields from time-resolved PIV: part I. Error propagation

    Science.gov (United States)

    van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.

    2018-04-01

    Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
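
    Recommendation (3), fitting a first-order polynomial to the velocity samples along the reconstructed track, amounts to taking the slope of a least-squares line as the material acceleration estimate. A minimal sketch (array names and values are mine, not the paper's):

      import numpy as np

      def material_acceleration(t, u):
          """Estimate material acceleration as the slope of a first-order
          least-squares fit to velocity samples u(t) along a pseudo-track."""
          slope, _intercept = np.polyfit(t, u, 1)
          return slope

      t = np.linspace(0.0, 5e-3, 11)                            # s, sample times along track
      u = 2.0 + 300.0 * t + np.random.normal(0, 0.05, t.size)   # m/s, noisy velocity samples
      print(material_acceleration(t, u))                        # ~300 m/s^2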

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospital, education, and law-and-order type buildings). A systemic model of error causation is developed and subsequently used to build a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  11. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    Science.gov (United States)

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, it was desirable to reduce the variation in its measurement errors. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy and significantly improved foot inclination angle measurement, while it slightly improved shank and thigh inclination angles. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results seen in other studies that used markers of a camera-based motion measurement system fixed on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
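
    One plausible reading of the proposed scheme is a scalar filter in which the gyro drives the prediction and the accelerometer-derived inclination drives the update, with the measurement noise inflated when the angle error is large so that transient accelerations are down-weighted. The sketch below illustrates this idea only; it is not the authors' published implementation, and all tuning constants are illustrative:

      import math

      class VariableGainKalman1D:
          """Scalar inclination filter: gyro rate drives the prediction,
          accelerometer-derived angle drives the update with a variable gain."""
          def __init__(self, q=1e-4, r0=1e-2, alpha=50.0):
              self.theta = 0.0   # estimated angle (rad)
              self.p = 1.0       # error covariance
              self.q = q         # process noise (gyro)
              self.r0 = r0       # baseline measurement noise (accelerometer)
              self.alpha = alpha # how strongly the angle error inflates R

          def step(self, gyro_rate, ax, az, dt):
              # Predict with the gyro.
              self.theta += gyro_rate * dt
              self.p += self.q
              # Accelerometer-derived angle measurement.
              z = math.atan2(ax, az)
              err = z - self.theta
              # Variable gain: inflate R when the angle error is large,
              # so transient (non-gravity) accelerations are down-weighted.
              r = self.r0 * (1.0 + self.alpha * err * err)
              k = self.p / (self.p + r)
              self.theta += k * err
              self.p *= (1.0 - k)
              return self.theta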

  12. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Measurement errors of a capacitive voltage transformer (CVT) are related to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert mixed effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters which result in measurement errors. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause an appreciable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As the capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.

  13. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  14. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting

    International Nuclear Information System (INIS)

    Strahan, Rodney H.; Schneider-Kolsky, Michal E.

    2010-01-01

    Full text: Purpose: Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified, and productivity for VR-generated versus transcriptionist-generated reports in MRI. Methods: Fifty VR-generated MRI reports and 50 transcriptionist-generated finalised MRI reports from each of two radiologists were sampled retrospectively. The 200 reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average number of MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Results: Forty-two percent and 30% of the finalised VR reports for the two radiologists contained errors. Only 6% and 8% of the transcriptionist-generated reports contained errors. The average TAT for VR was 0 h; for the transcriptionist-generated reports, TAT was 89 h and 38.9 h, respectively. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports per hour using the transcriptionist, representing a 55% higher productivity with the transcriptionist. Conclusion: Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR.

  15. A general approach to error propagation

    International Nuclear Information System (INIS)

    Sanborn, J.B.

    1987-01-01

    A computational approach to error propagation is explained. It is shown that the application of the first-order Taylor theory to a fairly general expression representing an inventory or inventory-difference quantity leads naturally to a data structure that is useful for structuring error-propagation calculations. This data structure incorporates six types of data entities: (1) the objects in the material balance, (2) numerical parameters that describe these objects, (3) groups or sets of objects, (4) the terms which make up the material-balance equation, (5) the errors or sources of variance and (6) the functions or subroutines that represent Taylor partial derivatives. A simple algorithm based on this data structure can be defined using formulas that are sums of squares of sums. The data structures and algorithms described above have been implemented as computer software in FORTRAN for IBM PC-type machines. A free-form data-entry format allows users to separate data as they wish into separate files and enter data using a text editor. The program has been applied to the computation of limits of error for inventory differences (LEIDs) within the DOE complex. 1 ref., 3 figs
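
    The 'sums of squares of sums' structure can be made concrete: for a balance quantity f(x_1, ..., x_n), first-order Taylor propagation gives var(f) ≈ Σ_e (Σ_i (∂f/∂x_i) σ_{i,e})², where the inner sum runs over the terms affected by the same error source e and the outer sum over independent sources. A generic sketch with numerical partial derivatives (this is not the FORTRAN program described in the record; all names are mine):

      import numpy as np

      def propagate_variance(f, x, source_sigmas, h=1e-6):
          """First-order Taylor error propagation.
          source_sigmas: list of 1-sigma vectors, one per independent error source;
          terms affected by the same source are summed before squaring."""
          x = np.asarray(x, dtype=float)
          # Numerical partial derivatives of f at x (central differences).
          grad = np.empty_like(x)
          for i in range(x.size):
              dx = np.zeros_like(x)
              dx[i] = h
              grad[i] = (f(x + dx) - f(x - dx)) / (2 * h)
          # Sum of squares (over sources) of sums (over correlated terms).
          return sum(np.dot(grad, s) ** 2 for s in source_sigmas)

      inventory_diff = lambda x: x[0] + x[1] - x[2]        # toy material balance
      sigmas = [np.array([0.1, 0.1, 0.0]),                 # one correlated source
                np.array([0.0, 0.0, 0.2])]                 # one independent source
      print(np.sqrt(propagate_variance(inventory_diff, [10.0, 5.0, 14.9], sigmas)))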

  16. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  17. Precise calculation of the transmission coefficient of a potential barrier. Study of the error in the B K W approximation; Calcul exact du coefficient de transmission d'une barriere de potentiel. Etude de l'erreur de l'approximation B K W

    Energy Technology Data Exchange (ETDEWEB)

    Jamet, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1964-07-01

    Following on from work begun in a previous report, the author carries out, for a few examples, the calculation of the transmission coefficient T using exact methods. He then deduces from this the error of the B K W approximation. The calculations are carried out for values of T ranging down to 10^{-200}. Since the use of modern computers makes it possible to obtain values of T to eight decimal places in a few seconds, the practical advantage of the B K W approximation appears considerably reduced. The author also gives a method which may be used for the exact calculation of the energy levels of a potential well. (author)

  18. The Impact of Harmonics Calculation Methods on Power Quality Assessment in Wind Farms

    DEFF Research Database (Denmark)

    Kocewiak, Lukasz Hubert; Hjerrild, Jesper; Bak, Claus Leth

    2010-01-01

    Different methods of calculating harmonics from measurements obtained in offshore wind farms are shown in this paper. Appropriate data processing methods are suggested for harmonics of different origin and nature. Enhancements of the discrete Fourier transform application in order to reduce... measurement data processing errors are proposed and compared with classical methods. A comparison of signal processing methods for harmonic studies is presented, and application dependent on the harmonics' origin and nature is recommended. Certain aspects related to magnitude and phase calculation in stationary... measurement data are analysed and described. Qualitative indices of measurement data harmonic analysis to assess the calculation accuracy are suggested and used...
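
    The abstract does not spell out which DFT enhancements were used, but a standard example of this kind is windowing to reduce spectral leakage when the measured waveform is not exactly periodic in the analysis buffer. A generic illustration (all parameters are invented):

      import numpy as np

      def harmonic_magnitude_phase(signal, fs, f0, n_harmonics=5):
          """Estimate harmonic magnitudes/phases with a Hann window.
          Assumes the fundamental f0 (Hz) is known and the buffer is long."""
          n = signal.size
          window = np.hanning(n)
          spectrum = np.fft.rfft(signal * window)
          # Amplitude correction for the Hann window's coherent gain (0.5).
          spectrum *= 2.0 / (n * 0.5)
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          result = []
          for h in range(1, n_harmonics + 1):
              k = np.argmin(np.abs(freqs - h * f0))   # nearest bin to the harmonic
              result.append((h * f0, np.abs(spectrum[k]), np.angle(spectrum[k])))
          return result

      fs, f0 = 10000.0, 50.0
      t = np.arange(0, 0.2, 1 / fs)
      x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)
      for f, mag, ph in harmonic_magnitude_phase(x, fs, f0):
          print(f, round(mag, 3), round(ph, 3))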

  19. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    The article deals with the analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their mutual interaction causes a measurement error. The electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative positions of the sensors. Based on that, recommendations are proposed for electromagnetic field probe construction that minimize the sensor interaction and the measurement error.

  20. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  1. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors

  2. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
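
    The variance-based step can be illustrated with the standard Monte Carlo estimator of first-order Sobol' indices, here applied to a toy stand-in for the periodic-error model (the paper's actual interferometer model is not reproduced in this record):

      import numpy as np

      rng = np.random.default_rng(0)

      def first_order_sobol(f, n, k):
          """Saltelli-style Monte Carlo estimate of first-order Sobol' indices
          for f: (n, k) array -> (n,) array, inputs uniform on [0, 1]."""
          A, B = rng.random((n, k)), rng.random((n, k))
          fA, fB = f(A), f(B)
          var = np.var(np.concatenate([fA, fB]))
          S = np.empty(k)
          for i in range(k):
              ABi = A.copy()
              ABi[:, i] = B[:, i]              # replace column i with B's column
              S[i] = np.mean(fB * (f(ABi) - fA)) / var
          return S

      def toy_periodic_error(x):
          # Toy model: error amplitude grows with misalignment x0 and
          # non-orthogonality x1; x2 is a weak nuisance parameter.
          return np.sin(0.5 * x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 2]

      print(first_order_sobol(toy_periodic_error, n=100000, k=3))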

  3. Functional Independent Scaling Relation for ORR/OER Catalysts

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Dickens, Colin F.

    2016-01-01

    reactions. Here, we show that the oxygen-oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data... and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional...

  4. Improvements on the calculation of the epithermal disadvantage factor for thermal nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Aboustta, Mohamed A.; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1997-12-01

    The disadvantage factor takes into account the neutron flux variation through the fuel cell: in the fuel, the flux is depressed relative to its level in the moderator region. In order to avoid detailed calculations for each different set of cell dimensions, which would require the development of problem-dependent neutron cross-section libraries, a disadvantage factor based on a two-region equivalence theory was proposed for the EPRI-CELL code. However, it uses a rational approximation to the neutron escape probability to describe the neutron transport between cell regions. This approximation allows the use of the equivalence principles but introduces a non-negligible error which results in an underestimation of the cell neutron fluxes. A new treatment, presented in this work, remarkably improves the numerical calculation and reduces the error of the above-mentioned method. (author). 4 refs., 2 figs.
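
    For context, the rational approximation referred to is typically the Wigner form of the escape probability (a standard textbook result; the record does not state the exact rational form used in EPRI-CELL):

      \[
        P_{\mathrm{esc}} \;\approx\; \frac{1}{1 + \Sigma_t\,\bar{\ell}},
        \qquad \bar{\ell} = \frac{4V}{S},
      \]

    where \Sigma_t is the total cross section of the fuel lump and \bar{\ell} its mean chord length, given by the lump's volume V and surface area S.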

  5. Improvements on the calculation of the epithermal disadvantage factor for thermal nuclear reactors

    International Nuclear Information System (INIS)

    Aboustta, Mohamed A.; Martinez, Aquilino S.

    1997-01-01

    The disadvantage factor takes into account the neutron flux variation through the fuel cell: in the fuel, the flux is depressed relative to its level in the moderator region. In order to avoid detailed calculations for each different set of cell dimensions, which would require the development of problem-dependent neutron cross-section libraries, a disadvantage factor based on a two-region equivalence theory was proposed for the EPRI-CELL code. However, it uses a rational approximation to the neutron escape probability to describe the neutron transport between cell regions. This approximation allows the use of the equivalence principles but introduces a non-negligible error which results in an underestimation of the cell neutron fluxes. A new treatment, presented in this work, remarkably improves the numerical calculation and reduces the error of the above-mentioned method. (author). 4 refs., 2 figs

  6. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors into the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and in their standard deviation. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  7. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: basic concepts of metrology; characterization, standardization and calibration of measuring instruments; estimation of errors and uncertainty of single and multiple measurements; and modern probability-based methods of estimating measurement uncertainty. With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  8. Wind power forecast error smoothing within a wind farm

    International Nuclear Information System (INIS)

    Saleck, Nadja; Bremen, Lueder von

    2007-01-01

    Smoothing of wind power forecast errors is well known for large areas. Comparable effects within a single wind farm are investigated in this paper. A neural network was used to predict the power output of a wind farm in north-western Germany comprising 17 turbines. A comparison was made between an algorithm that fits mean wind and mean power data of the wind farm and a second algorithm that fits wind and power data individually for each turbine. The evaluation of root mean square errors (RMSE) shows that relatively small smoothing effects occur. However, it can be shown for this wind farm that individual calculations have the advantage that only a few turbines are needed to give better results than the use of mean data. Furthermore, different results occurred depending on whether predicted wind speeds are directly fitted to observed wind power, or are first fitted to observed wind speeds and then applied to a power curve. The first approach gives slightly better RMSE values, while the bias improves considerably

  9. A Monte Carlo error simulation applied to calibration-free X-ray diffraction phase analysis

    International Nuclear Information System (INIS)

    Braun, G.E.

    1986-01-01

    Quantitative phase analysis of a system of n phases can be effected without the need for calibration standards provided at least n different mixtures of these phases are available. A series of linear equations relating diffracted X-ray intensities, weight fractions and quantitation factors coupled with mass balance relationships can be solved for the unknown weight fractions and factors. Uncertainties associated with the measured X-ray intensities, owing to counting of random X-ray quanta, are used to estimate the errors in the calculated parameters utilizing a Monte Carlo simulation. The Monte Carlo approach can be generalized and applied to any quantitative X-ray diffraction phase analysis method. Two examples utilizing mixtures of CaCO 3 , Fe 2 O 3 and CaF 2 with an α-SiO 2 (quartz) internal standard illustrate the quantitative method and corresponding error analysis. One example is well conditioned; the other is poorly conditioned and, therefore, very sensitive to errors in the measured intensities. (orig.)
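
    The Monte Carlo step can be sketched generically: perturb the measured intensities within their counting (Poisson) uncertainties, re-solve the system for the weight fractions each time, and take the spread of the solutions as the error estimate. A schematic of the approach, not the paper's exact equations:

      import numpy as np

      rng = np.random.default_rng(1)

      def mc_phase_errors(K, I_meas, n_trials=2000):
          """K: matrix mapping weight fractions to intensities (n_obs x n_phases).
          I_meas: measured counts. Returns mean and std of least-squares solutions."""
          solutions = []
          for _ in range(n_trials):
              I_pert = rng.poisson(I_meas)               # counting-statistics noise
              w, *_ = np.linalg.lstsq(K, I_pert, rcond=None)
              solutions.append(w / w.sum())              # normalize: fractions sum to 1
          solutions = np.array(solutions)
          return solutions.mean(axis=0), solutions.std(axis=0)

      K = np.array([[1200.0, 300.0], [200.0, 900.0], [500.0, 500.0]])  # toy quantitation factors
      w_true = np.array([0.6, 0.4])
      I_meas = K @ w_true
      print(mc_phase_errors(K, I_meas))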

  10. Development of Calculation Algorithm for ECCS Kinematic Shock

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Chan; Yoon, Duk-Joo; Ha, Sang-Jun [KHNP-CRI, Daejeon (Korea, Republic of)

    2014-10-15

    The void fraction in the inverted U-pipes in front of the SI (Safety Injection) pumps impacts the piping system of the ECCS (Emergency Core Cooling System). This phenomenon is called the 'kinematic shock'. The purpose of this paper is to achieve a more exact calculation when the kinematic shock is computed by a simplified equation. The behavior of the void packets in the ECCS pipes is described by this simplified equation (also called the kinematic shock equation). The kinematic shock is defined as the depth of the total length of void clusters in the ECCS pipes when the void clusters are continually distributed along the vertical parts of the pipes. In this paper, the simplified equation is evaluated by comparing the calculation errors of different methods, and a more exact method of calculating the depth of the kinematic shock in the ECCS is achieved. The error of the kinematic shock calculation depends strongly on the calculation search gap and on the order of the Taylor expansion. In this study, to select a suitable search gap and calculation order, the differential root method, the secant method, and Taylor expansion forms are compared with one another.

  11. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading-order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading-order term. © de Gruyter 2011.
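
    As a reminder of the discretization being analyzed, a tau-leap step advances each reaction channel by a Poisson number of firings computed with propensities frozen over the step. A minimal sketch for a single decay channel (illustrative only, not the error estimator developed in the paper):

      import numpy as np

      rng = np.random.default_rng(2)

      def tau_leap_decay(x0, c, tau, t_end):
          """Tau-leap simulation of X -> 0 with propensity a(x) = c * x."""
          x, t, path = x0, 0.0, [(0.0, x0)]
          while t < t_end and x > 0:
              a = c * x                            # propensity, frozen over the leap
              x = max(x - rng.poisson(a * tau), 0) # Poisson number of firings
              t += tau
              path.append((t, x))
          return path

      path = tau_leap_decay(x0=1000, c=0.5, tau=0.05, t_end=5.0)
      print(path[-1])   # compare with the exact mean x0 * exp(-c * t_end) ~ 82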

  12. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Background: Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods: Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong-time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Results: Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong-time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong-time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion: Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.

  13. SOLGAS refined: A computerized thermodynamic equilibrium calculation tool

    Energy Technology Data Exchange (ETDEWEB)

    Trowbridge, L.D.; Leitnaker, J.M.

    1993-11-01

    SOLGAS, an early computer program for calculating equilibrium in a chemical system, has been made more user-friendly, and several 'bells and whistles' have been added. The necessity to include elemental species has been eliminated. The input of large numbers of starting conditions has been automated. A revised format for entering data simplifies and reduces chances for error. Calculated errors by SOLGAS are flagged, and several programming errors are corrected. Auxiliary programs are available to assemble and partially automate plotting of large amounts of data. Thermodynamic input data can be changed 'on line'. The program can be operated with or without a co-processor. Copies of the program, suitable for the IBM-PC or compatibles with at least 384K bytes of low RAM, are available from the authors.

  14. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, the influence of setup error is strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition.

  15. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate result data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user, are also noted in the work.
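
    As a rough illustration of the per-tool error-handling options the abstract describes, here is a hedged Python sketch of a wrapper around one integrated tool; the policy names, retry logic and API are assumptions for illustration, not the paper's implementation.

```python
from enum import Enum

class OnError(Enum):
    ABORT_BRANCH = "abort_branch"   # stop only the affected data-flow branch
    RETRY = "retry"                 # re-run the tool a limited number of times
    IGNORE = "ignore"               # pass a missing-data marker downstream

def run_tool(tool, data, policy=OnError.ABORT_BRANCH, retries=2):
    """Illustrative error-handling wrapper for one integrated tool.

    Mirrors the requirement stated above: a tool failure must not abort the
    whole workflow, only invalidate the dependent part of the data flow.
    """
    for attempt in range(retries + 1):
        try:
            return tool(data)
        except Exception as exc:  # "abnormal behavior" invalidating result data
            if policy is OnError.RETRY and attempt < retries:
                continue
            if policy is OnError.IGNORE:
                return None       # downstream subtasks see missing data
            raise RuntimeError(f"branch aborted: {exc}") from exc
```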

  16. Boundary integral method to calculate the sensitivity temperature error of microstructured fibre plasmonic sensors

    International Nuclear Information System (INIS)

    Esmaeilzadeh, Hamid; Arzi, Ezatollah; Légaré, François; Hassani, Alireza

    2013-01-01

    In this paper, using the boundary integral method (BIM), we simulate the effect of temperature fluctuation on the sensitivity of microstructured optical fibre (MOF) surface plasmon resonance (SPR) sensors. The final results indicate that, as the temperature increases, the refractometry sensitivity of our sensor decreases from 1300 nm/RIU at 0 °C to 1200 nm/RIU at 50 °C, leading to ∼7.7% sensitivity reduction and a sensitivity temperature error of 0.15% °C⁻¹ for this case. These results can be used for biosensing temperature-error adjustment in MOF SPR sensors, since biomaterials detection usually happens in this temperature range. Moreover, the signal-to-noise ratio (SNR) of our sensor decreases from 0.265 at 0 °C to 0.154 at 100 °C with an average reduction rate of ∼0.42% °C⁻¹. The results suggest that at lower temperatures the sensor has a higher SNR. (paper)

  17. Calculation of the Nucleon Axial Form Factor Using Staggered Lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Aaron S. [Fermilab; Hill, Richard J. [Perimeter Inst. Theor. Phys.; Kronfeld, Andreas S. [Fermilab; Li, Ruizi [Indiana U.; Simone, James N. [Fermilab

    2016-10-14

    The nucleon axial form factor is a dominant contribution to errors in neutrino oscillation studies. Lattice QCD calculations can help control theory errors by providing first-principles information on nucleon form factors. In these proceedings, we present preliminary results on a blinded calculation of $g_A$ and the axial form factor using HISQ staggered baryons with 2+1+1 flavors of sea quarks. Calculations are done using physical light quark masses and are absolutely normalized. We discuss fitting form factor data with the model-independent $z$ expansion parametrization.

  18. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    Science.gov (United States)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    Single photon satellite laser altimetry is based on the Geiger mode, which has the characteristics of a small spot, high repetition rate, etc. In this paper, a formula for the ranging error over sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrain. The experimental results show that ranging accuracy is not affected by the spot size under flat terrain conditions, but inclined terrain can influence the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error can reach 0.5 m, and the accuracy cannot meet the requirement when the slope is more than 70°. Monte Carlo simulation results show that a single photon laser altimetry satellite with a high repetition rate can improve the ranging accuracy under complex terrain conditions. In order to ensure repeated observation of the same point 25 times, according to the parameters of ICESat-2, we deduce the quantitative relation between the footprint size, footprint spacing, and the repetition frequency. The related conclusions can provide a reference for the design and demonstration of a domestic single photon laser altimetry satellite.
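
    A hedged Monte Carlo sketch of the slope effect described above, assuming a Gaussian photon distribution inside the footprint and a uniform slope; the geometry is deliberately simplified and all numbers are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def ranging_error_std(footprint_diameter_m, slope_deg, n=100_000):
    """Monte Carlo estimate of the ranging error (1 sigma) on a uniform slope.

    Assumes the detected photon's ground position is Gaussian within the
    footprint (diameter taken as ~4 sigma) and that the elevation error is
    the slope-induced height offset at that position -- a simplification
    of the geometry analysed in the paper.
    """
    sigma = footprint_diameter_m / 4.0
    offset = rng.normal(0.0, sigma, n)                 # along-slope offset [m]
    height_err = offset * np.tan(np.radians(slope_deg))
    return height_err.std()

for slope in (0, 12, 45, 70):
    print(f"slope {slope:2d} deg -> ranging error ~ "
          f"{ranging_error_std(10.0, slope):.2f} m")
```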

  19. An error taxonomy system for analysis of haemodialysis incidents.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system which assumed no specific specialty to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before or after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicines and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicines and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  20. Calculation of HTR-10 first criticality with MVP

    International Nuclear Information System (INIS)

    Xie Jiachun; Yao Lianying

    2015-01-01

    The first criticality of the 10 MW pebble-bed high temperature gas-cooled reactor-test module (HTR-10) was calculated with MVP. According to the characteristics of HTR-10, the Statistical Geometry Model of MVP was employed to describe the random arrangement of coated fuel particles in the fuel pebbles and the random distribution of the fuel and dummy pebbles in the core. Compared with previous results from VSOP and MCNP, the MVP results with the JENDL-3.3 library differed somewhat more, but the results with the ENDF/B-VI.8 library were very close. The relative errors were less than 0.7% compared with the first criticality experimental results. The study shows that MVP could be used in physics calculations for pebble-bed high temperature gas-cooled reactors. (authors)

  1. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Agnese Capodieci

    2017-10-01

    Full Text Available Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  2. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  3. Sporadic error probability due to alpha particles in dynamic memories of various technologies

    International Nuclear Information System (INIS)

    Edwards, D.G.

    1980-01-01

    The sensitivity of MOS memory components to errors induced by alpha particles is expected to increase with integration level. The soft error rate of a 65-kbit VMOS memory has been compared experimentally with that of three field-proven 16-kbit designs. The technological and design advantages of the VMOS RAM ensure an error rate which is lower than those of the 16-kbit memories. Calculation of the error probability for the 65-kbit RAM and comparison with the measurements show that for large duty cycles single particle hits lead to sensing errors and for small duty cycles cell errors caused by multiple hits predominate. (Auth.)

  4. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  5. Parameter definition for reactor physics calculation of Obrigheim KWO PWR type reactor using the Gels and Erebus codes

    International Nuclear Information System (INIS)

    Faya, A.G.; Nakata, H.; Rodrigues, V.G.; Oosterkamp, W.J.

    1974-01-01

    The main variables for diffusion theory calculations of the Obrigheim Reactor (KWO) using the EREBUS code were defined. The variables under consideration were: mesh spacing for the reactor description, time-step in the burn-up calculation, and the temperature of both the moderator and the fuel. The best mesh spacing and time-step were defined considering the relative deviations and the computer time expended in each case. It has been verified that the error involved in the mean fuel temperature calculation (1317 K as given by SIEMENS and 1028 K as calculated by Dr. Penndorf) does not substantially change the calculation results.

  6. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  7. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  8. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
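
    The error-injection statistics described above (Poisson-distributed gaps between bursts, Gaussian burst lengths) can be sketched as follows; the parameter values are illustrative assumptions, not those of the test apparatus.

```python
import numpy as np

rng = np.random.default_rng(2)

def error_burst_positions(n_bits, mean_gap, burst_mean, burst_std):
    """Generate bit-error positions with exponential (Poisson-process) gaps
    between bursts and Gaussian-distributed burst lengths."""
    errors = np.zeros(n_bits, dtype=bool)
    pos = 0
    while True:
        pos += int(rng.exponential(mean_gap))            # gap to next burst
        if pos >= n_bits:
            break
        burst = max(1, int(round(rng.normal(burst_mean, burst_std))))
        errors[pos:pos + burst] = True                   # corrupt a burst of bits
        pos += burst
    return np.flatnonzero(errors)

hits = error_burst_positions(1_000_000, mean_gap=50_000, burst_mean=8, burst_std=3)
print(f"{hits.size} corrupted bits in 1,000,000 simulated bits")
```

    Burstiness matters for such a test: a 255/223 Reed-Solomon code corrects up to 16 symbol errors per 255-symbol codeword, so clustered bit errors fall into fewer symbols than the same number of scattered errors.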

  9. Pilot Error in Air Carrier Mishaps: Longitudinal Trends Among 558 Reports, 1983–2002

    Science.gov (United States)

    Baker, Susan P.; Qiang, Yandong; Rebok, George W.; Li, Guohua

    2009-01-01

    Background Many interventions have been implemented in recent decades to reduce pilot error in flight operations. This study aims to identify longitudinal trends in the prevalence and patterns of pilot error and other factors in U.S. air carrier mishaps. Method National Transportation Safety Board investigation reports were examined for 558 air carrier mishaps during 1983–2002. Pilot errors and circumstances of mishaps were described and categorized. Rates were calculated per 10 million flights. Results The overall mishap rate remained fairly stable, but the proportion of mishaps involving pilot error decreased from 42% in 1983–87 to 25% in 1998–2002, a 40% reduction. The rate of mishaps related to poor decisions declined from 6.2 to 1.8 per 10 million flights, a 71% reduction; much of this decrease was due to a 76% reduction in poor decisions related to weather. Mishandling wind or runway conditions declined by 78%. The rate of mishaps involving poor crew interaction declined by 68%. Mishaps during takeoff declined by 70%, from 5.3 to 1.6 per 10 million flights. The latter reduction was offset by an increase in mishaps while the aircraft was standing, from 2.5 to 6.0 per 10 million flights, and during pushback, which increased from 0 to 3.1 per 10 million flights. Conclusions Reductions in pilot errors involving decision making and crew coordination are important trends that may reflect improvements in training and technological advances that facilitate good decisions. Mishaps while aircraft are standing and during push-back have increased and deserve special attention. PMID:18225771

  10. Shared dosimetry error in epidemiological dose-response analyses

    International Nuclear Information System (INIS)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-01-01

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of 'possible' dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when errors in dose estimates are ignored, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed
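
    In simplified form (effect modifiers omitted, notation assumed rather than taken from the paper), the linear ERR model and the mean-dose substitution the abstract discusses can be written as follows.

```latex
% Simplified sketch of the linear excess relative risk (ERR) model:
\[
  \lambda(d, z) \;=\; \lambda_0(z)\,\bigl(1 + \beta d\bigr)
\]
% With K dosimetry-system realizations d_1, ..., d_K for an individual,
% the mean dose substitutes for the unknown true dose in the likelihood:
\[
  \bar d \;=\; \frac{1}{K} \sum_{k=1}^{K} d_k
\]
% Per the abstract: scoring with \bar{d} remains unbiased (valid tests of
% \beta = 0), but the information matrix is biased for \beta \neq 0 and
% must be adjusted using the realizations themselves.
```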

  11. Characteristic parameters of drift chambers calculation

    International Nuclear Information System (INIS)

    Duran, I.; Martinez-Laso, L.

    1989-01-01

    We present here the methods we used to analyse the characteristic parameters of drift chambers. The algorithms to calculate the electric potential in any point for any drift chamber geometry are presented. We include the description of the programs used to calculate the electric field, the drift paths, the drift velocity and the drift time. The results and the errors are discussed. (Author) 7 refs

  12. Simple method of calculating the transient thermal performance of composite material and its applicable condition

    Institute of Scientific and Technical Information of China (English)

    张寅平; 梁新刚; 江忆; 狄洪发; 宁志军

    2000-01-01

    The degree of mixing of a composite material is defined, and the conditions under which the effective thermal diffusivity can be used for calculating the transient thermal performance of the composite material are studied. The analytical result shows that for a prescribed precision of temperature, there is a condition under which the transient temperature distribution in a composite material can be calculated by using the effective thermal diffusivity. As an illustration, for a composite material whose temperatures at both ends are constant, the condition is presented, and the factors affecting the relative error of the temperature calculated using the effective thermal diffusivity are discussed.

  13. To Error Problem Concerning Measuring Concentration of Carbon Oxide by Thermo-Chemical Sensors

    Directory of Open Access Journals (Sweden)

    V. I. Nazarov

    2007-01-01

    Full Text Available The paper presents additional errors in measuring the concentration of carbon oxide by thermo-chemical sensors. A number of analytical expressions for calculating the error data and corrections for deviations of environmental factors from admissible values have been obtained in the paper.

  14. Modelling vertical error in LiDAR-derived digital elevation models

    Science.gov (United States)

    Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.

    2010-01-01

    A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almeria province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856 ; p reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings

  15. Error signals in the subthalamic nucleus are related to post-error slowing in patients with Parkinson's disease

    NARCIS (Netherlands)

    Siegert, S.; Herrojo Ruiz, M.; Brücke, C.; Hueble, J.; Schneider, H.G.; Ullsperger, M.; Kühn, A.A.

    2014-01-01

    Error monitoring is essential for optimizing motor behavior. It has been linked to the medial frontal cortex, in particular to the anterior midcingulate cortex (aMCC). The aMCC subserves its performance-monitoring function in interaction with the basal ganglia (BG) circuits, as has been demonstrated

  16. Stepwise optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than ±0.001 eV. The global-chaos technique has also proved to be successful in the case of the nuclear systems ³H and ³He.

  17. Textbook Error: Short Circuiting on Electrochemical Cell

    Science.gov (United States)

    Bonicamp, Judith M.; Clark, Roy W.

    2007-01-01

    Short circuiting an electrochemical cell is an unreported but persistent error in electrochemistry textbooks. It is suggested that diagrams depicting a cell delivering usable current to a load be postponed, that the theory of open-circuit galvanic cells be explained, that the voltages be calculated from the tables of standard reduction potentials, and…

  18. Unintentional Pharmaceutical-Related Medication Errors Caused by Laypersons Reported to the Toxicological Information Centre in the Czech Republic.

    Science.gov (United States)

    Urban, Michal; Leššo, Roman; Pelclová, Daniela

    2016-07-01

    The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying the frequency, sources, reasons and consequences of medication errors by laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup of 0-5 years old (46%). The reasons for the medication errors involved leaflet misinterpretation and mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). The high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists. © 2016 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  19. Preventing treatment errors in radiotherapy by identifying and evaluating near misses and actual incidents

    International Nuclear Information System (INIS)

    Holmberg, Ola; McClean, Brendan

    2002-01-01

    The aim of this work is to study the effectiveness of a multilayered error prevention system by analysing both the near misses found at calculation-check stations and the actual treatment errors originating in the treatment preparation chain.

  20. Statistical error in simulations of Poisson processes: Example of diffusion in solids

    Science.gov (United States)

    Nilsson, Johan O.; Leetmaa, Mikael; Vekilova, Olga Yu.; Simak, Sergei I.; Skorodumova, Natalia V.

    2016-08-01

    Simulations of diffusion in solids often produce poor statistics of diffusion events. We present an analytical expression for the statistical error in ion conductivity obtained in such simulations. The error expression is not restricted to any computational method in particular, but is valid in the context of simulation of Poisson processes in general. This analytical error expression is verified numerically for the case of Gd-doped ceria by running a large number of kinetic Monte Carlo calculations.
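
    The generic scaling behind such an error expression can be checked numerically. The sketch below (with an illustrative rate and observation times, not the paper's system) confirms that the relative error of a rate estimated from N Poisson events behaves as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(3)

# For a Poisson counting process, the relative statistical error of a rate
# estimated from N observed events scales as 1/sqrt(N) -- the generic
# property that the paper's error expression specializes to event counting.
true_rate = 2.0  # events per unit time (illustrative)
for T in (10, 100, 1_000, 10_000):
    counts = rng.poisson(true_rate * T, size=2000)   # repeated simulations
    rel_err = (counts / T).std() / true_rate         # empirical relative error
    print(f"T={T:6d}  mean N={counts.mean():8.1f}  empirical {rel_err:.4f}"
          f"  vs 1/sqrt(N)={1 / np.sqrt(true_rate * T):.4f}")
```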

  1. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error are also different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
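
    A toy numerical check of the squared-error BVD the abstract builds on, with an artificial "model" whose bias and variance are set by hand; all values here are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Check of the squared-error bias-variance decomposition at one evaluation
# point: E[(y - f_hat)^2] = bias^2 + variance + noise, with f_hat
# independent of the observation noise.
f_true, noise_sd, n_trials = 3.0, 0.5, 200_000

y = f_true + rng.normal(0, noise_sd, n_trials)   # noisy observations
f_hat = 3.2 + rng.normal(0, 0.3, n_trials)       # biased, variable predictions

mse = np.mean((y - f_hat) ** 2)
bias2 = (f_hat.mean() - f_true) ** 2
var = f_hat.var()
noise = noise_sd ** 2
print(f"MSE = {mse:.4f}   bias^2 + var + noise = {bias2 + var + noise:.4f}")
```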

  2. Voice recognition versus transcriptionist: error rates and productivity in MRI reporting.

    Science.gov (United States)

    Strahan, Rodney H; Schneider-Kolsky, Michal E

    2010-10-01

    Despite the frequent introduction of voice recognition (VR) into radiology departments, little evidence exists about its impact on workflow, error rates and costs. We designed a study to compare typographical errors, turnaround times (TAT) from reported to verified, and productivity for VR-generated versus transcriptionist-generated reports in MRI. Fifty MRI reports generated by VR and 50 finalized MRI reports generated by the transcriptionist, from two radiologists, were sampled retrospectively. Two hundred reports were scrutinised for typographical errors and the average TAT from dictated to final approval. To assess productivity, the average number of MRI reports per hour for one of the radiologists was calculated using data from extra weekend reporting sessions. Forty-two percent and 30% of the finalized VR reports for the two radiologists investigated contained errors, whereas only 6% and 8% of the transcriptionist-generated reports did. The average TAT for VR was 0 h; for the transcriptionist reports, TAT was 89 and 38.9 h. Productivity was calculated at 8.6 MRI reports per hour using VR and 13.3 MRI reports using the transcriptionist, representing a 55% increase in productivity. Our results demonstrate that VR is not an effective method of generating reports for MRI. Ideally, we would have the report error rate and productivity of a transcriptionist and the TAT of VR. © 2010 The Authors. Journal of Medical Imaging and Radiation Oncology © 2010 The Royal Australian and New Zealand College of Radiologists.

  3. Calculation of coupling factor for double-period accelerating structure

    International Nuclear Information System (INIS)

    Bian Xiaohao; Chen Huaibi; Zheng Shuxin

    2005-01-01

    In the design of a linear accelerating structure, the coupling factor between cavities is a crucial parameter, and errors in the coupling factor account for most of the error in the electric or magnetic field. To design the coupling iris accurately, accurate calculation of the coupling factor is essential, and numerical simulation is now widely used for this purpose. Using the MAFIA code, two methods have been applied to calculate the dispersion characteristics of the single-period structure: one simulates the traveling wave mode with periodic boundary conditions; the other simulates the standing wave mode with electrical boundary conditions. In this work, the authors extend the two methods to calculate the coupling factor of a double-period accelerating structure. The results for both methods are very similar and agree with measurement to within 15%. (authors)

  4. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  5. Reliability and error analysis on xenon/CT CBF

    International Nuclear Information System (INIS)

    Zhang, Z.

    2000-01-01

    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors, such as CT noise, motion artifacts, lower percentage of xenon supply, lower tissue enhancements, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four values of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments, and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but reduces the signal/noise ratio. Lower xenon enhancement lowers the flow values in all areas of the brain. (author)

  6. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    Science.gov (United States)

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  7. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    Science.gov (United States)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relational database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. Simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, with an axial-mode helical antenna as feed, is further conducted to verify the correctness of the proposed method. Finally, the influence rules of surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.
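
    A minimal sketch of building a surface-error map from a few normalized Zernike terms, in the spirit of the description above; the term selection, Noll-style normalization and coefficient values are assumptions for illustration.

```python
import numpy as np

def zernike_surface(rho, theta, coeffs):
    """Surface-error map over the unit-radius aperture from named
    low-order Zernike terms (Noll normalization)."""
    terms = {
        "defocus": np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0": np.sqrt(6) * rho**2 * np.cos(2 * theta),
        "coma_x":  np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta),
    }
    return sum(c * terms[name] for name, c in coeffs.items())

r = np.linspace(0, 1, 64)
t = np.linspace(0, 2 * np.pi, 128)
rho, theta = np.meshgrid(r, t)
# Illustrative coefficients in metres: 0.2 mm defocus, 0.1 mm coma.
err = zernike_surface(rho, theta, {"defocus": 0.2e-3, "coma_x": 0.1e-3})
print(f"grid-RMS surface error: {err.std() * 1e3:.3f} mm")  # not area-weighted
```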

  8. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    Science.gov (United States)

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  9. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Xingming Sun

    2015-07-01

    Full Text Available Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  10. Recommendations to avoid gross errors of dose in radiotherapeutic treatments

    International Nuclear Information System (INIS)

    Souza, Cleber Nogueira de; Monti, Carlos Roberto; Sibata, Claudio Hissao

    2001-01-01

    Human mistakes are an important source of errors in radiotherapy and may occur at every step of radiotherapy planning and treatment. To reduce this level of uncertainty, several specialized organizations have recommended a comprehensive quality assurance program. In Brazil, the requirement for these programs has been strongly stressed, and most radiotherapy services have pursued this goal regarding radiation units and dosimetry equipment, as well as the verification of the calculations of the patient's dose and the revision of the plan charts. As a contribution to the improvement of quality control, we present some recommendations to avoid treatment failure due to errors in the delivered dose, such as redundant checks of the manual or computer calculations, weekly checks of the total dose for each patient, and prevention of inadvertent access to any safety system of the equipment by any staff member who is only supposed to operate the machine. Moreover, the use of a computerized treatment record and verification system should be considered in order to eliminate errors due to incorrect selection of the treatment parameters on a daily basis. We also report four radiation incidents involving patient injuries that occurred throughout the world, along with some gross errors of dose. (author)

  11. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    Science.gov (United States)

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
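
    The before/after comparison above can be reproduced with a standard chi-square test on the reported counts. The sketch assumes SciPy is available; Yates continuity correction is disabled, since the uncorrected statistic matches the reported P value.

```python
from scipy.stats import chi2_contingency

# Specimen labelling errors before vs. after the new SSC administration
# paradigm, from the reported counts: 19/4760 and 8/5065.
table = [[19, 4760 - 19],
         [8, 5065 - 8]]
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p ~ 0.022, as reported
```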

  12. Relative efficiency calculation of a HPGe detector using MCNPX code

    International Nuclear Information System (INIS)

    Medeiros, Marcos P.C.; Rebello, Wilson F.; Lopes, Jose M.; Silva, Ademir X.

    2015-01-01

    High-purity germanium (HPGe) detectors are mandatory tools for gamma spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a ⁶⁰Co source positioned 25 cm from the detector. In this study, we used the MCNPX code to simulate an HPGe detector (Canberra GC3020), from the Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a ⁶⁰Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and comparison with the detector calibration curve from the Genie2000™ software, also serving as a reference for future studies. (author)
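
    A hedged sketch of the relative-efficiency arithmetic underlying this convention, using the standard 1.2×10⁻³ absolute 1.33 MeV peak efficiency of the reference NaI(Tl) crystal at 25 cm; the count, time and activity inputs are illustrative, not the paper's data.

```python
# Relative efficiency of an HPGe detector per the usual convention:
# absolute 1.33 MeV full-energy-peak efficiency divided by 1.2e-3,
# the reference value for a 7.6 cm x 7.6 cm NaI(Tl) crystal at 25 cm.
net_peak_counts = 1.5e5   # counts in the 1.33 MeV peak (illustrative)
live_time_s = 3600.0
activity_bq = 1.0e5       # 60Co source activity (illustrative)
gamma_yield = 0.9998      # emission probability of the 1.33 MeV gamma

abs_eff = net_peak_counts / (activity_bq * gamma_yield * live_time_s)
rel_eff = abs_eff / 1.2e-3
print(f"absolute eff = {abs_eff:.2e}, relative eff = {rel_eff:.1%}")
```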

  13. Correlation of errors in the Monte Carlo fission source and the fission matrix fundamental-mode eigenvector

    International Nuclear Information System (INIS)

    Dufek, Jan; Holst, Gustaf

    2016-01-01

    Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on the coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation, and use its fundamental-mode eigenvector for various tasks. The methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh be measured in terms of the value of the largest element in the tallied fission matrix, as that accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix rises
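
    For context, the fundamental-mode eigenvector of a tallied fission matrix is typically obtained by power iteration. Below is a hedged sketch on a tiny synthetic matrix; the matrix values are illustrative, and real tallied matrices carry statistical noise.

```python
import numpy as np

def fundamental_eigenvector(F, tol=1e-10, max_iter=10_000):
    """Power iteration for the fundamental-mode eigenvector of a fission
    matrix F (F[i, j] ~ neutrons born in mesh cell i per fission source
    neutron in cell j)."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])   # flat initial source guess
    for _ in range(max_iter):
        s_new = F @ s
        s_new /= s_new.sum()                    # normalize (k factored out)
        if np.abs(s_new - s).max() < tol:
            break
        s = s_new
    k_eff = (F @ s).sum() / s.sum()             # dominant eigenvalue estimate
    return s, k_eff

# Tiny synthetic example: 4-cell slab-like coupling.
F = np.array([[0.6, 0.2, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.2, 0.6]])
s, k = fundamental_eigenvector(F)
print("fundamental-mode source:", np.round(s, 3), " k_eff ~", round(k, 4))
```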

  14. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in the mortality rate of patients and to challenges such as prolonged inpatient periods in hospitals and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. To evaluate the attitudes of nurses and midwives toward the causes and rates of medical errors reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done through the Goldstone 2001 revised questionnaire, and SPSS 11.5 software was used for data analysis. Descriptive and inferential statistics were used: descriptive statistics (mean, standard deviation and relative frequency distribution) were calculated and the results arranged as tables and charts, and the chi-square test was used for inferential analysis. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three to four years of work experience, while the lowest average was related to those with one to two years of work experience. The highest average of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names of different drugs and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician's orders, drug name similarity with other drugs, nurse fatigue and damaged label or packaging of the drug, respectively. Head nurse feedback, peer

  15. Preventing treatment errors in radiotherapy by identifying and evaluating near misses and actual incidents

    LENUS (Irish Health Repository)

    Holmberg, Ola

    2002-06-01

    When preparing radiation treatment, the prescribed dose and irradiation geometry must be translated into physical machine parameters. An error in the calculations or machine settings can negatively affect the intended treatment outcome. Analysing incidents originating in the treatment preparation chain makes it possible to find weak links and prevent treatment errors. The aim of this work is to study the effectiveness of a multilayered error prevention system by analysing both near misses and actual treatment errors.

  16. Investigating the Factors Affecting the Occurrence and Reporting of Medication Errors from the Viewpoint of Nurses in Sina Hospital, Tabriz, Iran

    Directory of Open Access Journals (Sweden)

    Massumeh gholizadeh

    2016-09-01

    Full Text Available Background and objectives: Medication errors can cause serious problems for patients and the health system. The initial consequences of medication errors include increased duration of hospitalization and costs. The aim of this study was to determine the reasons for medication errors and the barriers to error reporting from nurses' viewpoints. Material and Methods: A cross-sectional descriptive study was conducted in 2013. The study population included all of the nurses working in Tabriz Sina hospital; the sample of 124 was obtained by the census method. The data collection tool was a questionnaire, and data were analyzed using SPSS version 20. Results: In this study, from the viewpoint of nurses, the most important reasons for medication errors included wrong infusion speed, illegible medication orders, work-related fatigue, ambient noise and staff shortages. Regarding barriers to error reporting, the most important factors were the directors' emphasis on the person regardless of other factors involved in medication errors, and the lack of a clear definition of medication errors. Conclusion: Given the importance of ensuring patient safety, the following corrections can lead to improvement of hospital safety: establishing an effective system for reporting and recording errors, and minimizing barriers to reporting by establishing a positive relationship between managers and staff and responding positively to reported errors. To reduce medication errors, establishing drug-information training classes for nurses and continually evaluating personnel's drug knowledge on the ward are recommended.

  17. Propagation of errors from the sensitivity image in list mode reconstruction

    International Nuclear Information System (INIS)

    Qi, Jinyi; Huesman, Ronald H.

    2003-01-01

    List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all LORs is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity depends on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we theoretically analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed point condition of the list mode reconstruction. The non-negativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and the first order Taylor series approximation, we derive a closed form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result provides insights on what kind of error might be allowable in the sensitivity image. Computer simulations show that the theoretical results are in good agreement with the measured results.
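    The effect can be demonstrated numerically with a toy system. The sketch below (hypothetical forward model; not the paper's derivation or any real scanner geometry) runs MLEM to its fixed point with an exact and with a perturbed sensitivity image and compares the resulting image errors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 6 LORs x 4 voxels forward model A, true activity x_true.
A = rng.uniform(0.1, 1.0, size=(6, 4))
x_true = np.array([1.0, 2.0, 1.5, 0.5])
y = A @ x_true                        # noise-free data, to isolate the effect

def mlem(A, y, sens, n_iter=500):
    """Plain MLEM iterations with an externally supplied sensitivity image."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

sens_exact = A.sum(axis=0)            # exact sensitivity: backprojection of ones
sens_approx = sens_exact * (1 + 0.05 * rng.standard_normal(4))   # ~5% error

x_exact = mlem(A, y, sens_exact)
x_approx = mlem(A, y, sens_approx)

# The fixed-point analysis predicts that a small relative error in the
# sensitivity maps into a comparable relative error in the image.
print("relative image error:", (x_approx - x_exact) / x_exact)
print("relative sens  error:", (sens_approx - sens_exact) / sens_exact)
```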

  18. Comparison of Conductor-Temperature Calculations Based on Different Radial-Position-Temperature Detections for High-Voltage Power Cable

    Directory of Open Access Journals (Sweden)

    Lin Yang

    2018-01-01

    Full Text Available In this paper, the calculation of the conductor temperature is related to the temperature-sensor position in high-voltage power cables, and four thermal circuits, based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, are established to calculate the conductor temperature. To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath were built up, and thermocouples were placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures of the four positions. In the measurements, six current-heating tests were carried out under three laying environments: duct, water, and backfilled soil. The errors of both the conductor-temperature calculation and the simulation based on the insulation-shield temperature were significantly smaller than the others under all laying environments. It is the uncertainty of the thermal resistivity, together with the difference in the initial temperature of each radial position caused by solar radiation, that led to these results. The thermal capacitance of the air has little impact on the errors. The thermal resistance of the air gap is the largest error source. Balancing temperature-estimation accuracy against insulation-damage risk, the waterproof compound is the recommended sensor position to improve the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a suitable sensor position besides the waterproof compound.
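    A thermal-circuit back-calculation of this kind reduces, in steady state, to adding the temperature rises across the thermal resistances between the conductor and the sensor. The sketch below uses assumed losses and resistances (not values from the paper) and the common IEC 60287-style convention of injecting dielectric losses at the middle of the insulation:

```python
# Steady-state thermal-ladder sketch with assumed values (not the paper's):
# estimate the conductor temperature from a measured aluminum-sheath
# temperature by adding the temperature rises across the thermal
# resistances that lie between the conductor and the sensor position.
W_c = 30.0      # conductor (Joule) losses per unit length, W/m  (assumed)
W_d = 2.0       # dielectric losses per unit length, W/m         (assumed)
T1 = 0.45       # thermal resistance, conductor -> insulation shield, K*m/W
T_gap = 0.10    # thermal resistance across the waterproof-compound/air gap

T_sheath = 55.0 # measured aluminum-sheath temperature, deg C    (assumed)

# Dielectric losses are conventionally injected at the middle of the
# insulation, hence the 0.5 factor on T1 for that loss component.
T_conductor = T_sheath + W_c * (T1 + T_gap) + W_d * (0.5 * T1 + T_gap)
print(f"estimated conductor temperature: {T_conductor:.1f} deg C")
```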

  19. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
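    The estimation step can be sketched with an off-the-shelf Gaussian process regressor. The example below uses synthetic error-rate data; the kernel choice and scales are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic record of observed error rates drifting slowly in time.
t = np.linspace(0.0, 10.0, 40)[:, None]        # time, arbitrary units
true_rate = 0.01 + 0.004 * np.sin(0.6 * t).ravel()
observed = true_rate + 0.001 * rng.standard_normal(t.shape[0])

# RBF kernel for the slow drift plus a white-noise term for shot-to-shot
# fluctuations; fit on past data, then predict the current error rate.
gp = GaussianProcessRegressor(RBF(length_scale=2.0) + WhiteKernel(1e-6))
gp.fit(t, observed)

mean, std = gp.predict(np.array([[10.5]]), return_std=True)
print(f"predicted error rate now: {mean[0]:.4f} +/- {std[0]:.4f}")
```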

  20. Classification of Error Related Brain Activity in an Auditory Identification Task with Conditions of Varying Complexity

    Science.gov (United States)

    Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.

    2017-11-01

    The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during the monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs), revealing complex cerebral responses to deviant sensory stimuli. The development of accurate error detection systems is of great importance, both for practical applications and for investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations in the processing of erroneous sensory information for each level of complexity, we employ Support Vector Machine (SVM) classifiers with various learning methods and kernels, using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features, we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers, with mean accuracies of 93% and 91%, respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
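    The SVM-plus-SFS pipeline maps directly onto standard tooling. A minimal sketch (synthetic stand-in features and labels; the real study uses amplitudes from characteristic ERP time windows) using scikit-learn's sequential forward selector:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Stand-in data: 120 trials x 30 time-windowed ERP features; binary labels
# (correct vs. error response). Two features are made genuinely informative.
X = rng.standard_normal((120, 30))
y = (X[:, 3] + X[:, 7] + 0.5 * rng.standard_normal(120) > 0).astype(int)
X[:, 3] += y

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
sfs = SequentialFeatureSelector(svm, n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)

acc = cross_val_score(svm, sfs.transform(X), y, cv=5).mean()
print("selected features:", np.flatnonzero(sfs.get_support()))
print(f"cross-validated accuracy: {acc:.2f}")
```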

  1. Evaluation of rotational set-up errors in patients with thoracic neoplasms

    International Nuclear Information System (INIS)

    Wang Yanyang; Fu Xiaolong; Xia Bing; Fan Min; Yang Huanjun; Ren Jun; Xu Zhiyong; Jiang Guoliang

    2010-01-01

    Objective: To assess the rotational set-up errors in patients with thoracic neoplasms. Methods: 224 kilovoltage cone-beam computed tomography (KVCBCT) scans from 20 thoracic tumor patients were evaluated retrospectively. All these patients were involved in the research 'Evaluation of the residual set-up error for online kilovoltage cone-beam CT guided thoracic tumor radiation'. Rotational set-up errors, including pitch, roll and yaw, were calculated by aligning the KVCBCT with the planning CT, using the semi-automatic alignment method. Results: The average rotational set-up errors were -0.28° ± 1.52°, 0.21° ± 0.91° and 0.27° ± 0.78° about the left-right, superior-inferior and anterior-posterior axes, respectively. The maximal rotational errors of pitch, roll and yaw were 3.5°, 2.7° and 2.2°, respectively. After correction for translational set-up errors, no statistically significant changes in rotational error were observed. Conclusions: The rotational set-up errors in patients with thoracic neoplasms were all small in magnitude. Rotational errors may not change after correction for translational set-up errors alone, which should be evaluated in a larger sample in the future. (authors)

  2. Calculation of resonance integral for fuel cluster

    International Nuclear Information System (INIS)

    Remsak, S.

    1969-01-01

    The procedure for calculating the shielding correction, formulated in the previous paper [6], was extended and applied to a cluster of cylindrical rods. The same analytical method as in the previous paper was applied. A combination of the Gauss method with the method of Almgren and Porn, used for solving the same type of integral, was employed to calculate the geometry functions. The CLUSTER code was written for the ZUSE-Z-23 computer to calculate the shielding corrections for pairs of fuel rods in the cluster. Computing time for one pair of fuel rods depends on the number of closely placed rods; for two closely placed rods it is about 3 hours. Calculations were done for clusters containing 7 and 19 UO2 rods. Results show that the calculated values of the resonance integrals are somewhat higher than the values obtained by the Helstrand empirical formula. Taking into account the results for two rods from the previous paper, it can be noted that the calculated and empirical values for clusters with 2 and 7 rods are in agreement, since the deviations do not exceed the limits of experimental error (±2%). In the case of the larger cluster with 19 rods, the deviations are higher than the experimental error. Most probably the calculated values exceed the experimental ones because in this paper the shielding correction is calculated only in the region up to 1 keV.

  3. Enamel dose calculation by electron paramagnetic resonance spectral simulation technique

    International Nuclear Information System (INIS)

    Dong Guofu; Cong Jianbo; Guo Linchao; Ning Jing; Xian Hong; Wang Changzhen; Wu Ke

    2011-01-01

    Objective: To optimize enamel electron paramagnetic resonance (EPR) spectral processing by using an EPR spectral simulation method, in order to improve the accuracy of enamel EPR dosimetry and reduce operator-dependent error. Methods: Multi-component superimposed EPR powder spectral simulation software was developed to simulate EPR spectrum models of the background signal (BS) and the radiation-induced signal (RS) of irradiated enamel, respectively. The RS was extracted from the multi-component superimposed spectrum of irradiated enamel and its amplitude was calculated. A dose-response curve was then established for calculating the doses of a group of enamel samples. The estimated doses were compared with those calculated by the traditional method. Results: The BS was simulated as a powder spectrum of Gaussian line shape with the following spectrum parameters: g = 2.0035 and Hpp = 0.65-1.1 mT. The RS was also simulated as a powder spectrum, but with axisymmetric spectrum characteristics. The spectrum parameters of the RS were g⊥ = 2.0018, g∥ = 1.9965 and Hpp = 0.335-0.4 mT. The amplitude of the RS had a linear response to radiation dose, with the regression equation y = 240.74x + 76724 (R² = 0.9947). The expected relative error of dose estimation was 0.13. Conclusions: The EPR simulation method improved the accuracy and reliability of enamel EPR dose estimation. (authors)
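    Once the RS amplitude has been extracted by spectral fitting, dose estimation is just an inversion of the reported linear dose-response curve. A minimal sketch (the amplitude value and dose units are hypothetical; the slope and intercept are those quoted in the abstract):

```python
# Inverting the reported dose-response regression y = 240.74*x + 76724
# (y: RS amplitude from spectral simulation, x: dose) to estimate dose.
SLOPE, INTERCEPT = 240.74, 76724.0

def dose_from_amplitude(rs_amplitude):
    """Dose in the units of the calibration; the amplitude passed in
    below is hypothetical, standing in for a fitted RS amplitude."""
    return (rs_amplitude - INTERCEPT) / SLOPE

print(f"estimated dose: {dose_from_amplitude(197094.0):.1f}")
```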

  4. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, no closed models or mathematical procedures are known that allow a dependable prediction of software reliability. This work presents a method that predicts the residual number of critical errors in software. Conventional models lack this ability, and at present there are no methods that forecast critical errors. The new method shows that the residual number of critical errors in software systems can be estimated by combining prediction models, the ratio of critical errors, and the total error number. Subsequently, the expected-value function for critical errors at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes: detection and correction.

  5. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 tracking images acquired before and during treatment, the maximum systematic errors of interfraction and intrafraction motions were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motions were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and has less impact owing to the stability of organ movement under DIBH. The systematic error is likewise about half of the random error, because the high reproducibility of a modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
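    The systematic and random components are conventionally separated as follows: the population systematic error Σ is the standard deviation of the per-patient mean deviations, and the population random error σ is the root mean square of the per-patient standard deviations. A minimal sketch with hypothetical per-fraction deviations (not the study's data):

```python
import numpy as np

# Hypothetical per-fraction setup deviations (mm, one direction) for three
# patients; the study derives such values from 3D surface-tracking images.
deviations = {
    "p1": [0.4, 0.9, -0.2, 0.6],
    "p2": [-0.5, -0.1, -0.8, -0.3],
    "p3": [0.2, 0.5, 0.1, 0.7],
}

means = np.array([np.mean(v) for v in deviations.values()])
sds = np.array([np.std(v, ddof=1) for v in deviations.values()])

M = means.mean()                   # overall mean (group systematic offset)
Sigma = means.std(ddof=1)          # systematic error: SD of patient means
sigma = np.sqrt(np.mean(sds**2))   # random error: RMS of patient SDs

print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
```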

  6. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  7. Exchange–correlation errors at harmonic and anharmonic orders

    Indian Academy of Sciences (India)

    As an aid towards improving the treatment of exchange and correlation effects in electronic structure calculations, it is desirable to have a clear picture of the errors introduced by currently popular approximate exchange–correlation functionals. We have performed ab initio density functional theory and density functional ...

  8. The error analysis of the reverse saturation current of the diode in the modeling of photovoltaic modules

    International Nuclear Information System (INIS)

    Wang, Gang; Zhao, Ke; Qiu, Tian; Yang, Xinsheng; Zhang, Yong; Zhao, Yong

    2016-01-01

    In the modeling and simulation of photovoltaic modules, especially in calculating the reverse saturation current of the diode, the series and parallel resistances are often neglected, causing certain errors. We analyzed the errors at the open circuit point and proposed an iterative algorithm to calculate modified values of the reverse saturation current, series resistance and parallel resistance of the diode, in order to reduce the errors. Assuming independent irradiation and temperature effects, the irradiation dependence and the temperature dependence of the open circuit voltage were introduced to obtain a modified formula for the open circuit voltage under any condition. Experimental results show that this modified formula has high accuracy, even at irradiances as low as 40 W/m². The errors in the open circuit voltage were significantly reduced, indicating that this modified model is suitable for simulations of photovoltaic modules. - Highlights: • We propose a new method for modeling PV modules with higher accuracy. • The errors of the open circuit voltage are significantly reduced. • I_o under any condition is calculated.
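    The flavour of such an iteration can be sketched with the standard single-diode model. The fixed-point loop below keeps the series and parallel resistances in the short-circuit and open-circuit conditions; all module parameters are assumed datasheet-style values, not the paper's:

```python
import numpy as np

# Single-diode model with assumed datasheet-style values (not the paper's).
q, k = 1.602e-19, 1.381e-23
T = 298.15
Ns, n = 60, 1.3                    # series cells, ideality factor (assumed)
Vt = Ns * n * k * T / q            # modified thermal voltage of the module

Isc, Voc = 8.21, 32.9              # short-circuit current (A), open-circuit V
Rs, Rp = 0.35, 300.0               # series / parallel resistance (assumed)

# Naive starting value: resistances neglected.
Io = Isc / np.expm1(Voc / Vt)

# Fixed-point refinement keeping Rs and Rp:
#   short circuit: Iph = Isc*(1 + Rs/Rp) + Io*expm1(Isc*Rs/Vt)
#   open circuit:  Io  = (Iph - Voc/Rp) / expm1(Voc/Vt)
for _ in range(50):
    Iph = Isc * (1 + Rs / Rp) + Io * np.expm1(Isc * Rs / Vt)
    Io = (Iph - Voc / Rp) / np.expm1(Voc / Vt)

print(f"corrected reverse saturation current: {Io:.3e} A")
```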

  9. Transmission efficiency of neutron guide tube with alignment errors

    International Nuclear Information System (INIS)

    Kawabata, Yuji; Suzuki, Masatoshi; Sakamoto, Masanobu; Harami, Taikan; Takahashi, Hidetake; Onishi, Nobuaki

    1990-01-01

    Experimental studies on the neutron transmission efficiencies of neutron guide tubes were carried out using thermal neutrons from the JAERI electron linac. A large-scale neutron guide tube facility has been planned for the reconstructed JRR-3 at JAERI. The transmission efficiencies of a 1/10-scale neutron guide tube, 2 mm wide and 1.8 m long, with and without appreciable alignment errors were studied to evaluate the efficiencies of the planned tubes. Calculated results from the Neutron Guide Tube Analysis Code 'NEUGT' were also assessed against these neutron experiments. The experimental results agree well with the calculated results from 'NEUGT', even with alignment errors. From this experimental study, the efficiency of the planned neutron guide tubes is estimated to be good enough for the neutron beam experiments. (author)

  10. Calculations of magnetic field errors caused by mechanical accuracy at infra-red undulator construction

    International Nuclear Information System (INIS)

    Matyushevskij, E.A.; Morozov, N.A.; Syresin, E.M.

    2005-01-01

    At the Joint Institute for Nuclear Research (Dubna), an electromagnetic undulator with a maximal magnetic field of 1.2 T and a 40 cm period is under development. Computer models for the undulator magnet system were realized on the basis of the POISSON and RADIA codes. The undulator magnetic field imperfections due to design errors were simulated with these models.

  11. First-row diatomics: Calculation of the geometry and energetics using self-consistent gradient-functional approximations

    International Nuclear Information System (INIS)

    Kutzler, F.W.; Painter, G.S.

    1992-01-01

    A fully self-consistent series of nonlocal (gradient) density-functional calculations has been carried out using the augmented-Gaussian-orbital method to determine the magnitude of gradient corrections to the potential-energy curves of the first-row diatomics, Li2 through F2. Both the Langreth-Mehl-Hu and the Perdew-Wang gradient-density functionals were used in calculations of the binding energy, bond length, and vibrational frequency for each dimer. Comparison with results obtained in the local-spin-density approximation (LSDA) using the Vosko-Wilk-Nusair functional, and with experiment, reveals that bond lengths and vibrational frequencies are rather insensitive to details of the gradient functionals, including self-consistency effects, but the gradient corrections reduce the overbinding commonly observed in LSDA calculations of first-row diatomics (with the exception of Li2, the gradient-functional binding-energy error is only 50-12% of the LSDA error). The improved binding energies result from a large differential energy lowering, which occurs in open-shell atoms relative to the diatomics. The stabilization of the atom arises from the use of nonspherical charge and spin densities in the gradient-functional calculations. This stabilization is negligibly small in LSDA calculations performed with nonspherical densities.

  12. Monte Carlo dose calculations for phantoms with hip prostheses

    International Nuclear Information System (INIS)

    Bazalova, M; Verhaegen, F; Coolens, C; Childs, P; Cury, F; Beaulieu, L

    2008-01-01

    Computed tomography (CT) images of patients with hip prostheses are severely degraded by metal streaking artefacts. The low image quality makes organ contouring more difficult and can result in large dose calculation errors when Monte Carlo (MC) techniques are used. In this work, the extent of streaking artefacts produced by three common hip prosthesis materials (Ti-alloy, stainless steel, and Co-Cr-Mo alloy) was studied. The prostheses were tested in a hypothetical prostate treatment with five 18 MV photon beams. The dose distributions for unilateral and bilateral prosthesis phantoms were calculated with the EGSnrc/DOSXYZnrc MC code. This was done in three phantom geometries: in the exact geometry, in the original CT geometry, and in an artefact-corrected geometry. The artefact-corrected geometry was created using a modified filtered back-projection correction technique. It was found that unilateral prosthesis phantoms do not show large dose calculation errors, as long as the beams miss the artefact-affected volume. This is possible to achieve in the case of unilateral prosthesis phantoms (except for the Co-Cr-Mo prosthesis which gives a 3% error) but not in the case of bilateral prosthesis phantoms. The largest dose discrepancies were obtained for the bilateral Co-Cr-Mo hip prosthesis phantom, up to 11% in some voxels within the prostate. The artefact correction algorithm worked well for all phantoms and resulted in dose calculation errors below 2%. In conclusion, a MC treatment plan should include an artefact correction algorithm when treating patients with hip prostheses

  13. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  14. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994-2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessors' offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
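    The displacement simulation described in the Methods is straightforward to reproduce. A minimal sketch (synthetic error distribution and a 1 km grid standing in for census tracts; none of these values come from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic empirical location errors (metres), standing in for the measured
# geocode-to-parcel distances; angles are taken as uniform, as observed.
observed_errors_m = rng.lognormal(mean=4.0, sigma=0.8, size=599)

n_sim = 5000
x0 = rng.uniform(0, 10_000, n_sim)        # simulated geocodes, 10 km square
y0 = rng.uniform(0, 10_000, n_sim)

# Displace each geocode by a random observed error at a random angle.
d = rng.choice(observed_errors_m, n_sim)
theta = rng.uniform(0, 2 * np.pi, n_sim)
x1, y1 = x0 + d * np.cos(theta), y0 + d * np.sin(theta)

# Stand-in for census tracts: 1 km grid cells; count cell mismatches.
cell = lambda x, y: (x // 1000).astype(int) * 100 + (y // 1000).astype(int)
moved = np.mean(cell(x0, y0) != cell(x1, y1))
print(f"geocodes displaced into a different cell: {100 * moved:.1f}%")
```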

  15. Cultural differences in categorical memory errors persist with age.

    Science.gov (United States)

    Gutchess, Angela; Boduroglu, Aysecan

    2018-01-02

    This cross-sectional experiment examined the influence of aging on cross-cultural differences in memory errors. Previous research revealed that Americans committed more categorical memory errors than Turks; we tested whether the cognitive constraints associated with aging impacted the pattern of memory errors across cultures. Furthermore, older adults are vulnerable to memory errors for semantically-related information, and we assessed whether this tendency occurs across cultures. Younger and older adults from the US and Turkey studied word pairs, with some pairs sharing a categorical relationship and some unrelated. Participants then completed a cued recall test, generating the word that was paired with the first. These responses were scored for correct responses or different types of errors, including categorical and semantic. The tendency for Americans to commit more categorical memory errors emerged for both younger and older adults. In addition, older adults across cultures committed more memory errors, and these were for semantically-related information (including both categorical and other types of semantic errors). Heightened vulnerability to memory errors with age extends across cultural groups, and Americans' proneness to commit categorical memory errors occurs across ages. The findings indicate some robustness in the ways that age and culture influence memory errors.

  16. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  18. Performance Errors in Weight Training and Their Correction.

    Science.gov (United States)

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  19. Quantifying the Error Associated with Alternative GIS-based Techniques to Measure Access to Health Care Services

    Directory of Open Access Journals (Sweden)

    Amy Mizen

    2015-11-01

    Full Text Available The aim of this study was to quantify the error associated with different accessibility methods commonly used by public health researchers. Network distances were calculated from each household to the nearest GP in our study area in the UK. Household-level network distances were assigned as the gold standard and compared with widely used alternative accessibility methods. Four spatial aggregation units, two centroid types and two distance calculation methods represent commonly used accessibility calculation methods. Spearman's rank coefficients were calculated to show the extent to which distance measurements were correlated with the gold standard. We assessed the proportion of households that were incorrectly assigned to a GP for each method. The distance method, level of spatial aggregation and centroid type were compared between urban and rural regions. Urban distances deviated less from the gold standard, with smaller errors, than those in rural regions. For urban regions, Euclidean distances are significantly related to network distances. Network distances assigned a larger proportion of households to the correct GP than Euclidean distances, for both urban and rural morphologies. Our results, stratified by urban and rural populations, explain why contradictory results have been reported in the literature. The results we present are intended to be used as an aide-memoire by public health researchers using geographically aggregated data in accessibility research.
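    The two headline comparisons, rank correlation against the gold standard and the rate of incorrect nearest-GP assignment, can be sketched as follows (all distances are synthetic stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Synthetic household distances to the nearest GP (metres): the gold-standard
# network distance and a Euclidean approximation (straight line <= network).
network = rng.gamma(shape=2.0, scale=800.0, size=1000)
euclid = network * rng.uniform(0.6, 1.0, size=1000)

rho, p = spearmanr(euclid, network)
print(f"Spearman rho = {rho:.3f} (p = {p:.1e})")

# Misassignment check: share of households whose nearest GP differs between
# methods (two hypothetical GP candidates per household).
d_net = rng.gamma(2.0, 800.0, size=(1000, 2))
d_euc = d_net * rng.uniform(0.6, 1.0, size=(1000, 2))
mismatch = np.mean(d_net.argmin(axis=1) != d_euc.argmin(axis=1))
print(f"households assigned to a different nearest GP: {100 * mismatch:.1f}%")
```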

  20. How Rational Are Inflation Expectations? A Vector Autoregression Decomposition of Inflation Forecasts and Their Errors

    National Research Council Canada - National Science Library

    Ladvogt, Timothy

    2002-01-01

    ... the persistence of forecast errors. A reduced form VAR is used to identify potential inefficiencies and then calculate the impulse response functions and variance decompositions of forecast errors to analyze how shocks to the other endogenous...

  1. On calculation of lattice parameters of refractory metal solid solutions

    International Nuclear Information System (INIS)

    Barsukov, A.D.; Zhuravleva, A.D.; Pedos, A.A.

    1995-01-01

    A technique for calculating the lattice periods of solid solutions is suggested. Experimental and calculated values of the lattice periods of several refractory-metal-based solid solutions (V-Cr, Nb-Zr, Mo-W and others) are presented. The calculation error was compared with the experimental one. 7 refs.; 2 tabs.
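    The abstract does not spell out the technique; the simplest common approximation for such estimates is Vegard's law, a linear interpolation between the pure-component lattice periods, shown here purely as an illustrative stand-in:

```python
# Vegard's-law stand-in (not necessarily the paper's technique): the lattice
# period of a binary solid solution as a linear interpolation between the
# pure-component periods, weighted by atomic fraction.
def vegard(a_A, a_B, x_B):
    """Lattice period of A(1-x)B(x), same units as a_A and a_B."""
    return (1.0 - x_B) * a_A + x_B * a_B

# Example: bcc Mo-W (room-temperature lattice periods in angstroms).
a_Mo, a_W = 3.147, 3.165
print(f"a(Mo + 20 at.% W) ~ {vegard(a_Mo, a_W, 0.20):.4f} angstrom")
```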

  2. Segmentation error and macular thickness measurements obtained with spectral-domain optical coherence tomography devices in neovascular age-related macular degeneration

    Directory of Open Access Journals (Sweden)

    Moosang Kim

    2013-01-01

    Full Text Available Purpose: To evaluate the frequency and severity of segmentation errors of two spectral-domain optical coherence tomography (SD-OCT) devices and the effect of these errors on central macular thickness (CMT) measurements. Materials and Methods: Twenty-seven eyes of 25 patients with neovascular age-related macular degeneration, examined using the Cirrus HD-OCT and Spectralis HRA + OCT, were retrospectively reviewed. Macular cube 512 × 128 and 5-line raster scans were performed with the Cirrus, and 512 × 25 volume scans with the Spectralis. The frequency and severity of segmentation errors were compared between scans. Results: The segmentation error frequency was 47.4% (baseline), 40.7% (1 month), 40.7% (2 months), and 48.1% (6 months) for the Cirrus, and 59.3%, 62.2%, 57.8%, and 63.7%, respectively, for the Spectralis, differing significantly between devices at all examinations (P < 0.05) except at baseline. The average error score was 1.21 ± 1.65 (baseline), 0.79 ± 1.18 (1 month), 0.74 ± 1.12 (2 months), and 0.96 ± 1.11 (6 months) for the Cirrus, and 1.73 ± 1.50, 1.54 ± 1.35, 1.38 ± 1.40, and 1.49 ± 1.30, respectively, for the Spectralis, differing significantly at 1 month and 2 months (P < 0.02). Automated and manual CMT measurements by the Spectralis were larger than those by the Cirrus. Conclusions: The Cirrus HD-OCT had a lower frequency and severity of segmentation errors than the Spectralis HRA + OCT. SD-OCT error should be considered when evaluating retinal thickness.

  3. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research regarding the incorporation of Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  4. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  5. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors Δω_N was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal Δω_N was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  6. Comparison of RESRAD with hand calculations

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1995-09-01

    This report is a continuation of an earlier comparison done with two other computer programs, GENII and PATHRAE. The dose calculations by the two programs were compared with each other and with hand calculations. These hand calculations have now been compared with RESRAD Version 5.41 to examine the use of standard models and parameters in this computer program. The hand calculations disclosed a significant computational error in RESRAD. The Pu-241 ingestion doses are five orders of magnitude too small. In addition, the external doses from some nuclides differ greatly from expected values. Both of these deficiencies have been corrected in later versions of RESRAD.

  7. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction: Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The aim of this study was to describe the frequency and characteristics of verbal communication errors, such as handover errors and errors during teamwork. Results: Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  8. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation, the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
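    In the linearized Gaussian setting, propagating the element covariance to a positional uncertainty ellipsoid is a single congruence transform. A minimal sketch (random stand-ins for the covariance and Jacobian; real values would come from the orbit fit and the ephemeris partials):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linearized Gaussian propagation: with element covariance C at epoch and
# Jacobian J = d(position)/d(elements), the positional covariance is
# C_pos = J C J^T. Both C and J below are random stand-ins; in practice
# they come from the orbit fit and the ephemeris partial derivatives.
n_elem = 6                                   # e.g. Keplerian elements
A = rng.standard_normal((n_elem, n_elem))
C = A @ A.T * 1e-10                          # symmetric positive semidefinite

J = rng.standard_normal((3, n_elem))
C_pos = J @ C @ J.T                          # 3x3 positional covariance

# Semi-axes of the 1-sigma uncertainty ellipsoid: sqrt of the eigenvalues.
print("1-sigma ellipsoid semi-axes:", np.sqrt(np.linalg.eigvalsh(C_pos)))
```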

  9. Step-by-step optimization and global chaos of nonlinear parameters in exact calculations of few-particle systems

    International Nuclear Information System (INIS)

    Frolov, A.M.

    1986-01-01

    Exact variational calculations are treated for few-particle systems in an exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of the ppμ, ddμ and ttμ homonuclear mesomolecules within an error of ±0.001 eV. The global chaos method turned out to be well applicable to the nuclear 3H and 3He systems.

  10. A Comparison of the American Society of Cataract and Refractive Surgery post-myopic LASIK/PRK Intraocular Lens (IOL calculator and the Ocular MD IOL calculator

    Directory of Open Access Journals (Sweden)

    Hsu M

    2011-09-01

    Full Text Available David L DeMill (1), Majid Moshirfar (1), Marcus C Neuffer (1), Maylon Hsu (1), Shameema Sikder (2); (1) John A Moran Eye Center, University of Utah, Salt Lake City, UT, USA; (2) Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, USA. Background: To compare the average values of the American Society of Cataract and Refractive Surgery (ASCRS) and Ocular MD intraocular lens (IOL) calculators to assess their accuracy in predicting IOL power in patients with prior laser-in-situ keratomileusis (LASIK) or photorefractive keratectomy. Methods: In this retrospective study, data from 21 eyes with previous LASIK or photorefractive keratectomy for myopia and subsequent cataract surgery were used in an IOL calculator comparison. The predicted IOL powers of the Ocular MD SRK/T, Ocular MD Haigis, and ASCRS averages were compared. The Ocular MD average (composed of an average of Ocular MD SRK/T and Ocular MD Haigis) and the all-calculator average (composed of an average of Ocular MD SRK/T, Ocular MD Haigis, and ASCRS) were also compared. Primary outcome measures were mean arithmetic and absolute IOL prediction error, variance in mean arithmetic IOL prediction error, and the percentage of eyes within ±0.50 and ±1.00 D. Results: The Ocular MD SRK/T and Ocular MD Haigis averages produced mean arithmetic IOL prediction errors of 0.57 and -0.61 diopters (D), respectively, which were significantly larger than the errors from the ASCRS, Ocular MD, and all-calculator averages (0.11, -0.02, and 0.02 D, respectively; all P < 0.05). There was no statistically significant difference between the methods in absolute IOL prediction error, variance, or the percentage of eyes with outcomes within ±0.50 and ±1.00 D. Conclusion: The ASCRS average was more accurate in predicting IOL power than the Ocular MD SRK/T and Ocular MD Haigis averages alone. Our methods using combinations of these averages, when compared with the individual averages, showed a trend of decreased mean arithmetic IOL

  11. Reducing Systematic Errors in Oxide Species with Density Functional Theory Calculations

    DEFF Research Database (Denmark)

    Christensen, Rune; Hummelshøj, Jens S.; Hansen, Heine Anton

    2015-01-01

    Density functional theory calculations can be used to gain valuable insight into the fundamental reaction processes in metal-oxygen systems, e.g., metal-oxygen batteries. Here, the ability of a range of different exchange-correlation functionals to reproduce experimental enthalpies of formation is examined.

  12. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Full Text Available Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations, and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies of children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that includes the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.

  13. Relative Pose Estimation and Accuracy Verification of Spherical Panoramic Image

    Directory of Open Access Journals (Sweden)

    XIE Donghai

    2017-11-01

    Full Text Available This paper improves the traditional 5-point relative pose estimation algorithm and proposes a relative pose estimation algorithm suitable for spherical panoramic images. The algorithm first computes the essential matrix, then decomposes the essential matrix to obtain the rotation matrix and the translation vector using SVD, and finally uses the reconstructed three-dimensional points to eliminate the erroneous solutions. The innovation of the algorithm lies in the derivation of the panorama epipolar formula and the use of the spherical distance from a point to the epipolar plane as the error term of the spherical panorama coplanarity function. The simulation experiment shows that when the random noise of the image feature points is within the range of one pixel, the error of the three Euler angles is about 0.1°, and the error between the relative translational displacement and the simulated value is about 1.5°. The results of the experiment using data obtained by the vehicle panorama camera and the POS show that the errors of the roll angle and pitch angle can be within 0.2°, the error of the heading angle within 0.4°, and the error between the relative translational displacement and the POS within 2°. The result of our relative pose estimation algorithm is used to generate the spherical panoramic epipolar images; we then extract the key points between the spherical panoramic images and calculate the errors in the column direction. The results show that the errors are less than 1 pixel.
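    The SVD decomposition step is standard and independent of the spherical model. A minimal sketch (the example essential-matrix values are arbitrary) that returns the four (R, t) candidates; the abstract's final step then keeps the candidate whose triangulated points lie in front of both panoramas:

```python
import numpy as np

def decompose_essential(E):
    """SVD decomposition of an essential matrix into the four (R, t) candidates."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U                       # enforce proper rotations (det = +1)
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                      # translation direction, up to sign/scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Example (arbitrary values). The abstract's final step keeps the candidate
# whose triangulated 3D points have positive depth along both bearing
# vectors -- the cheirality test adapted to spherical images.
E = np.array([[0.0, -0.1, 0.2], [0.1, 0.0, -0.9], [-0.3, 0.9, 0.0]])
for R, t in decompose_essential(E):
    print("R =", np.round(R, 2).tolist(), " t =", np.round(t, 2))
```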

  14. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because only cognitive errors were evaluated, rather than also including errors of interpretation. The most common missed finding was a small fracture, particularly of the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors.

  15. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of the author's results in the area of estimating the relations of equivalence, tolerance and preference within a finite set, based on multiple, stochastically independent pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained by solving discrete optimization problems. They allow the application of both types of comparisons, binary and multivalent (this applies to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems in forecasting, financial engineering and bio-cybernetics. (original abstract)

  16. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of the diagnostic processes and clinical workflows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables the calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
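    The quantitative side of FTA reduces to combining basic-event probabilities through AND/OR gates. A minimal sketch with hypothetical events and probabilities (not taken from the article), assuming independent basic events:

```python
# Minimal fault-tree sketch with hypothetical events and probabilities
# (not taken from the article), assuming independent basic events.
def gate_or(*p):
    """P(A or B or ...) = 1 - prod(1 - p_i) for independent events."""
    out = 1.0
    for q in p:
        out *= 1.0 - q
    return 1.0 - out

def gate_and(*p):
    """P(A and B and ...) = prod(p_i) for independent events."""
    out = 1.0
    for q in p:
        out *= q
    return out

p_incomplete_history = 0.05   # hypothetical basic-event probabilities
p_test_not_followed = 0.02
p_no_second_opinion = 0.60
p_excessive_workload = 0.30

# Top event: a cognitive failure occurs AND no safeguard catches it.
p_cognitive = gate_or(p_incomplete_history, p_test_not_followed)
p_top = gate_and(p_cognitive, gate_or(p_no_second_opinion, p_excessive_workload))
print(f"P(diagnostic error) ~ {p_top:.4f}")
```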

  17. Representing cognitive activities and errors in HRA trees

    International Nuclear Information System (INIS)

    Gertman, D.I.

    1992-01-01

    A graphic representation method is presented herein for adapting an existing technology, human reliability analysis (HRA) event trees, used to support event sequence logic structures and calculations, to include a representation of the underlying cognitive activity and corresponding errors associated with human performance. The analyst is presented with three potential means of representing human activity: the NUREG/CR-1278 HRA event tree approach; the skill-, rule- and knowledge-based paradigm; and the slips, lapses, and mistakes paradigm. These approaches are integrated to produce an enriched HRA event tree, the cognitive event tree system (COGENT), which, in turn, can be used to increase the analyst's understanding of the basic behavioral mechanisms underlying human error and the representation of that error in probabilistic risk assessment. Issues pertaining to the implementation of COGENT are also discussed.
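
    As a hedged illustration of the event-tree arithmetic that such representations build on (not COGENT itself), the sketch below evaluates a two-task HRA event tree: each task branches into success or failure, and a path probability is the product of its branch probabilities. The human error probabilities are assumed values, not NUREG/CR-1278 data.

        # Python sketch of HRA event-tree path arithmetic (assumed values).
        hep_a = 0.003  # assumed human error probability, task A (e.g., a slip)
        hep_b = 0.010  # assumed human error probability, task B (e.g., a mistake)

        # Enumerate the four paths through the tree; each path probability is
        # the product of its branch probabilities (branches assumed independent).
        paths = {
            ("A ok", "B ok"):     (1 - hep_a) * (1 - hep_b),
            ("A ok", "B fail"):   (1 - hep_a) * hep_b,
            ("A fail", "B ok"):   hep_a * (1 - hep_b),
            ("A fail", "B fail"): hep_a * hep_b,
        }

        # Overall failure: any path with at least one failed task.
        p_fail = sum(p for path, p in paths.items()
                     if any("fail" in step for step in path))
        print(p_fail)  # ~0.01297, i.e. 1 - (1 - hep_a) * (1 - hep_b)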

  18. Comparison of calculated integral values using measured and calculated neutron spectra for fusion neutronics analyses

    International Nuclear Information System (INIS)

    Sekimoto, H.

    1987-01-01

    The kerma heat production density, tritium production density, and dose in a lithium-fluoride pile with a deuterium-tritium neutron source were calculated with a data processing code, UFO, from the pulse height distribution of a miniature NE213 neutron spectrometer, and compared with the values calculated with a Monte Carlo code, MORSE-CV. The UFO and MORSE-CV values agreed within the statistical error (less than 6%) of the MORSE-CV calculations, except at the outermost point in the pile. The MORSE-CV values were slightly smaller than the UFO values in almost all cases, and this tendency increased with increasing distance from the neutron source.
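
    The comparison described amounts to checking whether the relative deviation between the two calculated values stays within the quoted Monte Carlo statistical error; a minimal sketch with invented numbers (the record itself reports only the less-than-6% bound):

        # Python sketch: does a Monte Carlo value agree with a reference
        # value within the quoted relative statistical error? Numbers invented.
        def agrees_within_error(value_ref, value_mc, rel_stat_error):
            rel_dev = abs(value_mc - value_ref) / value_ref
            return rel_dev <= rel_stat_error

        # e.g., a tritium production density: UFO-derived vs. MORSE-CV value
        print(agrees_within_error(1.00, 0.96, 0.06))  # True: 4% deviation < 6%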

  19. Comparison of ETF's performance related to the tracking error

    Directory of Open Access Journals (Sweden)

    Michaela Dorocáková

    2017-12-01

    With the development of financial markets, there has also been an immediate expansion of the fund industry, a representative form of collective investment. The purpose of index funds is to replicate the returns and risk of the underlying index to the largest possible extent, with tracking error being one of the most closely monitored performance indicators of these passively managed funds. The aim of this paper is to describe several perspectives concerning indexing, index funds and exchange-traded funds, to explain the issue of tracking error, and to examine and compare such funds provided by leading investment management companies with regard to the different methods used for its evaluation. Our research shows that the decisive factors for the occurrence of tracking deviation are fund size and the fund's stock consolidation. In addition, performance differences between an exchange-traded fund and its benchmark tend to show signs of seasonality, increasing in the last months of the year.
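
    The usual quantitative definition behind such comparisons is the standard deviation of the return differences between the fund and its benchmark, often annualized; a minimal sketch with made-up monthly returns follows (the paper's data and evaluation methods are not reproduced here):

        # Python sketch: ex-post tracking error as the annualized standard
        # deviation of (fund - benchmark) return differences. Returns invented.
        import math

        def tracking_error(fund, benchmark, periods_per_year=12):
            diffs = [f - b for f, b in zip(fund, benchmark)]
            mean = sum(diffs) / len(diffs)
            var = sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1)  # sample variance
            return math.sqrt(var) * math.sqrt(periods_per_year)

        etf_returns = [0.012, -0.004, 0.021, 0.008, -0.011, 0.015]
        index_returns = [0.011, -0.003, 0.020, 0.009, -0.012, 0.014]
        print(tracking_error(etf_returns, index_returns))  # ~0.0036, i.e. 0.36% p.a.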

  20. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners’ Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers over the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in the written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequently found than morphology errors, especially in terms of verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it will be advantageous for teachers to know what errors students frequently make in their learning, so that they can guide the students toward better English language learning achievement.   DOI: https://doi.org/10.24071/llt.2015.180205