WorldWideScience

Sample records for relative calculation error

  1. A Neural Circuit Mechanism for the Involvements of Dopamine in Effort-Related Choices: Decay of Learned Values, Secondary Effects of Depletion, and Calculation of Temporal Difference Error

    Science.gov (United States)

    2018-01-01

    Abstract Dopamine has been suggested to be crucially involved in effort-related choices. Key findings are that dopamine depletion (i) changed preference for a high-cost, large-reward option to a low-cost, small-reward option, (ii) but not when the large-reward option was also low-cost or the small-reward option gave no reward, (iii) while increasing the latency in all the cases but only transiently, and (iv) that antagonism of either dopamine D1 or D2 receptors also specifically impaired selection of the high-cost, large-reward option. The underlying neural circuit mechanisms remain unclear. Here we show that findings i–iii can be explained by the dopaminergic representation of temporal-difference reward-prediction error (TD-RPE), whose mechanisms have now become clarified, if (1) the synaptic strengths storing the values of actions mildly decay in time and (2) the obtained-reward-representing excitatory input to dopamine neurons increases after dopamine depletion. The former is potentially caused by background neural activity–induced weak synaptic plasticity, and the latter is assumed to occur through post-depletion increase of neural activity in the pedunculopontine nucleus, where neurons representing obtained reward exist and presumably send excitatory projections to dopamine neurons. We further show that finding iv, which is nontrivial given the suggested distinct functions of the D1 and D2 corticostriatal pathways, can also be explained if we additionally assume a proposed mechanism of TD-RPE calculation, in which the D1 and D2 pathways encode the values of actions with a temporal difference. These results suggest a possible circuit mechanism for the involvements of dopamine in effort-related choices and, simultaneously, provide implications for the mechanisms of TD-RPE calculation. PMID:29468191
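
    The decay-plus-TD-RPE account summarized in this abstract can be illustrated with a toy tabular model. This is a sketch only, not the authors' published circuit model; the function name and all parameter values (learning rate, discount factor, decay rate) are assumptions chosen for readability.

```python
# Toy TD(0) learner with mild decay of stored values toward zero.
# All parameter values are illustrative assumptions.

def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9, decay=0.01):
    """One temporal-difference update plus passive value decay."""
    delta = reward + gamma * V[s_next] - V[s]  # TD-RPE (the dopamine signal)
    V[s] += alpha * delta                      # learn from the error
    for k in V:                                # mild decay of learned values,
        V[k] *= 1.0 - decay                    # cf. the background-plasticity idea
    return delta
```

    In such a model, repeatedly choosing a high-cost option whose learned value decays between trials reproduces the gradual preference shift the abstract attributes to dopamine depletion.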

  2. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Topics covered: the concept of error and its classification into systematic and random errors; statistical fundamentals, probability theory, and population distributions (Bernoulli, Poisson, Gauss); the t distribution and the χ² test; and error propagation based on analysis of variance. Includes a bibliography and z, t, Poisson index, and χ² tables.
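
    The Poisson counting statistics and error propagation topics listed above can be illustrated with a short, standard textbook sketch (not taken from this record): for N recorded counts the standard deviation is √N, and the errors of a net (gross minus background) count rate add in quadrature.

```python
import math

def count_rate_error(gross_counts, bkg_counts, t_gross, t_bkg):
    """Net count rate and its 1-sigma Poisson error.

    Counting errors obey sigma(N) = sqrt(N); gross- and background-rate
    errors combine in quadrature (error propagation).
    """
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma
```

    For example, 10000 gross counts and 400 background counts, each over 100 s, give a net rate of 96 counts/s with a 1-sigma error of about 1.02 counts/s.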

  3. Error estimation for variational nodal calculations

    International Nuclear Information System (INIS)

    Zhang, H.; Lewis, E.E.

    1998-01-01

    Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations

  4. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    According to the methodological features of RIA, an internal quality control chart based on the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowance error in terms of the normal reference range. The method is simple to perform and its results are easy to read at a glance. Taking the determination of T₃ and T₄ as an example, the calculation of the allowance error, the drawing of the quality control chart, and the analysis of the results are introduced

  5. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian Roulette are incorporated. Equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for calculation of each history. 1 figure, 1 table
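
    The statistical error this abstract refers to shows the usual 1/√N behaviour of a Monte Carlo tally. A minimal sketch of estimating a mean and its relative standard error (the function name and example integrand are assumptions, not from the paper):

```python
import math
import random

def mc_mean_and_rel_error(samples):
    """Sample mean and relative standard error of the mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n) / mean

# crude analog estimate of the integral of exp(-x) over [0, 1]
random.seed(1)
scores = [math.exp(-random.random()) for _ in range(10000)]
estimate, rel_err = mc_mean_and_rel_error(scores)
```

    Quadrupling the number of histories halves the relative error, which is why variance reduction (rather than brute force) is usually the economical route.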

  6. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing (a prerequisite of error classification in our paradigm) leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the magnitude of x (the outermost region being |x| .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
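
    The caution above can be demonstrated directly. Python's math module is used here purely for illustration (the record describes a standalone library routine): for large x, erf(x) rounds to exactly 1.0 in double precision, so the subtraction loses all significance, while a direct erfc evaluation retains it.

```python
import math

x = 10.0
naive = 1.0 - math.erf(x)   # catastrophic cancellation: erf(10) rounds to 1.0
direct = math.erfc(x)       # direct evaluation keeps the tiny true value
```

    Here `naive` is exactly 0.0, whereas `direct` is a small positive number on the order of 1e-45, which is the behaviour the restriction warns about.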

  8. Relative Hazard Calculation Methodology

    International Nuclear Information System (INIS)

    DL Strenge; MK White; RD Stenner; WB Andrews

    1999-01-01

    The methodology presented in this document was developed to provide a means of calculating relative hazard (RH) ratios for use in developing useful graphic illustrations. The RH equation, as presented in this methodology, is primarily a collection of key factors relevant to understanding the hazards and risks associated with projected risk management activities. The RH equation has the potential for much broader application than generating risk profiles. For example, it can be used to compare one risk management activity with another, instead of just comparing it to a fixed baseline as was done for the risk profiles. If the appropriate source term data are available, it could be used in its non-ratio form to estimate absolute values of the associated hazards. These estimated values of hazard could then be examined to help understand which risk management activities are addressing the higher hazard conditions at a site. Graphics could be generated from these absolute hazard values to compare high-hazard conditions. If the RH equation is used in this manner, care must be taken to specifically define and qualify the estimated absolute hazard values (e.g., identify which factors were considered and which ones tended to drive the hazard estimation)

  9. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also affect the severity of a disturbance, e.g. if they disable safety-related equipment. In particular, common cause and other dependent failures of safety systems may contribute significantly to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports covering two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since these are generally the most serious. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. Maintenance-related single errors were treated more simply. The results were presented as distributions of errors over operating states, covering, inter alia, the operational state in which the errors were committed and detected, the operational and working conditions in which they were detected, and the component and error type involved. These results were presented separately for single and for dependent maintenance-related errors. As regards dependent errors, observations were also made

  10. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  12. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  13. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities in an organization. Selecting an appropriate forecasting method is also important, but the method's percentage error matters most if decision makers are to act on the forecasts with confidence. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method yielded a percentage error of 9.77%, and it was concluded that the least-squares method is suitable for time series and trend data.
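
    A minimal sketch of the two measures named in the title, assuming plain lists of actual and forecast values (the function names are hypothetical, not from the paper):

```python
def mad(actual, forecast):
    """Mean Absolute Deviation: average absolute forecast error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent of the actual values."""
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)
```

    MAD keeps the units of the data, while MAPE is scale-free, which is why a percentage figure such as the 9.77% above can be compared across different series.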

  14. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  15. Calculation of track and vertex errors for detector design studies

    International Nuclear Information System (INIS)

    Harr, R.

    1995-01-01

    The Kalman Filter technique has come into wide use for charged track reconstruction in high-energy physics experiments. It is also well suited for detector design studies, allowing for the efficient estimation of optimal track covariance matrices without the need of a hit level Monte Carlo simulation. Although much has been published about the Kalman filter equations, there is a lack of previous literature explaining how to implement the equations. In this paper, the operators necessary to implement the Kalman filter equations for two common detector configurations are worked out: a central detector in a uniform solenoidal magnetic field, and a fixed-target detector with no magnetic field in the region of the interactions. With the track covariance matrices in hand, vertex and invariant mass errors are readily calculable. These quantities are particularly interesting for evaluating experiments designed to study weakly decaying particles which give rise to displaced vertices. The optimal vertex errors are obtained via a constrained vertex fit. Solutions are presented to the constrained vertex problem with and without kinematic constraints. Invariant mass errors are obtained via propagation of errors; the use of vertex constrained track parameters is discussed. Many of the derivations are new or previously unpublished
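
    The propagation-of-errors step mentioned above for invariant mass errors follows the standard first-order rule var(f) = J C Jᵀ, where C is the track-parameter covariance matrix and J the Jacobian of the derived quantity. A minimal sketch under that assumption (the function name and example numbers are illustrative):

```python
def propagate_variance(J, C):
    """First-order error propagation: var(f) = J C J^T.

    J: list of partial derivatives df/dp_i at the fitted point.
    C: covariance matrix of the fitted parameters (list of lists).
    """
    n = len(J)
    return sum(J[i] * C[i][j] * J[j] for i in range(n) for j in range(n))
```

    With vertex-constrained track parameters, one simply feeds in the (smaller) constrained covariance matrix, which is how the constrained fit tightens the invariant mass error.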

  16. Calculation of magnetic error fields in hybrid insertion devices

    International Nuclear Information System (INIS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1989-08-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam, these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent magnet material. The procedure is based on a three-dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy axis misalignment is found for a 5 cm period undulator proposed for the ALS. 11 refs., 5 figs

  17. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas
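
    One of the standard nonanalog devices in this family of techniques, Russian roulette, can be sketched in a few lines. This is a generic textbook version, not the thesis's implementation; the threshold and survival probability are assumed values.

```python
import random

def russian_roulette(weight, threshold=0.1, survival_p=0.5, rng=random.random):
    """Kill low-weight histories without biasing the tally mean.

    Survivors have their weight scaled by 1/survival_p, so the expected
    post-roulette weight equals the incoming weight.
    """
    if weight >= threshold:
        return weight                   # heavy enough: leave untouched
    if rng() < survival_p:
        return weight / survival_p      # survives with boosted weight
    return 0.0                          # history terminated
```

    The trade-off the abstract describes is visible here: roulette saves the time spent following unimportant histories, at the cost of slightly increased variance per surviving history.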

  18. Repair for scattering expansion truncation errors in transport calculations

    International Nuclear Information System (INIS)

    Emmett, M.B.; Childs, R.L.; Rhoades, W.A.

    1980-01-01

    Legendre expansion of angular scattering distributions is usually limited to P₃ in practical transport calculations. This truncation often results in non-trivial errors, especially alternating negative and positive lateral scattering peaks. The effect is especially prominent in forward-peaked situations such as the within-group component of the Compton scattering of gammas. Increasing the expansion to P₇ often makes the peaks larger and narrower. Ward demonstrated an accurate repair, but his method requires special cross section sets and codes. The DOT IV code provides fully-compatible, but heuristic, repair of the erroneous scattering. An analytical Klein-Nishina estimator, newly available in the MORSE code, allows a test of this method. In the MORSE calculation, particle scattering histories are calculated in the usual way, with scoring by an estimator routine at each collision site. Results for both the conventional P₃ estimator and the analytical estimator were obtained. In the DOT calculation, the source moments are expanded into the directional representation at each iteration. Optionally, a sorting procedure removes all negatives, and removes enough small positive values to restore particle conservation. The effect of this is to replace the alternating positive and negative values with positive values of plausible magnitude. The accuracy of those values is examined herein

  19. Maths anxiety and medication dosage calculation errors: A scoping review.

    Science.gov (United States)

    Williams, Brett; Davis, Samantha

    2016-09-01

    A student's accuracy on drug calculation tests may be influenced by maths anxiety, which can impede one's ability to understand and complete mathematic problems. It is important for healthcare students to overcome this barrier when calculating drug dosages in order to avoid administering the incorrect dose to a patient when in the clinical setting. The aim of this study was to examine the effects of maths anxiety on healthcare students' ability to accurately calculate drug dosages by performing a scoping review of the existing literature. This review utilised a six-stage methodology using the following databases: CINAHL, Embase, Medline, Scopus, PsycINFO, Google Scholar, the Trip database (http://www.tripdatabase.com/) and the Grey Literature report (http://www.greylit.org/). After an initial title/abstract review of relevant papers, and then full text review of the remaining papers, six articles were selected for inclusion in this study. Of the six articles included, there were three experimental studies, two quantitative studies and one mixed method study. All studies addressed nursing students and the presence of maths anxiety. No relevant studies from other disciplines were identified in the existing literature. Three studies took place in the U.S.; the remainder in Canada, Australia and the United Kingdom. Upon analysis of these studies, four factors including maths anxiety were identified as having an influence on a student's drug dosage calculation abilities. Ultimately, the results from this review suggest more research is required in nursing and other relevant healthcare disciplines regarding the effects of maths anxiety on drug dosage calculations. This additional knowledge will be important to further inform development of strategies to decrease the potentially serious effects of errors in drug dosage calculation to patient safety. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2018-06-01

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) ²³Na signal measurement inaccuracies, particularly for small objects. Noise- and signal-decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different-sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that the VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. Correlation suggests large underestimation of ²³Na signal in small lesions. Acquisition-specific measurement error calculation aids ²³Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med 79:2968-2977, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
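
    The point-spread-function smearing described above can be illustrated with a 1-D toy model: convolving a small "lesion" with a normalized PSF conserves total signal but pushes part of it outside the drawn VOI, lowering the VOI mean. All sizes and the kernel here are assumptions for illustration, not the paper's acquisition parameters.

```python
# 1-D toy: a 3-voxel lesion blurred by an assumed 3-tap point-spread function.

def smear(signal, kernel):
    """Discrete same-size convolution with zero padding."""
    n, k = len(signal), len(kernel)
    half = k // 2
    out = [0.0] * n
    for i in range(n):
        for j in range(k):
            s = i + j - half
            if 0 <= s < n:
                out[i] += signal[s] * kernel[j]
    return out

signal = [0.0] * 15
for i in (6, 7, 8):          # the "lesion": true intensity 1.0
    signal[i] = 1.0
kernel = [0.25, 0.5, 0.25]   # assumed normalized PSF
blurred = smear(signal, kernel)
voi_mean = sum(blurred[6:9]) / 3   # VOI drawn on the true lesion extent
```

    Total signal is conserved, yet the VOI mean drops to about 0.83 of the true value; shrinking the object relative to the PSF makes the loss worse, mirroring the 40-60% underestimation reported for the smallest spheres.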

  1. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    Science.gov (United States)

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  2. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  3. A dose error evaluation study for 4D dose calculations

    Science.gov (United States)

    Milz, Stefan; Wilkens, Jan J.; Ullrich, Wolfgang

    2014-10-01

    Previous studies have shown that respiration-induced motion is not negligible for Stereotactic Body Radiation Therapy. The intrafractional breathing-induced motion influences the delivered dose distribution on the underlying patient geometry such as the lung or the abdomen. If a static geometry is used, the planning process for these indications does not represent the entire dynamic process. The quality of a full 4D dose calculation approach depends on the dose coordinate transformation process between deformable geometries. This article provides an evaluation study that introduces an advanced method to verify the quality of numerical dose transformations generated by four different algorithms. The transformation metric used is based on the deviation of the dose mass histogram (DMH) and the mean dose throughout dose transformation. The study compares the results of four algorithms. In general, two elementary approaches are used: dose mapping and energy transformation. Dose interpolation (DIM) and an advanced concept, the so-called divergent dose mapping model (dDMM), are used for dose mapping. These algorithms are compared to the basic energy transformation model (bETM) and the energy mass congruent mapping (EMCM). For evaluation, 900 small sample regions of interest (ROI) are generated inside an exemplary lung geometry (4DCT). A homogeneous fluence distribution is assumed for dose calculation inside the ROIs. The dose transformations are performed with the four different algorithms. The study investigates the DMH-metric and the mean dose metric for different scenarios (voxel sizes: 8 mm, 4 mm, 2 mm, 1 mm; 9 different breathing phases). dDMM achieves the best transformation accuracy in all measured test cases, with 3-5% lower errors than the other models. The results of dDMM are reasonable and most efficient in this study, although the model is simple and easy to implement. The EMCM model also achieved suitable results, but the approach requires a more complex

  4. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculation error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  5. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we proposed an equation that can predict the variance of nuclide number densities after burn-up calculations, and we verified this equation using a large number of Monte Carlo burn-up calculations in which only the initial random number seeds were changed. We also verified the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we clarified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone versus both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are low up to 60 GWd/t.
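
    A toy sketch, with purely illustrative numbers, of the propagation effect described above: the Monte Carlo reaction rate at every burn-up step carries statistical noise, and repeating the whole depletion with different initial random seeds reveals a spread in the final nuclide number density that is much larger than the per-step statistical error alone. This is not the authors' prediction equation, only a minimal illustration.

```python
import math
import random
import statistics

def burnup_replica(seed, n0=1.0, sigma_phi=0.05, dt=1.0, steps=20, rel_err=0.02):
    """One burn-up history for dN/dt = -sigma*phi*N, where the reaction
    rate carries Monte Carlo statistical noise (relative std rel_err) at
    every step, so the noise propagates into N step by step."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        noisy_rate = sigma_phi * (1.0 + rng.gauss(0.0, rel_err))
        n *= math.exp(-noisy_rate * dt)
    return n

# Estimate the propagated spread by rerunning with different seeds only.
finals = [burnup_replica(seed) for seed in range(500)]
propagated_rel_err = statistics.stdev(finals) / statistics.mean(finals)

# The per-step statistical error alone (rel_err * sigma*phi*dt) is smaller.
single_step_rel_err = 0.02 * 0.05 * 1.0
```

    After 20 steps the propagated relative error is roughly sqrt(20) times the per-step value, which is the kind of underestimation the abstract warns about when statistical error alone is evaluated.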

  6. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  7. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    International Nuclear Information System (INIS)

    Stapleton, S; Zavgorodni, S; Popescu, I A; Beckham, W A

    2005-01-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al., and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences, indicating a reduction of the volume of the high dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated the validity of implementing the fluence-convolution method in dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot.

  8. Implementation of random set-up errors in Monte Carlo calculated dynamic IMRT treatment plans

    Science.gov (United States)

    Stapleton, S.; Zavgorodni, S.; Popescu, I. A.; Beckham, W. A.

    2005-02-01

    The fluence-convolution method for incorporating random set-up errors (RSE) into the Monte Carlo treatment planning dose calculations was previously proposed by Beckham et al, and it was validated for open field radiotherapy treatments. This study confirms the applicability of the fluence-convolution method for dynamic intensity modulated radiotherapy (IMRT) dose calculations and evaluates the impact of set-up uncertainties on a clinical IMRT dose distribution. BEAMnrc and DOSXYZnrc codes were used for Monte Carlo calculations. A sliding window IMRT delivery was simulated using a dynamic multi-leaf collimator (DMLC) transport model developed by Keall et al. The dose distributions were benchmarked for dynamic IMRT fields using extended dose range (EDR) film, accumulating the dose from 16 subsequent fractions shifted randomly. Agreement of calculated and measured relative dose values was well within statistical uncertainty. A clinical seven field sliding window IMRT head and neck treatment was then simulated and the effects of random set-up errors (standard deviation of 2 mm) were evaluated. The dose-volume histograms calculated in the PTV with and without corrections for RSE showed only small differences indicating a reduction of the volume of high dose region due to set-up errors. As well, it showed that adequate coverage of the PTV was maintained when RSE was incorporated. Slice-by-slice comparison of the dose distributions revealed differences of up to 5.6%. The incorporation of set-up errors altered the position of the hot spot in the plan. This work demonstrated validity of implementation of the fluence-convolution method to dynamic IMRT Monte Carlo dose calculations. It also showed that accounting for the set-up errors could be essential for correct identification of the value and position of the hot spot.
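
    The fluence-convolution idea can be sketched in one dimension: blur the planned fluence profile with a Gaussian whose standard deviation equals the random set-up error (2 mm in the study). This is an illustrative stdlib sketch, not the BEAMnrc/DOSXYZnrc implementation; the grid and field sizes are assumptions.

```python
import math

def gaussian_kernel(sigma_mm, step_mm, half_width=4.0):
    """Discrete Gaussian kernel sampled on the fluence grid, normalized to 1."""
    n = int(half_width * sigma_mm / step_mm)
    k = [math.exp(-0.5 * ((i * step_mm) / sigma_mm) ** 2) for i in range(-n, n + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(profile, kernel):
    """Plain discrete convolution with edge clamping (same length as input)."""
    n, m = len(profile), len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - m, 0), n - 1)
            acc += w * profile[idx]
        out.append(acc)
    return out

# 1 mm grid: a 40 mm open "field" of unit fluence inside a 100 mm profile.
profile = [1.0 if 30 <= x < 70 else 0.0 for x in range(100)]
blurred = convolve(profile, gaussian_kernel(sigma_mm=2.0, step_mm=1.0))
```

    The field centre keeps its fluence while the edges are smeared over roughly two sigma on either side, consistent with PTV coverage being largely maintained while hot-spot positions can shift.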

  9. Dispersion relations in loop calculations

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1996-01-01

    These lecture notes give a pedagogical introduction to the use of dispersion relations in loop calculations. We first derive dispersion relations which allow us to recover the real part of a physical amplitude from the knowledge of its absorptive part along the branch cut. In perturbative calculations, the latter may be constructed by means of Cutkosky's rule, which is briefly discussed. For illustration, we apply this procedure at one loop to the photon vacuum-polarization function induced by leptons as well as to the γf anti-f vertex form factor generated by the exchange of a massive vector boson between the two fermion legs. We also show how the hadronic contribution to the photon vacuum polarization may be extracted from the total cross section of hadron production in e+e- annihilation measured as a function of energy. Finally, we outline the application of dispersive techniques at the two-loop level, considering as an example the bosonic decay width of a high-mass Higgs boson. (author)
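
    The recovery of a real part from an absorptive part can be checked numerically with a dispersion integral. The sketch below uses a toy absorptive part Im f(s') = s'^2 on a finite cut [a, b] (an assumption chosen so the principal-value integral has a closed form) and removes the singularity with the standard subtraction trick.

```python
import math

def re_from_im(im, s, a, b, n=20000):
    """Dispersion relation  Re f(s) = (1/pi) P∫_a^b Im f(s') / (s' - s) ds'
    evaluated with the subtraction trick that removes the principal-value
    singularity at s' = s."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        sp = a + (i + 0.5) * h                    # midpoint rule
        total += (im(sp) - im(s)) / (sp - s) * h  # regular after subtraction
    total += im(s) * math.log(abs((b - s) / (a - s)))
    return total / math.pi

a, b, s = 1.0, 4.0, 2.0
im = lambda sp: sp ** 2                           # toy absorptive part

numeric = re_from_im(im, s, a, b)
# Closed form: P∫ s'^2/(s'-s) ds' = (b^2-a^2)/2 + s(b-a) + s^2 ln|(b-s)/(a-s)|
analytic = ((b * b - a * a) / 2 + s * (b - a)
            + s * s * math.log(abs((b - s) / (a - s)))) / math.pi
```

    The subtracted integrand is smooth, so even a simple midpoint rule reproduces the closed-form answer.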

  10. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  11. Error rate of automated calculation for wound surface area using digital photography.

    Science.gov (United States)

    Yang, S; Park, J; Lee, H; Lee, J B; Lee, B U; Oh, B H

    2018-02-01

    Although measuring wound size using digital photography is a quick and simple method to evaluate a skin wound, its accuracy has not been fully validated. To investigate the error rate of our newly developed wound surface area calculation using digital photography: using a smartphone and a digital single lens reflex (DSLR) camera, four photographs of wounds of various sizes (diameter: 0.5-3.5 cm) were taken of a facial skin model together with color patches. The quantitative values of the wound areas were automatically calculated. The relative error (RE) of this method with regard to wound sizes and types of camera was analyzed. The RE of individual calculated areas ranged from 0.0329% (DSLR, diameter 1.0 cm) to 23.7166% (smartphone, diameter 2.0 cm). Despite correction for lens curvature, the smartphone had a significantly higher error rate than the DSLR camera (3.9431±2.9772 vs 8.1303±4.8236). However, for wounds smaller than 3 cm in diameter, the REs of the average values of four photographs were below 5%. In addition, there was no difference in the average value of wound area between the smartphone and the DSLR camera in those cases. For the follow-up of small skin defects (diameter <3 cm), our newly developed automated wound area calculation method can be applied to multiple photographs, and their average values provide a relatively useful index of wound healing with an acceptable error rate. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
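
    The relative error used in the study is straightforward to compute. The sketch below uses hypothetical area values (not the study's data) to show why averaging four photographs yields a much smaller RE than any individual shot.

```python
def relative_error_pct(calculated, true):
    """Relative error (%) of an automatically calculated wound area."""
    return abs(calculated - true) / true * 100.0

# Hypothetical example: four photographs of the same 3.14 cm^2 wound
# (2 cm diameter), each yielding a slightly different automated area.
true_area = 3.14
photos = [3.05, 3.22, 3.30, 2.98]      # cm^2, illustrative values only

per_photo_re = [relative_error_pct(a, true_area) for a in photos]
mean_area = sum(photos) / len(photos)
mean_re = relative_error_pct(mean_area, true_area)
```

    Individual photographs can exceed the 5% threshold while the RE of the averaged area stays far below it, because roughly symmetric over- and under-estimates cancel.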

  12. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  13. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel, and the remaining random errors were found to be adequately predicted by the proposed method.

  14. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  15. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
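
    A minimal sketch of the idea of deriving carriage motion error from bearing-block straightness errors. It assumes equal bearing stiffness, so plain averaging stands in for the transfer-function weighting described in the abstract; all numbers are hypothetical.

```python
def table_motion_error(front_err, rear_err, span):
    """Vertical motion error and pitch of a carriage supported by a front
    and a rear bearing block, from the straightness errors measured at each
    block (equal-stiffness assumption: plain averaging)."""
    linear = (front_err + rear_err) / 2.0   # vertical displacement
    pitch = (rear_err - front_err) / span   # small-angle approximation, rad
    return linear, pitch

# Hypothetical straightness-error samples (micrometers) at successive
# carriage positions for the front and rear bearing blocks; span 200 mm
# expressed in micrometers so pitch comes out in radians.
front = [0.0, 0.4, 0.9, 1.2, 0.8]
rear  = [0.1, 0.3, 0.7, 1.4, 1.0]

errors = [table_motion_error(f, r, span=200e3) for f, r in zip(front, rear)]
linear_errors = [e[0] for e in errors]
pitch_errors  = [e[1] for e in errors]
```

    The parallelism error between the two rails would enter as a systematic difference between the front and rear samples, which this formulation absorbs into the pitch term.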

  16. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation for the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of the elements in the Table.

  17. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation for the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of the elements in the Table.

  18. A Relative View on Tracking Error

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried); I. Pouchkarev (Igor)

    2005-01-01

    textabstractWhen delegating an investment decisions to a professional manager, investors often anchor their mandate to a specific benchmark. The manager’s exposure to risk is controlled by means of a tracking error volatility constraint. It depends on market conditions whether this constraint is

  19. The Suppression of Energy Discretization Errors in Multigroup Transport Calculations

    International Nuclear Information System (INIS)

    Larsen, Edward

    2013-01-01

    The objective of this project is to develop, implement, and test new deterministic methods to solve, as efficiently as possible, multigroup neutron transport problems having an extremely large number of groups. Our approach was to (i) use the standard CMFD method to 'coarsen' the space-angle grid, yielding a multigroup diffusion equation, and (ii) use a new multigrid-in-space-and-energy technique to efficiently solve the multigroup diffusion problem. The overall strategy of (i) how to coarsen the spatial and energy grids, and (ii) how to navigate through the various grids, has the goal of minimizing the overall computational effort. This approach yields not only the fine-grid solution, but also coarse-group flux-weighted cross sections that can be used for other related problems.
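
    The coarse-group flux-weighted cross sections mentioned at the end can be sketched directly: collapsing fine-group data preserves reaction rates when the coarse cross section is the flux-weighted average over the fine groups it contains. The group structure and numbers below are assumptions for illustration only.

```python
def collapse_xs(fine_xs, fine_flux, group_map):
    """Flux-weighted collapse of fine-group cross sections into coarse
    groups: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g), which
    preserves the reaction rate sum_g(sigma_g * phi_g) in each group."""
    coarse = {}
    for xs, phi, G in zip(fine_xs, fine_flux, group_map):
        num, den = coarse.get(G, (0.0, 0.0))
        coarse[G] = (num + xs * phi, den + phi)
    return [num / den for num, den in (coarse[G] for G in sorted(coarse))]

# Hypothetical 6-group data collapsed to 2 coarse groups (3 fine per coarse).
fine_xs   = [1.0, 1.2, 1.5, 4.0, 6.0, 10.0]   # barns, illustrative
fine_flux = [5.0, 3.0, 2.0, 1.0, 0.5, 0.25]   # arbitrary units
group_map = [0, 0, 0, 1, 1, 1]

coarse_xs = collapse_xs(fine_xs, fine_flux, group_map)
```

    By construction the coarse cross section times the coarse-group flux equals the summed fine-group reaction rate, which is why such collapsed constants can be reused in related problems.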

  20. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
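
    A hedged sketch of a flux-gate discharge calculation with the two error sources combined in quadrature (assuming they are independent). Gate geometry, velocities and the 20% area error are illustrative; the 20% figure echoes the worst case quoted above.

```python
import math

def gate_discharge(widths_m, thickness_m, speed_m_yr, rho=900.0):
    """Ice discharge through a flux gate, Q = sum(rho * v_i * H_i * w_i),
    returned in Gt/yr (rho is a typical glacier-ice density in kg/m^3)."""
    q_kg = sum(rho * v * h * w
               for v, h, w in zip(speed_m_yr, thickness_m, widths_m))
    return q_kg / 1e12                    # kg/yr -> Gt/yr

def discharge_error(q, rel_err_area, rel_err_speed):
    """Independent area and velocity errors combined in quadrature."""
    return q * math.sqrt(rel_err_area ** 2 + rel_err_speed ** 2)

# Hypothetical gate: three segments, each 500 m wide.
widths = [500.0, 500.0, 500.0]
thick  = [250.0, 400.0, 300.0]            # m, e.g. from GPR profiles
speed  = [80.0, 120.0, 90.0]              # m/yr, e.g. from SAR velocities

q = gate_discharge(widths, thick, speed)
sigma_q = discharge_error(q, rel_err_area=0.20, rel_err_speed=0.05)
```

    With a 20% area error the area term dominates the quadrature sum, which is why the cross-section approximation matters more than the velocity error for individual glaciers.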

  1. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, the noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between physical properties of the flow and numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
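
    A small sketch of the first link in the chain: central-difference vorticity computed from a synthetic solid-body-rotation velocity field with PIV-like noise added. Everything here (grid, rotation rate, noise level) is an assumption for illustration, not the study's reconstruction schemes.

```python
import random

def vorticity(u, v, dx):
    """Central-difference vorticity w = dv/dx - du/dy on a uniform grid
    (u[j][i], v[j][i] with i along x, j along y); interior points only."""
    ny, nx = len(u), len(u[0])
    return [[(v[j][i + 1] - v[j][i - 1]) / (2 * dx)
             - (u[j + 1][i] - u[j - 1][i]) / (2 * dx)
             for i in range(1, nx - 1)] for j in range(1, ny - 1)]

# Solid-body rotation u = -Omega*y, v = Omega*x has uniform vorticity 2*Omega.
omega_rot, dx, n = 0.5, 0.1, 21
rng = random.Random(0)
noise = 0.001                              # PIV-like velocity noise (assumed)

u = [[-omega_rot * (j * dx) + rng.gauss(0, noise) for i in range(n)]
     for j in range(n)]
v = [[omega_rot * (i * dx) + rng.gauss(0, noise) for i in range(n)]
     for j in range(n)]

w = vorticity(u, v, dx)
mean_w = sum(sum(row) for row in w) / ((n - 2) ** 2)
```

    Each central difference amplifies the velocity noise by a factor of sqrt(2)/(2*dx), illustrating how measurement uncertainty competes with the flow's physical signal in the reconstructed field.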

  2. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    Science.gov (United States)

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  3. Practical Calculation of Thermal Deformation and Manufacture Error in Surface Grinding

    Institute of Scientific and Technical Information of China (English)

    周里群; 李玉平

    2002-01-01

    The paper presents a method to calculate thermal deformation and manufacturing error in surface grinding. The authors established a simplified temperature field model and derived the thermal deformation of the ground workpiece. It is found that there exists not only an upwarp thermal deformation but also a parallel expansion thermal deformation. The upwarp thermal deformation causes a concave shape error on the profile of the workpiece, and the parallel expansion thermal deformation causes a dimension error in height. Calculated examples are given and compared with the presented experimental data.
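
    The parallel-expansion component reduces to the familiar linear-expansion formula dH = alpha * H * dT. A minimal sketch with hypothetical steel-workpiece numbers (the upwarp component would require the temperature-field model itself, which is not reproduced here):

```python
def height_dimension_error_um(alpha_per_K, height_mm, delta_t_K):
    """Dimension error in height from parallel thermal expansion:
    dH = alpha * H * dT, returned in micrometers."""
    return alpha_per_K * (height_mm * 1e3) * delta_t_K

# Hypothetical case: a 50 mm tall steel workpiece heated 8 K by grinding,
# with a typical steel expansion coefficient of 11.7e-6 per K.
err_um = height_dimension_error_um(alpha_per_K=11.7e-6,
                                   height_mm=50.0,
                                   delta_t_K=8.0)
```

    A few micrometers of height error from a modest temperature rise shows why the expansion term matters for precision grinding tolerances.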

  4. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA), we found greater activation of the insula during errors on a numerical task as compared to errors on a non-numerical task, only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  5. Training errors and running related injuries

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Østergaard; Buist, Ida; Sørensen, Henrik

    2012-01-01

    The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries.

  6. Neutron data error estimate of criticality calculations for lattice in shielding containers with metal fissionable materials

    International Nuclear Information System (INIS)

    Vasil'ev, A.P.; Krepkij, A.S.; Lukin, A.V.; Mikhal'kova, A.G.; Orlov, A.I.; Perezhogin, V.D.; Samojlova, L.Yu.; Sokolov, Yu.A.; Terekhin, V.A.; Chernukhin, Yu.I.

    1991-01-01

    Critical mass experiments were performed using assemblies which simulated one-dimensional lattice consisting of shielding containers with metal fissile materials. Calculations of the criticality of the above assemblies were carried out using the KLAN program with the BAS neutron constants. Errors in the calculations of the criticality for one-, two-, and three-dimensional lattices are estimated. 3 refs.; 1 tab

  7. On the calculation of errors and choice of the parameters of radioisotope following level meters

    International Nuclear Information System (INIS)

    Kalinin, O.V.; Matveev, V.S.; Khatskevich, M.V.

    1979-01-01

    A method for calculating the errors of radioisotope following level meters is considered, taking into account the nonlinearity of the system control units. The statistical method of analysis of linear control systems and the approximate method of statistical linearization of nonlinear systems are used in calculating the error of a following level meter. Calculating a nonlinear system by the method of statistical linearization involves approximating a nonlinear characteristic by a linearized dependence on the basis of a chosen criterion. Dispersion calculations of the output coordinate of a measuring converter are given for different cases of the system input signal. Dependences of the fluctuation error on system parameters have been plotted for level meters with proportional and relay control on the basis of the given methods. It is stated that the fluctuation error in both cases depends on the time constant of the count-rate meter. The minimal error of the level meter decreases with growing operating counting rate and with increasing width of the insensitivity zone. It is also noted that the parameters of the following level meter should be chosen according to the requirements for measuring error, device reliability and time of reading fixation.
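
    The dependence of the fluctuation error on the count-rate meter's time constant follows the standard textbook relation for an RC ratemeter, sigma/n = 1/sqrt(2*n*tau). A short illustrative calculation with assumed numbers:

```python
import math

def ratemeter_rel_error(count_rate_cps, tau_s):
    """Relative statistical fluctuation of an RC count-rate meter reading:
    sigma/n = 1 / sqrt(2 * n * tau), the standard result for Poisson
    counts smoothed by an RC integrator with time constant tau."""
    return 1.0 / math.sqrt(2.0 * count_rate_cps * tau_s)

# Fluctuation error falls with both count rate and time constant
# (illustrative values: 200 counts/s, tau of 1 s vs 10 s).
err_fast = ratemeter_rel_error(count_rate_cps=200.0, tau_s=1.0)
err_slow = ratemeter_rel_error(count_rate_cps=200.0, tau_s=10.0)
```

    A longer time constant trades response speed for a smaller fluctuation error, which is the design trade-off the abstract alludes to when choosing level-meter parameters.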

  8. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  9. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to the learning context. While the findings are discussed in relation to the previous literature, this paper concludes by proposing a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly ESL contexts in universities.

  10. Characterization of model errors in the calculation of tangent heights for atmospheric infrared limb measurements

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2014-12-01

We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive-index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent-height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent-height errors of up to ±200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.

  11. Relating physician's workload with errors during radiation therapy planning.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). Because the majority of the higher NASA-TLX scores and of the performance errors occurred among the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.

  12. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  13. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%); among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the antimicrobial most commonly involved (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patient and caused harm, and 11% reached the patient and required increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Efficacy of surface error corrections to density functional theory calculations of vacancy formation energy in transition metals.

    Science.gov (United States)

    Nandi, Prithwish Kumar; Valsakumar, M C; Chandra, Sharat; Sahu, H K; Sundar, C S

    2010-09-01

We calculate properties such as the equilibrium lattice parameter, bulk modulus and monovacancy formation energy for nickel (Ni), iron (Fe) and chromium (Cr) using Kohn-Sham density functional theory (DFT). We compare the relative performance of the local density approximation (LDA) and the generalized gradient approximation (GGA) for predicting these physical properties, and also compare two flavors of the GGA exchange-correlation functional, namely PW91 and PBE. These calculations show a discrepancy between DFT results and experimental data. In order to understand this discrepancy in the calculation of the vacancy formation energy, we introduce a correction for the surface intrinsic error of an exchange-correlation functional using the scheme implemented by Mattsson et al (2006 Phys. Rev. B 73 195123), and compare the effectiveness of the correction scheme for Al and the 3d transition metals.

  15. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required when classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p < 0.001), and errors increased with any decrease in reliability. The resultant errors need to be considered since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
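The core construction described above — multiplying a trial's ordinal outcome distribution by a rater-noise (confusion) distribution and summing the probability mass that lands in the wrong category — can be sketched in a few lines. This is a minimal illustration, not the authors' published program; the four-category outcome distribution and the confusion matrix below are hypothetical.

```python
import numpy as np

def misclassification_rate(p, C, cut=None):
    """Probability that a noisy rating disagrees with the true category.

    p   : true outcome distribution over ordinal categories (sums to 1)
    C   : confusion matrix, C[i, j] = P(observed j | true i)
    cut : if given, count only errors that cross the dichotomization
          threshold (true and observed land on different sides of `cut`);
          if None, count any categorical disagreement ("shift" analysis).
    """
    p, C = np.asarray(p, float), np.asarray(C, float)
    n = len(p)
    err = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            crosses = cut is None or ((i <= cut) != (j <= cut))
            if crosses:
                err += p[i] * C[i, j]
    return err

# Hypothetical illustration: 4-category scale, ~10% symmetric
# nearest-neighbour rater noise.
p = [0.25, 0.25, 0.25, 0.25]
C = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.05, 0.90, 0.05, 0.00],
              [0.00, 0.05, 0.90, 0.05],
              [0.00, 0.00, 0.10, 0.90]])
full_scale = misclassification_rate(p, C)        # any disagreement
dichot = misclassification_rate(p, C, cut=1)     # only errors crossing the cut
```

As in the abstract's finding, the dichotomized error rate is smaller than the full-scale ("shift") error rate, because only disagreements that cross the cut-point matter.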

  16. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between the complexity of concepts and their error rates. A measure of lateral complexity, defined as the number of exhibited role types, is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed, and a hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the error rates exhibited by more laterally complex concepts and by simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  18. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with our approach, which we call the relaxed filtering method and which takes the Hurst phenomenon into account, are larger than both the filtering-method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
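The classical rescaled-range estimator mentioned above (H_R) can be sketched as follows. This is a generic textbook implementation under simple assumptions (dyadic chunk sizes, ordinary least-squares fit in log-log space), not the authors' code; for white noise the estimate should sit near H = 0.5, while a persistent (trending) series pushes it toward 1.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Classical rescaled-range (R/S) estimate of the Hurst coefficient H.

    The series is split into non-overlapping chunks of several sizes; for
    each chunk, the range of the cumulative mean-adjusted sum is divided
    by the chunk's standard deviation, and H is estimated as the slope of
    log(R/S) against log(chunk size).
    """
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_chunk = []
        for start in range(0, n - size + 1, size):
            c = x[start:start + size]
            z = np.cumsum(c - c.mean())   # cumulative mean-adjusted sum
            r = z.max() - z.min()         # range of the adjusted sum
            s = c.std()
            if s > 0:
                rs_chunk.append(r / s)
        if rs_chunk:
            sizes.append(size)
            rs_vals.append(np.mean(rs_chunk))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

# White noise has no long-range persistence, so H should be near 0.5
# (the small-sample R/S estimate is known to be biased slightly high).
rng = np.random.default_rng(0)
h = hurst_rs(rng.standard_normal(4096))
```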

  19. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  20. Calculation of atomic integrals using commutation relations

    International Nuclear Information System (INIS)

    Zamastil, J.; Vinette, F.; Simanek, M.

    2007-01-01

    In this paper, a numerically stable method of calculating atomic integrals is suggested. The commutation relations among the components of the angular momentum and the Runge-Lenz vector are used to deduce recurrence relations for the Sturmian radial functions. The radial part of the one- and two-electron integrals is evaluated by means of these recurrence relations. The product of two radial functions is written as a linear combination of the radial functions. This enables us to write the integrals over four radial functions as a linear combination of the integrals over two radial functions. The recurrence relations for the functions are used to derive the recursion relations for the coefficients of the linear combination and for the integrals over two functions

  1. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, or particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] with a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random-variable spaces.

  2. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects.
The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  3. Prevalence of refractive errors in the Slovak population calculated using the Gullstrand schematic eye model.

    Science.gov (United States)

    Popov, I; Valašková, J; Štefaničková, J; Krásnik, V

    2017-01-01

A substantial part of the population suffers from some kind of refractive error. It is envisaged that the prevalence of refractive errors may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data was obtained using the optical biometer Lenstar LS900. Data which could not be obtained due to the limitations of the device was substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes were presented. The data was interpreted using descriptive statistics, Pearson correlation and t-test. The statistical tests were conducted at a level of significance of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% was myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2D. Our results could be used in future for comparing the prevalence of refractive errors using the same methods we used. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
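The abstract expresses refraction as the spectacle correction required at a 12 mm vertex distance. A minimal sketch of the standard vertex-distance conversion (a generic optics formula, not code from the study) is:

```python
def to_spectacle_plane(f_cornea, vertex_m=0.012):
    """Convert refractive power at the corneal plane (dioptres) to the
    equivalent spectacle correction at the given vertex distance (metres).

    Standard vertex formula: F_spec = F_cornea / (1 + d * F_cornea).
    """
    return f_cornea / (1 + vertex_m * f_cornea)

# A +3.00 D correction at the cornea needs slightly less plus power
# 12 mm in front of the eye; a -3.00 D correction needs slightly more
# minus power at the spectacle plane.
plus = to_spectacle_plane(3.0)
minus = to_spectacle_plane(-3.0)
```

The asymmetry (plus corrections shrink, minus corrections grow in magnitude when moved away from the eye) is why the vertex distance must be stated whenever refraction values from different planes are compared.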

  4. Calculation of the soft error rate of submicron CMOS logic circuits

    International Nuclear Information System (INIS)

    Juhnke, T.; Klar, H.

    1995-01-01

A method to calculate the soft error rate (SER) of CMOS logic circuits with dynamic pipeline registers is described. This method takes into account charge collection by drift and diffusion, and is verified by comparing calculated SERs to measurement results. Using this method, the SER of a highly pipelined multiplier is calculated as a function of supply voltage for a 0.6 µm, 0.3 µm, and 0.12 µm technology, respectively. It has been found that the SER of such highly pipelined submicron CMOS circuits may become too high, so that countermeasures have to be taken. Since the SER greatly increases with decreasing supply voltage, low-power/low-voltage circuits may show more than eight times the SER at half the normal supply voltage as compared to conventional designs

  5. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit-energy distribution, this approach gives accurate results with a low computational load compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, as well as chaos synchronization, is assumed. The bit error rate is derived in terms of the bit-energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
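A hedged illustration of the central idea — averaging a Gaussian tail (Q) function over the bit-energy distribution rather than evaluating it at a single fixed energy — follows. The Gaussian-jittered energy distribution is a hypothetical stand-in; the abstract's actual derivation also accounts for multipath, multiuser interference and the RAKE combiner.

```python
import math
import random

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_over_energy(eb_samples, n0):
    """Average BPSK-style bit error probability over a bit-energy
    distribution: BER = E[ Q(sqrt(2 * Eb / N0)) ].

    In chaos-based DS-CDMA the spreading sequence is non-periodic, so the
    bit energy varies from bit to bit and the BER is an expectation over
    the energy distribution rather than a single Q-function evaluation.
    """
    return sum(q_func(math.sqrt(2 * eb / n0)) for eb in eb_samples) / len(eb_samples)

# Hypothetical illustration: bit energies jittered around a mean of 1.
random.seed(1)
samples = [max(1e-9, random.gauss(1.0, 0.2)) for _ in range(10000)]
ber_varying = ber_over_energy(samples, n0=0.5)
ber_fixed = q_func(math.sqrt(2 * 1.0 / 0.5))  # constant-energy reference
```

Comparing `ber_varying` with `ber_fixed` shows how ignoring the energy spread misstates the error rate, which is the motivation for carrying the full bit-energy distribution through the derivation.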

  6. A lower bound on the relative error of mixed-state cloning and related operations

    International Nuclear Information System (INIS)

    Rastegin, A E

    2003-01-01

    We extend the concept of the relative error to mixed-state cloning and related physical operations, in which the ancilla contains some information a priori about the input state. The lower bound on the relative error is obtained. It is shown that this result provides further support for a stronger no-cloning theorem

  7. Agriculture-related radiation dose calculations

    International Nuclear Information System (INIS)

    Furr, J.M.; Mayberry, J.J.; Waite, D.A.

    1987-10-01

Estimates of radiation dose to the public must be made at each stage in the identification and qualification process leading to siting a high-level nuclear waste repository. Specifically considering the ingestion pathway, this paper examines questions of reliability and adequacy of dose calculations in relation to five stages of data availability (geologic province, region, area, location, and mass balance) and three methods of calculation (population, population/food production, and food production driven). Calculations were done using the model PABLM with data for the Permian and Palo Duro Basins and the Deaf Smith County area. The conclusions are that extra effort expended in gathering agricultural data at succeeding environmental characterization levels does not appear justified, since dose estimates do not differ greatly; that the effort would be better spent determining usage of the food types that contribute most to the total dose; and that consumption rate and the air dispersion factor are critical to assessment of radiation dose via the ingestion pathway. 17 refs., 9 figs., 32 tabs

  8. Errors in the calculation of sub-soil moisture probe by equivalent moisture content technique

    International Nuclear Information System (INIS)

    Lakshmipathy, A.V.; Gangadharan, P.

    1982-01-01

The size of the soil sample required to obtain the saturation response with a neutron moisture probe is quite large, and this poses practical problems of handling and mixing large amounts of sample for absolute laboratory calibration. In the equivalent moisture content technique, hydrogenous materials are used as a substitute for water when calibrating soil moisture probes. It is assumed that only the hydrogen in the bulk sample is responsible for the slowing down of fast neutrons, and the slow-neutron count rate is correlated to equivalent water content through the hydrogen density of the sample. It is observed, however, that the higher-atomic-number elements present in water-equivalent media also affect the response of the soil moisture probe. Hence calculations, as well as experiments, were undertaken to estimate the magnitude of the error introduced by this technique. The thermal and slow neutron flux distribution around the BF3 counter of a sub-soil moisture probe is calculated using three-group diffusion theory. The response of the probe corresponding to different equivalent moisture contents of hydrogenous media is calculated taking into consideration the effective length of the BF3 counter. Soil with hydrogenous media such as polyethylene, sugar and water is considered for the calculation, to verify the suitability of these materials as substitutes for water during calibration of soil moisture probes. Experiments were conducted to verify the theoretically calculated values. (author)

  9. Analysis of causes and effects errors in calculation of rolling slewing bearings capacity

    Directory of Open Access Journals (Sweden)

    Marek Krynke

    2016-09-01

In the paper the basic design features and essential assumptions of calculation models, as well as the factors influencing the quality and improvement of the process of calculating the capacity of rolling slewing bearings, are discussed. The aim of the research is the identification and elimination of sources of error in determining the characteristics of slewing bearing capacity. The results aim at determining the risk of making mistakes and at providing guidelines for designers of slewing bearings. It is shown that a numerical method must be applied and that the real working conditions of the bearing, above all deformations of the supporting structure, must be taken into account.

  10. Refractive error magnitude and variability: Relation to age.

    Science.gov (United States)

    Irving, Elizabeth L; Machan, Carolyn M; Lam, Sharon; Hrynchak, Patricia K; Lillakas, Linda

    2018-03-19

To investigate mean ocular refraction (MOR) and astigmatism over the human age range, and to compare the severity of refractive error to earlier studies of clinical populations with large age ranges. For this descriptive study, patient age, refractive error and history of surgery affecting refraction were abstracted from the Waterloo Eye Study database (WatES). Average MOR, standard deviation of MOR and astigmatism were assessed in relation to age. Refractive distributions for developmental age groups were determined. MOR standard deviation relative to average MOR was evaluated. Data from earlier clinically based studies with similar age ranges were compared to WatES. Right-eye refractive errors were available for 5933 patients with no history of surgery affecting refraction. Average MOR varied with age. Children <1 yr of age were the most hyperopic (+1.79D) and the highest magnitude of myopia was found at 27 yrs (-2.86D). MOR distributions were leptokurtic and negatively skewed. The mode varied with age group. MOR variability increased with increasing myopia. Average astigmatism increased gradually to age 60, after which it increased at a faster rate; by 85+ years it was 1.25D. The J0 power vector became increasingly negative with age. J45 power vector values remained close to zero, but their variability increased at approximately 70 years. In relation to comparable earlier studies, WatES data were the most myopic. Mean ocular refraction and refractive error distribution vary with age. The highest magnitude of myopia is found in young adults. Similar to prevalence, the severity of myopia also appears to have increased since 1931. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  11. Amplitude of Accommodation and its Relation to Refractive Errors

    Directory of Open Access Journals (Sweden)

    Abraham Lekha

    2005-01-01

Aims: To evaluate the relationship between amplitude of accommodation and refractive errors in the peri-presbyopic age group. Materials and Methods: Three hundred and sixteen right eyes of 316 consecutive patients in the age group 35-50 years who attended our outpatient clinic were studied. Emmetropes, hypermetropes and myopes with best-corrected visual acuity of 6/6 J1 in both eyes were included. The amplitude of accommodation (AA) was calculated by measuring the near point of accommodation (NPA). In patients with more than ±2 diopter sphere correction for distance, the NPA was also measured using appropriate soft contact lenses. Results: There was a statistically significant difference in AA between myopes and hypermetropes (P < 0.05); the remaining comparisons were not statistically significant (P > 0.5). Conclusion: Our study showed a higher amplitude of accommodation among myopes between 35 and 44 years compared to emmetropes and hypermetropes.

  12. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
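The linear mechanism by which data error enters the Poisson solution can be illustrated with a tiny 1-D analogue. This sketch is hypothetical and much simpler than the paper's setting (which treats the 2-D/3-D pressure Poisson equation with various boundary conditions): it perturbs the source term, standing in for noisy PIV-derived data, and observes the bounded, linear response of the solution.

```python
import numpy as np

def solve_poisson_1d(f, h):
    """Solve -u'' = f on a uniform interior grid with homogeneous
    Dirichlet boundary conditions, using the standard second-order
    finite-difference discretization."""
    n = len(f)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

# Compare the clean solution with one driven by a perturbed source term.
# Because the problem is linear, the solution error is exactly the
# solution of the Poisson problem with the noise alone as source, and is
# bounded by the norm of the discrete operator's inverse — the same
# structure the abstract's error-bound analysis exploits.
n, L = 99, 1.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
f = np.sin(np.pi * x)
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(n)
u_clean = solve_poisson_1d(f, h)
u_noisy = solve_poisson_1d(f + noise, h)
err = np.abs(u_noisy - u_clean).max()
```

For this fixed domain and boundary condition the response to the noise stays small and linear; changing the domain size or the boundary-condition type changes the inverse-operator norm and hence the amplification, which is the qualitative conclusion of the abstract.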

  13. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  14. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587
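
    The propagation mechanism described above can be sketched in one dimension: a Poisson solve maps noise in the PIV-derived source term to a bounded error in the recovered pressure. Everything below (domain, discretization, noise level, boundary conditions) is an illustrative choice, not taken from the paper.

```python
import numpy as np

def solve_poisson_1d(f, h):
    # Second-order finite differences for p'' = f on (0, 1),
    # homogeneous Dirichlet boundary conditions
    n = len(f)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

n = 199
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f_exact = -np.pi**2 * np.sin(np.pi * x)   # source whose exact solution is sin(pi x)
p_exact = np.sin(np.pi * x)

p_clean = solve_poisson_1d(f_exact, h)

rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(n)     # stand-in for PIV-induced error in the source
p_noisy = solve_poisson_1d(f_exact + noise, h)

err_discretization = np.abs(p_clean - p_exact).max()
err_propagated = np.abs(p_noisy - p_clean).max()
```

    The discrete maximum principle gives |error in p| at most max|noise|/8 on the unit interval, so the solve damps rather than amplifies the data error; on other domains and with other boundary conditions the constant changes, which is the paper's point.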

  15. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
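
    The non-centrality-parameter route to power can be sketched for a simple two-proportion comparison. The allele frequencies, sample sizes, misclassification fraction, and the one-degree-of-freedom normal approximation below are all illustrative assumptions, not the paper's actual derivation.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p1, p2, n1, n2, z_crit=1.959964):
    # Non-centrality parameter of the 1-df Pearson chi-square comparing
    # allele frequencies p1 (cases) and p2 (controls), then power via the
    # equivalent two-sided normal test
    pbar = (n1 * p1 + n2 * p2) / (n1 + n2)
    ncp = (p1 - p2) ** 2 / (pbar * (1.0 - pbar) * (1.0 / n1 + 1.0 / n2))
    lam = math.sqrt(ncp)
    return norm_cdf(lam - z_crit) + norm_cdf(-lam - z_crit)

p_case, p_ctrl, n = 0.30, 0.20, 500        # hypothetical frequencies and sample sizes
power_clean = power_two_proportions(p_case, p_ctrl, n, n)

# Phenotype error: a fraction gamma of nominal cases are actually unaffected,
# which pulls the observed case-group frequency toward the control frequency
gamma = 0.10
p_case_obs = (1.0 - gamma) * p_case + gamma * p_ctrl
power_misclassified = power_two_proportions(p_case_obs, p_ctrl, n, n)
```

    Even a 10% phenotype error visibly erodes power here, which is the qualitative effect the abstract quantifies exactly.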

  16. Calculation error of collective effective dose of external exposure during works at 'Shelter' object

    International Nuclear Information System (INIS)

    Batij, V.G.; Derengovskij, V.V.; Kochnev, N.A.; Sizov, A.A.

    2001-01-01

    Collective effective dose (CED) error assessment is the most important task for optimal planning of works under 'Shelter' object conditions. The main components of the CED error are as follows: the error in determining the transfer factor from exposure dose to equivalent dose; the error in determining working hours under 'Shelter' object conditions; the error in determining the dose rate at workplaces; and the additional CED error introduced by the shielding of workplaces

  17. Improvements in the error calculation of the action of a kicked beam

    CERN Document Server

    Sherman, Alexander Charles

    2013-01-01

    This report details a new calculation for the action performed in the optics measurement and correction software. The action of a kicked beam is used to calculate the dynamic aperture and detuning with amplitude. The current method of calculation has a large uncertainty due to the use of all BPMs (including those near interaction points and ones which are malfunctioning) and the model beta function. Instead, only good BPMs are kept and the measured beta function from phase is used, and significant decreases are seen in the relative uncertainty of the action.

  18. Statistical evaluation of design-error related nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1981-01-01

    In this paper, a general methodology for the statistical evaluation of design-error related accidents is proposed that can be applied to a variety of systems that evolve during the development of large-scale technologies. The evaluation aims at an estimate of the combined ''residual'' frequency of yet unknown types of accidents ''lurking'' in a certain technological system. A special categorization into incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of U.S. nuclear power reactor technology, considering serious accidents (category 2 events) that involved, in the accident progression, a particular design inadequacy. 9 refs

  19. SU-F-T-381: Fast Calculation of Three-Dimensional Dose Considering MLC Leaf Positional Errors for VMAT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Katsuta, Y [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan); Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Kadoya, N; Jingu, K [Tohoku University Graduate School of Medicine, Sendai, Miyagi (Japan)]; Shimizu, E; Majima, K [Takeda General Hospital, Aizuwakamatsu City, Fukushima (Japan)]

    2016-06-15

    Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose distribution that reflects the dosimetric error caused by leaf miscalibration for head and neck and prostate volumetric modulated arc therapy (VMAT), in real time and without additional treatment planning system (TPS) calculation. Methods: An original system based on Clarkson dose calculation was developed in MATLAB (MathWorks, Natick, MA) to calculate the dosimetric error caused by leaf miscalibration. Our program first calculates point doses at the isocenter, using Clarkson dose calculation, for the baseline VMAT plan and for a modified plan generated by inducing MLC errors that enlarge the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head and neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The percentage dose changes for the dose-volume histogram parameter of mean dose on the target volume were 0.1±0.5% and 0.4±0.3%, and for generalized equivalent uniform dose on the target volume were −0.2±0.5% and 0.2±0.3%. Conclusion: The erroneous 3D dose calculated by our method is useful for checking dosimetric error caused by leaf miscalibration before pre-treatment patient QA dosimetry checks.
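
    A minimal sketch of the dose-transformation step, under the assumption (ours, not necessarily the authors' exact method) that the error-induced 3D dose is the TPS baseline dose scaled by the ratio of the two isocenter point doses:

```python
import numpy as np

def error_induced_3d_dose(baseline_3d, point_dose_base, point_dose_err):
    # Simplified stand-in for the transform described in the abstract:
    # scale the baseline 3D dose by the ratio of the Clarkson-style point
    # doses for the error plan and the baseline plan (pure ratio model is
    # our assumption)
    return baseline_3d * (point_dose_err / point_dose_base)

rng = np.random.default_rng(1)
baseline = rng.uniform(0.5, 2.0, size=(8, 8, 8))   # synthetic dose grid (Gy)

d_base, d_err = 2.00, 2.06                         # hypothetical isocenter point doses (Gy)
dose_err = error_induced_3d_dose(baseline, d_base, d_err)

mean_dose_change_pct = 100.0 * (dose_err.mean() - baseline.mean()) / baseline.mean()
```

    With these numbers the whole distribution, and hence the mean target dose, shifts by 3%; a real implementation would then feed such a distribution into gamma and DVH analysis as described above.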

  20. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Science.gov (United States)

    Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo

    2018-03-01

    Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the observed orbit-related errors are further investigated.

  1. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Directory of Open Access Journals (Sweden)

    S. Esselborn

    2018-03-01

    Based on a set of test orbits calculated at GFZ, the sources of the observed orbit-related errors are further investigated. The main contributors on all timescales are uncertainties in Earth's time-variable gravity field models and, on annual to interannual timescales, discrepancies between the tracking station subnetworks, i.e. satellite laser ranging (SLR) and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS).

  2. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  3. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  4. Errors in the calculation of new salary positions and performance premiums – 2017 MERIT exercise

    CERN Multimedia

    Staff Association

    2017-01-01

    Following the receipt of the letters dated May 12th announcing the qualification of their performance (MERIT 2017), and the notification of their salary slips for the month of May, several colleagues have come to us to enquire about the calculation of salary increases and performance premiums. After verification, the Staff Association informed the Management, in a meeting of the Standing Concertation Committee on June 1st, about errors owing to rounding in the applied formulas. James Purvis, Head of the HR department, published an article in the CERN Bulletin dated July 18th, under the heading “Better precision (rounding)”, giving a short explanation of these rounding effects. Here we want to provide more precise explanations. Advancement On the salary slips for the month of May, the calculations of the advancement and new salary positions were done, by the services of administrative computing in the FAP department, on the basis of the salary, rounded to the nearest franc...
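
    The effect at issue can be reproduced with a toy calculation: rounding the salary to the nearest franc before applying an advancement percentage gives a slightly different result than applying the percentage to the exact salary. The figures below are invented for illustration and are not CERN's actual values or formulas.

```python
# Hypothetical monthly salary (CHF) and advancement percentage
monthly_salary = 9876.54
advancement_pct = 1.5

# Advancement computed on the exact salary
exact = monthly_salary * advancement_pct / 100.0

# Advancement computed after rounding the salary to the nearest franc first
rounded_first = round(monthly_salary) * advancement_pct / 100.0

difference = abs(exact - rounded_first)
```

    The two results differ by a fraction of a franc; over many staff members and repeated exercises, the order in which rounding is applied therefore matters.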

  5. Error-related negativities during spelling judgments expose orthographic knowledge.

    Science.gov (United States)

    Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin

    2014-02-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of the nuclear power reactor technology, considering serious accidents that involve, in the accident progression, a particular design inadequacy

  7. Error Analysis of Relative Calibration for RCS Measurement on Ground Plane Range

    Directory of Open Access Journals (Sweden)

    Wu Peng-fei

    2012-03-01

    Full Text Available Ground plane range is a kind of outdoor Radar Cross Section (RCS) test range used for static measurement of full-size or scaled targets. Starting from the characteristics of the ground plane range, the impact of the environment on targets and calibrators during calibration in RCS measurements is analyzed. The error of relative calibration produced by the different illumination of the target and the calibrator is studied. The relative calibration technique used on a ground plane range is to place the calibrator on a fixed, auxiliary pylon somewhere between the radar and the target under test. By considering the effect of ground reflection and the antenna pattern, the relationship between the magnitude of the echoes and the position of the calibrator is discussed. According to the different distances between the calibrator and the target, the difference between free space and the ground plane range is studied and the error of relative calibration is calculated. Numerical simulation results are presented with useful conclusions. The relative calibration error varies with the position of the calibrator, the frequency, and the antenna beam width. In most cases, setting the calibrator close to the target keeps the error under control.
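
    The illumination difference can be sketched with a classic two-ray ground-reflection model (flat, perfectly reflecting ground and equal antenna gains, both idealizations; the geometry and wavelength below are hypothetical). Since received RCS power scales as the fourth power of the propagation factor F, a calibrator and target with different F values produce a relative calibration error in dB.

```python
import cmath
import math

def propagation_factor(h_ant, h_obj, dist, wavelength, refl=-1.0):
    # Two-ray model: direct ray plus ground-bounce ray whose extra path
    # length produces a phase difference of 2*pi*(2*h_ant*h_obj)/(lambda*R)
    dphi = 2.0 * math.pi * (2.0 * h_ant * h_obj) / (wavelength * dist)
    return abs(1.0 + refl * cmath.exp(1j * dphi))

lam, h_ant, h_obj = 0.03, 5.0, 5.0        # X-band wavelength, hypothetical heights (m)
r_target = 1000.0                          # target range (m)

f_target = propagation_factor(h_ant, h_obj, r_target, lam)
f_cal_at_target = propagation_factor(h_ant, h_obj, r_target, lam)
f_cal_offset = propagation_factor(h_ant, h_obj, 600.0, lam)

# RCS power goes as F^4, hence 40*log10 of the F ratio in dB
err_at_target = abs(40.0 * math.log10(f_target / f_cal_at_target))
err_offset = abs(40.0 * math.log10(f_target / f_cal_offset))
```

    When the calibrator sits where the target sits, the illumination cancels and the relative calibration error vanishes; moving it along the range changes the ground-bounce phase and can introduce several dB of error, which is why the calibrator position matters.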

  8. Treatment Planning System Calculation Errors Are Present in Most Imaging and Radiation Oncology Core-Houston Phantom Failures.

    Science.gov (United States)

    Kerns, James R; Stingo, Francesco; Followill, David S; Howell, Rebecca M; Melancon, Adam; Kry, Stephen F

    2017-08-01

    The anthropomorphic phantom program at the Houston branch of the Imaging and Radiation Oncology Core (IROC-Houston) is an end-to-end test that can be used to determine whether an institution can accurately model, calculate, and deliver an intensity modulated radiation therapy dose distribution. Currently, institutions that do not meet IROC-Houston's criteria have no specific information with which to identify and correct problems. In the present study, an independent recalculation system was developed to identify treatment planning system (TPS) calculation errors. A recalculation system was commissioned and customized using IROC-Houston measurement reference dosimetry data for common linear accelerator classes. Using this system, 259 head and neck phantom irradiations were recalculated. Both the recalculation and the institution's TPS calculation were compared with the delivered dose that was measured. In cases in which the recalculation was statistically more accurate by 2% on average or 3% at a single measurement location than was the institution's TPS, the irradiation was flagged as having a "considerable" institutional calculation error. The error rates were also examined according to the linear accelerator vendor and delivery technique. Surprisingly, on average, the reference recalculation system had better accuracy than the institution's TPS. Considerable TPS errors were found in 17% (n=45) of the head and neck irradiations. Also, 68% (n=13) of the irradiations that failed to meet the IROC-Houston criteria were found to have calculation errors. Nearly 1 in 5 institutions were found to have TPS errors in their intensity modulated radiation therapy calculations, highlighting the need for careful beam modeling and calculation in the TPS. An independent recalculation system can help identify the presence of TPS errors and pass on the knowledge to the institution. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A method for local transport analysis in tokamaks with error calculation

    International Nuclear Information System (INIS)

    Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.

    1989-01-01

    Global transport studies have revealed that heat transport in a tokamak is anomalous, but cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking, one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity, heat diffusion only is considered: ρ = -∇T/q (1), where ρ = κ^(-1) = (nχ)^(-1) is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles. The latter cannot be measured directly and is partly determined on the basis of models. This complication will not be considered here. Since the gradient of T appears in eq. (1), noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied. A criterion is needed to select the optimal smoothing. Too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows 1) the stable and accurate calculation of the ρ-profile, and 2) the analytical calculation of the error in this profile. (author) 5 refs., 3 figs
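
    The noise-amplification problem in eq. (1), and the benefit of smoothing before differentiating, can be sketched on synthetic profiles. A low-order polynomial fit stands in here for the paper's cosine-series formulation; the profiles and noise level are invented.

```python
import numpy as np

r = np.linspace(0.1, 1.0, 200)
T_true = 1.0 - r**2                 # synthetic temperature profile
q = r                               # synthetic heat flux per unit area
# true resistivity: rho = -dT/dr / q = 2r / r = 2 everywhere

rng = np.random.default_rng(2)
T_meas = T_true + 0.002 * rng.standard_normal(r.size)   # "measured" profile

# No smoothing: differentiate the noisy profile directly
rho_raw = -np.gradient(T_meas, r) / q

# With smoothing: fit a low-order polynomial first, then differentiate
# the fit analytically (a simple stand-in for a truncated series)
coeffs = np.polyfit(r, T_meas, 3)
rho_smooth = -np.polyval(np.polyder(coeffs), r) / q

err_raw = np.abs(rho_raw - 2.0).mean()
err_smooth = np.abs(rho_smooth - 2.0).mean()
```

    Even sub-percent noise on T wrecks the directly differentiated ρ-profile, while the smoothed reconstruction stays close to the true value; the paper's contribution is choosing that smoothing in a principled way with an analytical error estimate.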

  10. Calculation and simulation on mid-spatial frequency error in continuous polishing

    International Nuclear Information System (INIS)

    Xie Lei; Zhang Yunfan; You Yunfeng; Ma Ping; Liu Yibin; Yan Dingyao

    2013-01-01

    Based on the theoretical model of continuous polishing, the influence of processing parameters on the polishing result was discussed. Possible causes of mid-spatial frequency error in the process were analyzed. The simulation results demonstrated that the low spatial frequency error was mainly caused by a large rotating ratio. The mid-spatial frequency error would decrease as the low spatial frequency error became lower. The regular groove shape was the primary cause of the mid-spatial frequency error. When irregular and fitful grooves were adopted, the mid-spatial frequency error could be lessened. Moreover, workpiece swing could make the polishing process more uniform and reduce the mid-spatial frequency error caused by fixed-eccentric plane polishing. (authors)

  11. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    Science.gov (United States)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting both through constant and energy-dependent cross-sections, although in the latter case this is only true for sufficiently dilute systems. We find that the optimal fitting technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
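
    The fitting strategy described above (intercept fixed at C(0), fitting window cut off before the relative error of the correlator grows) can be sketched on a synthetic autocorrelation function. The decay constant, noise level, and cutoff criterion below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1
t = np.arange(400) * dt
c0, tau_true = 2.0, 1.5
# Synthetic shear-stress autocorrelation: exponential decay plus noise
corr = c0 * np.exp(-t / tau_true) + 0.01 * rng.standard_normal(t.size)

# Restrict the fit window to where the correlator is still well resolved
cut = int(np.argmax(corr < 0.2 * c0))

# Fix the intercept at C(0): fit log(C(t)/C(0)) by a line through the origin
y = np.log(corr[:cut] / corr[0])
slope = (t[:cut] @ y) / (t[:cut] @ t[:cut])
tau_fit = -1.0 / slope

# With an exponential ansatz the Green-Kubo time integral is C(0)*tau;
# eta = (V / k_B T) * C(0) * tau, so we report the integral only
gk_integral = corr[0] * tau_fit
```

    Fitting through a fixed intercept avoids trading intercept against slope in the noisy tail, which is essentially the stability argument made in the abstract.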

  12. Base data for looking-up tables of calculation errors in JACS code system

    International Nuclear Information System (INIS)

    Murazaki, Minoru; Okuno, Hiroshi

    1999-03-01

    The report intends to clarify the base data for the looking-up tables of calculation errors cited in 'Nuclear Criticality Safety Handbook'. The tables were obtained by classifying the benchmarks made by the JACS code system, and there are two kinds: one for fuel systems in general geometry with a reflector and another for fuel systems specific to simple geometry with a reflector. Benchmark systems were further categorized into eight groups according to the fuel configuration: homogeneous or heterogeneous; and fuel kind: uranium, plutonium and their mixtures, etc. The base data for fuel systems in general geometry with a reflector are summarized in this report for the first time. The base data for fuel systems in simple geometry with a reflector were summarized in a technical report published in 1987. However, the data in a group named homogeneous low-enriched uranium were further selected out later by the working group for making the Nuclear Criticality Safety Handbook. This report includes that selection. As a project has been organized by OECD/NEA for the evaluation of criticality safety benchmark experiments, the results are also described. (author)

  13. Calculation errors of Set-up in patients with tumor location of prostate. Exploratory study; Calculo de errores de Set-up en pacientes con localizacion tumoral de prostata. Estudio exploratorio

    Energy Technology Data Exchange (ETDEWEB)

    Donis Gil, S.; Robayna Duque, B. E.; Jimenez Sosa, A.; Hernandez Armas, O.; Gonzalez Martin, A. E.; Hernandez Armas, J.

    2013-07-01

    The calculation of set-up margins (SM) is done from errors in positioning (set-up). These errors are calculated from the patient's movements in 3D. This paper is an exploratory study of 20 patients with a prostate tumor location, in which set-up errors are evaluated for two work protocols. (Author)
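
    As context for how set-up errors feed into margins, a common population analysis separates the systematic component Σ (spread of per-patient mean displacements) from the random component σ (RMS of per-patient spreads). The 2.5Σ + 0.7σ formula below is the widely used van Herk-type recipe, shown for illustration only; it is not necessarily the formula used in this study, and the displacement data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pat, n_frac = 20, 30
# Synthetic per-fraction set-up displacements along one axis (mm):
# each patient has a systematic offset plus fraction-to-fraction scatter
mu = rng.normal(0.0, 2.0, n_pat)
shifts = mu[:, None] + rng.normal(0.0, 1.5, (n_pat, n_frac))

Sigma = shifts.mean(axis=1).std(ddof=1)                     # systematic error
sigma = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())   # random error (RMS)

# Population margin recipe (van Herk-type), for illustration
margin_mm = 2.5 * Sigma + 0.7 * sigma
```

    The recipe weights the systematic component far more heavily than the random one, which is why accurate estimation of per-patient set-up errors, the subject of this study, matters for margin design.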

  14. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (the total field anomaly, i.e. the measurement given by total field magnetometers after removal of the main geomagnetic field, T0) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector onto the direction of the earth's normal field). In order to meet the demand for high-precision processing of magnetic prospecting data, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small when anomalies are not greater than about 0.2T0. However, the error E may be large in highly magnetic environments. This has significant effects on subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is significant and should not be ignored. It is also shown that the curves of ΔTpro and the error E have a certain symmetry when the directions of magnetization and the geomagnetic field change. To be more specific, Emax (the maximum of the error E) appears above the center of the magnetic body when the magnetic parameters are fixed. Some other characteristics of the error E are discovered. For instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, and the extremum of Emax can always be found in the mid-latitudes, and so on. It is also demonstrated that the error E has a great influence on magnetic processing transformations and inversion results. It is concluded that when the bodies have high magnetic susceptibilities, the error E can
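
    The difference between ΔTact and ΔTpro is easy to reproduce with vectors: E is tiny when |Ta| ≪ T0 and large for a strongly magnetic source. The field and anomaly vectors below are hypothetical.

```python
import numpy as np

def anomalies(t0_vec, ta_vec):
    # Delta T_act: change in total-field magnitude (what the magnetometer sees)
    # Delta T_pro: projection of the anomalous vector onto the main-field direction
    t0 = np.linalg.norm(t0_vec)
    dt_act = np.linalg.norm(t0_vec + ta_vec) - t0
    dt_pro = float(ta_vec @ t0_vec) / t0
    return dt_act, dt_pro

t0_vec = np.array([0.0, 27000.0, 43000.0])      # nT, hypothetical mid-latitude field

weak = np.array([100.0, -150.0, 80.0])          # |Ta| << T0
strong = np.array([12000.0, -18000.0, 9000.0])  # highly magnetic source

e_weak = abs(anomalies(t0_vec, weak)[0] - anomalies(t0_vec, weak)[1])
e_strong = abs(anomalies(t0_vec, strong)[0] - anomalies(t0_vec, strong)[1])
```

    For the weak anomaly the projection approximation is good to a fraction of a nanotesla, while for the strong one the two definitions disagree by thousands of nanotesla, which is the regime the paper analyzes.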

  15. Calculating method on human error probabilities considering influence of management and organization

    International Nuclear Information System (INIS)

    Gao Jia; Huang Xiangrui; Shen Zupei

    1996-01-01

    This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), which was originally only a tool for the construction and representation of models of decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for quantification assessments of the ID, which can be applied to quantifying probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach

  16. The calculation of average error probability in a digital fibre optical communication system

    Science.gov (United States)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity
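
    The ranking reported in the abstract, with the Chernoff bound more pessimistic than an exact evaluation, can be illustrated on a deliberately simplified receiver model: an on-off signal with additive Gaussian noise and a mid-level threshold. This ignores the shot noise and intersymbol interference treated in the paper, so it is only a caricature of that system.

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_exact(amplitude, sigma):
    # Exact error probability for equiprobable on/off levels with a
    # mid-level decision threshold and Gaussian noise
    return q_func(amplitude / (2.0 * sigma))

def ber_chernoff(amplitude, sigma):
    # Chernoff bound for the same decision problem
    return 0.5 * math.exp(-amplitude**2 / (8.0 * sigma**2))

a, s = 1.0, 0.1
exact = ber_exact(a, s)
bound = ber_chernoff(a, s)
```

    The Chernoff bound is valid but loose, so designing to it underestimates receiver sensitivity, consistent with the abstract's observation that the characteristic function technique predicts a higher sensitivity.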

  17. Accuracy requirements for the calculation of gravitational waveforms from coalescing compact binaries in numerical relativity

    International Nuclear Information System (INIS)

    Miller, Mark

    2005-01-01

    I discuss the accuracy requirements on numerical relativity calculations of inspiraling compact object binaries whose extracted gravitational waveforms are to be used as templates for matched filtering signal extraction and physical parameter estimation in modern interferometric gravitational wave detectors. Using a post-Newtonian point particle model for the premerger phase of the binary inspiral, I calculate the maximum allowable errors for the mass and relative velocity and positions of the binary during numerical simulations of the binary inspiral. These maximum allowable errors are compared to the errors of state-of-the-art numerical simulations of multiple-orbit binary neutron star calculations in full general relativity, and are found to be smaller by several orders of magnitude. A post-Newtonian model for the error of these numerical simulations suggests that adaptive mesh refinement coupled with second-order accurate finite difference codes will not be able to robustly obtain the accuracy required for reliable gravitational wave extraction on Terabyte-scale computers. I conclude that higher-order methods (higher-order finite difference methods and/or spectral methods) combined with adaptive mesh refinement and/or multipatch technology will be needed for robustly accurate gravitational wave extraction from numerical relativity calculations of binary coalescence scenarios

  18. (AJST) RELATIVE EFFICIENCY OF NON-PARAMETRIC ERROR ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    on 100 bootstrap samples, a sample of size n being taken with replacement in each initial sample of size n. .... the overlap (or optimal error rate) of the populations. However, the expression (2.3) for the computation of ..... Analysis and Machine Intelligence, 9, 628-633. Lachenbruch P. A. (1967). An almost unbiased method ...

  19. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    Science.gov (United States)

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.

  20. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
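The "traditional margin recipe" such studies compare against is commonly the van Herk formula; a minimal sketch under that assumption, with illustrative displacement standard deviations rather than the cohorts' data:

```python
import math

def van_herk_margin(sigma_sys, sigma_rand, sigma_penumbra=3.2):
    # CTV-to-PTV margin (mm) per the van Herk recipe:
    #   M = 2.5*Sigma + 1.64*(sqrt(sigma^2 + sigma_p^2) - sigma_p)
    # intended to give 90% of patients >= 95% of prescribed dose to the CTV,
    # assuming the random-error SD sigma is constant across patients.
    sigma_total = math.sqrt(sigma_rand ** 2 + sigma_penumbra ** 2)
    return 2.5 * sigma_sys + 1.64 * (sigma_total - sigma_penumbra)

# e.g. 2 mm systematic SD, 3 mm random SD, and the 4 mm penumbral width
# considered typical in the abstract
print(round(van_herk_margin(2.0, 3.0, 4.0), 2))
```

The inverse-gamma model in the abstract replaces the single constant `sigma_rand` with a patient-to-patient distribution, which is what drives the median 10% margin increase.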

  1. Calculations of magnetic field errors caused by mechanical accuracy at infra-red undulator construction

    International Nuclear Information System (INIS)

    Matyushevskij, E.A.; Morozov, N.A.; Syresin, E.M.

    2005-01-01

    At the Joint Institute for Nuclear Research (Dubna), an electromagnetic undulator with a maximal magnetic field of 1.2 T and a 40 cm period is under development. Computer models of the undulator magnet system were built on the basis of the POISSON and RADIA codes. The undulator magnetic field imperfections due to design errors were simulated with these models

  2. On the symmetric α-stable distribution with application to symbol error rate calculations

    KAUST Repository

    Soury, Hamza

    2016-12-24

    The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single-input single-output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Simpler expressions of these error rates are then deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
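The starting point can be sketched numerically: recover the symmetric α-stable PDF from its characteristic function exp(−|γt|^α) by inverse Fourier transform (the paper goes further, to closed-form Fox H expressions). The two classical special cases, Gaussian (α = 2) and Cauchy (α = 1), serve as checks:

```python
import numpy as np

def sas_pdf(x, alpha, gamma=1.0, tmax=50.0, n=20001):
    # Symmetric alpha-stable PDF by numerically inverting the
    # characteristic function phi(t) = exp(-|gamma*t|^alpha):
    #   f(x) = (1/pi) * integral_0^inf exp(-(gamma*t)^alpha) * cos(t*x) dt
    t = np.linspace(0.0, tmax, n)
    integrand = np.exp(-(gamma * t) ** alpha) * np.cos(t * x)
    return np.trapz(integrand, t) / np.pi

# alpha = 2 is a Gaussian with variance 2*gamma^2, density 1/(2*gamma*sqrt(pi)) at 0
print(sas_pdf(0.0, 2.0))   # ~0.28209
# alpha = 1 is a Cauchy with scale gamma, density 1/(pi*gamma) at 0
print(sas_pdf(0.0, 1.0))   # ~0.31831
```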

  3. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  4. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

    We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238 U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)
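The effect at issue, linear interpolation of pointwise cross sections introducing an error that shrinks as the energy grid is refined (which linearizing pre-processing codes control via a tolerance), can be illustrated on a toy 1/v capture cross section over the benchmark window. The numbers are illustrative, not evaluated 238 U data:

```python
import math

def sigma(E):
    # illustrative 1/v capture cross section (barns), E in eV
    return 10.0 * math.sqrt(1.0 / E)

def max_interp_error(n_points, e_lo=2034.7, e_hi=3354.6):
    # maximum relative error of linear interpolation on an n-point energy grid,
    # probed at interior points of each panel
    grid = [e_lo + (e_hi - e_lo) * i / (n_points - 1) for i in range(n_points)]
    worst = 0.0
    for i in range(n_points - 1):
        a, b = grid[i], grid[i + 1]
        for k in range(1, 10):
            e = a + (b - a) * k / 10.0
            interp = sigma(a) + (sigma(b) - sigma(a)) * (e - a) / (b - a)
            worst = max(worst, abs(interp - sigma(e)) / sigma(e))
    return worst

for n in (3, 5, 17):
    print(n, max_interp_error(n))
```

Self-shielding factors are ratios of integrals of such interpolated data, and their temperature derivatives are small differences of those ratios, so a grid tolerance that is harmless for the cross section itself can still dominate the derivative.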

  5. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

    The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238 U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems

  6. Boundary integral method to calculate the sensitivity temperature error of microstructured fibre plasmonic sensors

    International Nuclear Information System (INIS)

    Esmaeilzadeh, Hamid; Arzi, Ezatollah; Légaré, François; Hassani, Alireza

    2013-01-01

    In this paper, using the boundary integral method (BIM), we simulate the effect of temperature fluctuation on the sensitivity of microstructured optical fibre (MOF) surface plasmon resonance (SPR) sensors. The final results indicate that, as the temperature increases, the refractometry sensitivity of our sensor decreases from 1300 nm/RIU at 0 °C to 1200 nm/RIU at 50 °C, leading to ∼7.7% sensitivity reduction and a sensitivity temperature error of 0.15% °C⁻¹ for this case. These results can be used for biosensing temperature-error adjustment in MOF SPR sensors, since biomaterials detection usually happens in this temperature range. Moreover, the signal-to-noise ratio (SNR) of our sensor decreases from 0.265 at 0 °C to 0.154 at 100 °C with an average reduction rate of ∼0.42% °C⁻¹. The results suggest that at lower temperatures the sensor has a higher SNR. (paper)
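The quoted per-degree error follows directly from the endpoint sensitivities, as a quick arithmetic check shows:

```python
# Sensitivity drops from 1300 to 1200 nm/RIU over 0..50 C (figures quoted
# in the abstract); reproduce the quoted reduction and per-degree error.
s0, s50, span = 1300.0, 1200.0, 50.0
reduction = (s0 - s50) / s0       # fractional sensitivity reduction
per_degree = reduction / span     # sensitivity temperature error per degree C
print(f"{reduction:.1%} total, {per_degree:.2%}/C")
```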

  7. Calculational analysis of errors for various models of an experiment on measuring leakage neutron spectra

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.; Deeva, V.V.; Prokof'eva, Z.A.

    1990-01-01

    The effect of the accuracy of the mathematical model of the system concerned on the calculation results is analysed using the BRAND program system. Consideration is given to the impact of the following factors: the accuracy with which the energy-angular characteristics of the neutron source are described, various degrees of approximation of the system geometry, and the adequacy of the Monte Carlo estimator to a real physical neutron detector. The analysis of the calculation results is based on experiments measuring leakage neutron spectra in spherical lead assemblies with a 14 MeV neutron source at the centre. 4 refs.; 2 figs.; 10 tabs

  8. Analyse des erreurs dans les calculs sur ordinateurs Error Analysis in Computing

    Directory of Open Access Journals (Sweden)

    Vignes J.

    2006-11-01

    Full Text Available This paper describes a new method for evaluating the error in the results of computation of an algorithm, errors due to the limited-precision arithmetic of the machine. The basic idea underlying the method is that while in algebra a given algorithm provides a single result r, this same algorithm carried out on a computer provides a set R of numerical results that are all representative of the exact algebraic result r. The permutation-perturbation method described here can be used to obtain the elements of R. The perturbation acts on the data and results of each elementary operation, and the permutation acts on the order in which operations are carried out. A statistical analysis of the elements of R is performed to determine the error committed. In practice, 2 to 4 R elements are sufficient for determining the error.
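A toy sketch of the permutation-perturbation idea (not the CESTAC/CADNA implementation): perturb each elementary operation at the last bit, permute the order of operations, and estimate the number of significant digits from the spread of the resulting set R. A cancellation-prone sum makes the effect visible:

```python
import itertools
import math
import random

random.seed(0)

def perturb(x, ulp=2 ** -52):
    # randomly flip the last bit, mimicking machine round-off
    return x * (1.0 + random.choice((-1.0, 1.0)) * ulp)

def unstable_sum(order):
    # a cancellation-prone sum; 'order' permutes the elementary operations
    terms = (1.0e16, 3.14159, -1.0e16, 2.71828)
    total = 0.0
    for i in order:
        total = perturb(total + terms[i])
    return total

# permutation (operation order) + perturbation (random last-bit rounding)
samples = [unstable_sum(p) for p in itertools.permutations(range(4))][:6]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
digits = math.log10(abs(mean) / spread) if spread else 16  # common digits
print(samples)
print("significant digits ~", round(digits, 1))
```

The exact algebraic result is 5.85987…, but the elements of R disagree in their leading digits, exposing how few digits of the computed result are trustworthy.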

  9. Reducing Systematic Errors in Oxide Species with Density Functional Theory Calculations

    DEFF Research Database (Denmark)

    Christensen, Rune; Hummelshøj, Jens S.; Hansen, Heine Anton

    2015-01-01

    Density functional theory calculations can be used to gain valuable insight into the fundamental reaction processes in metal−oxygen systems, e.g., metal−oxygen batteries. Here, the ability of a range of different exchange-correlation functionals to reproduce experimental enthalpies of formation...

  10. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV
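A quick consistency check on the quoted numbers: applying the −0.83% systematic error as a correction to an illustrative measured current, and verifying that both literature W-values fall inside the one-sigma interval of the reported 30.76 ± 0.95 eV:

```python
# The cup reads 0.83% below the true current, so correct upward
measured_na = 10.00                          # illustrative measured current, nA
corrected_na = measured_na / (1.0 - 0.0083)  # undo the -0.83% deficit
print(round(corrected_na, 3))

# W = mean energy per ion pair in helium, with its quoted uncertainty
w, dw = 30.76, 0.95
for reported in (29.9, 30.2):
    print(reported, "within 1 sigma:", abs(reported - w) <= dw)
```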

  11. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  12. Running Records and First Grade English Learners: An Analysis of Language Related Errors

    Science.gov (United States)

    Briceño, Allison; Klein, Adria F.

    2018-01-01

    The purpose of this study was to determine if first-grade English Learners made patterns of language related errors when reading, and if so, to identify those patterns and how teachers coded language related errors when analyzing English Learners' running records. Using research from the fields of both literacy and Second Language Acquisition, we…

  13. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    Science.gov (United States)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used. Mode numbers are (m,n)=(3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140--145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.
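The qualitative effect, a stochastic layer whose width is set by the perturbation amplitude, can be sketched with a generic area-preserving kicked map in action-angle variables carrying the quoted poloidal mode numbers. This is an illustrative stand-in, not the authors' simple map:

```python
import math

def kicked_map(theta, J, delta, modes):
    # generic area-preserving (kicked) map with multi-mode field errors:
    #   J'     = J + delta * sum_m sin(m*theta)
    #   theta' = theta + J'   (mod 2*pi)
    J = J + delta * sum(math.sin(m * theta) for m in modes)
    theta = (theta + J) % (2.0 * math.pi)
    return theta, J

def layer_width(delta, modes=(3, 4, 6, 7, 8, 9, 10, 11, 12), n_iter=2000):
    # spread of the action J along one orbit, a crude proxy for the
    # stochastic-layer width produced by noise + field errors
    theta, J = 1.0, 0.5
    lo = hi = J
    for _ in range(n_iter):
        theta, J = kicked_map(theta, J, delta, modes)
        lo, hi = min(lo, J), max(hi, J)
    return hi - lo

for d in (0.8e-5, 2.0e-5):  # the amplitude range quoted in the abstract
    print(d, layer_width(d))
```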

  14. Uncertainty of decay heat calculations originating from errors in the nuclear data and the yields of individual fission products

    International Nuclear Information System (INIS)

    Rudstam, G.

    1979-01-01

    The calculation of the abundance pattern of the fission products with due account taken of feeding from the fission of 235 U, 238 U, and 239 Pu, from the decay of parent nuclei, from neutron capture, and from delayed-neutron emission is described. By means of the abundances and the average beta and gamma energies, the decay heat in nuclear fuel is evaluated along with its error derived from the uncertainties of fission yields and nuclear properties of the individual fission products. (author)
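The summation method described can be sketched as follows, with illustrative (not evaluated) nuclide data and the per-nuclide uncertainties combined in quadrature, as is usual when the errors are taken as independent:

```python
import math

# Toy summation-method decay heat: P = sum_i lambda_i * N_i * (E_beta + E_gamma),
# with per-nuclide relative uncertainties propagated in quadrature.
nuclides = [
    # (lambda [1/s], N [atoms], E_beta + E_gamma [MeV], relative uncertainty)
    (1.0e-2, 1.0e18, 1.5, 0.05),
    (3.0e-4, 5.0e19, 0.8, 0.10),
    (2.0e-5, 2.0e20, 2.1, 0.20),
]

mev_to_watt = 1.602e-13  # 1 MeV/s expressed in watts
terms = [lam * n * e * mev_to_watt for lam, n, e, _ in nuclides]
power = sum(terms)
# independent per-nuclide errors add in quadrature
err = math.sqrt(sum((t * rel) ** 2
                    for t, (_, _, _, rel) in zip(terms, nuclides)))
print(round(power, 2), "+/-", round(err, 2), "W")
```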

  15. Error-Related Activity and Correlates of Grammatical Plasticity

    Science.gov (United States)

    Davidson, Doug J.; Indefrey, Peter

    2011-01-01

    Cognitive control involves not only the ability to manage competing task demands, but also the ability to adapt task performance during learning. This study investigated how violation-, response-, and feedback-related electrophysiological (EEG) activity changes over time during language learning. Twenty-two Dutch learners of German classified short prepositional phrases presented serially as text. The phrases were initially presented without feedback during a pre-test phase, and then with feedback in a training phase on two separate days spaced 1 week apart. The stimuli included grammatically correct phrases, as well as grammatical violations of gender and declension. Without feedback, participants’ classification was near chance and did not improve over trials. During training with feedback, behavioral classification improved and violation responses appeared to both types of violation in the form of a P600. Feedback-related negative and positive components were also present from the first day of training. The results show changes in the electrophysiological responses in concert with improving behavioral discrimination, suggesting that the activity is related to grammar learning. PMID:21960979

  16. [Event-related EEG potentials associated with error detection in psychiatric disorder: literature review].

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2010-01-01

    Error-related bioelectric signals constitute a special subgroup of event-related potentials. Researchers have identified two evoked potential components closely related to error processing, namely error-related negativity (ERN) and error positivity (Pe), and have linked these to specific cognitive functions. In our article we first give a brief description of these components, then, based on the available literature, we review differences in error-related evoked potentials observed in patients across psychiatric disorders. The PubMed and Medline search engines were used to identify all relevant articles published between 2000 and 2009. For the purpose of the current paper we reviewed publications summarizing results of clinical trials. Patients suffering from schizophrenia, anorexia nervosa or borderline personality disorder exhibited a decrease in the amplitude of error negativity when compared with healthy controls, while in cases of depression and anxiety an increase in the amplitude has been observed. Some of the articles suggest that specific personality variables, such as impulsivity, perfectionism, negative emotions or sensitivity to punishment, underlie these electrophysiological differences. Error-related electric activity has come into the focus of psychiatric research only recently, thus the amount of available data is significantly limited. However, since this is a relatively new field of research, the results available at present are noteworthy and promising for future electrophysiological investigations in psychiatric disorders.

  17. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious....... The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd....
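The odds ratios such case-control studies report are computed from 2×2 exposure tables; a minimal sketch with a Woolf-type (log-scale) 95% confidence interval and illustrative counts, not DRUID data:

```python
import math

def odds_ratio_ci(exp_cases, unexp_cases, exp_ctrls, unexp_ctrls, z=1.96):
    # Odds ratio from a 2x2 case-control table with a Woolf (log) 95% CI:
    #   OR = (a*d)/(b*c),  SE(ln OR) = sqrt(1/a + 1/b + 1/c + 1/d)
    or_ = (exp_cases * unexp_ctrls) / (unexp_cases * exp_ctrls)
    se = math.sqrt(1 / exp_cases + 1 / unexp_cases
                   + 1 / exp_ctrls + 1 / unexp_ctrls)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g. 40 of 200 injured drivers vs. 25 of 400 control drivers tested positive
or_, lo, hi = odds_ratio_ci(40, 160, 25, 375)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Systematic errors of the kind the study catalogues (selection bias, misclassification of exposure) shift the cell counts themselves, so they bias the OR in ways the confidence interval does not capture.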

  18. Propagation of errors from a null balance terahertz reflectometer to a sample's relative water content

    International Nuclear Information System (INIS)

    Hadjiloucas, S; Walker, G C; Bowen, J W; Zafiropoulos, A

    2009-01-01

    The THz water content index of a sample is defined and the advantages of using such a metric in estimating a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the errors in estimating the sample water content index.

  19. Relating faults in diagnostic reasoning with diagnostic errors and patient harm.

    NARCIS (Netherlands)

    Zwaan, L.; Thijs, A.; Wagner, C.; Wal, G. van der; Timmermans, D.R.M.

    2012-01-01

    Purpose: The relationship between faults in diagnostic reasoning, diagnostic errors, and patient harm has hardly been studied. This study examined suboptimal cognitive acts (SCAs; i.e., faults in diagnostic reasoning), related them to the occurrence of diagnostic errors and patient harm, and studied

  20. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors on the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month. HCPs with overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.

  1. Errores innatos del metabolismo de las purinas y otras enfermedades relacionadas Inborn purine metabolism errors and other related diseases

    Directory of Open Access Journals (Sweden)

    Jiovanna Contreras Roura

    2012-06-01

    growth, recurrent infections, self-mutilation, immunodeficiencies, unexplainable haemolytic anemia, gout-related arthritis, family history, consanguinity and adverse reactions to those drugs that are analogous of purines. The study of these diseases generally begins by quantifying serum uric acid and uric acid present in the urine, which is the final product of purine metabolism in human beings. Diet and drug consumption are among the pathological, physiological and clinical conditions capable of changing the level of this compound. This review was intended to disseminate information on the inborn purine metabolism errors as well as to facilitate the interpretation of the uric acid levels and other biochemical markers making the diagnosis of these diseases possible. Tables are included relating these diseases to the excretory levels of uric acid and other biochemical markers, the altered enzymes, the clinical symptoms, the model of inheritance, and, in some cases, the suggested treatment. This paper allowed us to affirm that variations in the uric acid levels and the presence of other biochemical markers in urine are important tools in screening some inborn purine metabolism errors, and also other related pathological conditions.

  2. Making related errors facilitates learning, but learners do not know it.

    Science.gov (United States)

    Huelser, Barbie J; Metcalfe, Janet

    2012-05-01

    Producing an error, so long as it is followed by corrective feedback, has been shown to result in better retention of the correct answers than does simply studying the correct answers from the outset. The reasons for this surprising finding, however, have not been investigated. Our hypothesis was that the effect might occur only when the errors produced were related to the targeted correct response. In Experiment 1, participants studied either related or unrelated word pairs, manipulated between participants. Participants either were given the cue and target to study for 5 or 10 s or generated an error in response to the cue for the first 5 s before receiving the correct answer for the final 5 s. When the cues and targets were related, error-generation led to the highest correct retention. However, consistent with the hypothesis, no benefit was derived from generating an error when the cue and target were unrelated. Latent semantic analysis revealed that the errors generated in the related condition were related to the target, whereas they were not related to the target in the unrelated condition. Experiment 2 replicated these findings in a within-participants design. We found, additionally, that people did not know that generating an error enhanced memory, even after they had just completed the task that produced substantial benefits.

  3. Analysis of the methodical component of core power density field calculation error on the basis of Mochovce-1 commissioning tests

    International Nuclear Information System (INIS)

    Brik, A.

    2009-01-01

    In the first decade of June 2008, during the power commissioning of the reactor at the Mochovce NPP unit 1, an experiment with reducing the thermal power of the core almost to the balance-of-plant (BOP) needs was performed. After the reactor had operated for seven hours at low power (about 200-220 MW (thermal)), its power was increased (at a rate of about 0.25% of N_nom/min) to the initial level, close to 107% (1471 MW). During the experiment, core parameters, which were subsequently used for comparing the measured data with the results of experiment simulation calculations, were recorded in the reactor in-core monitoring system database. Calculated and measured levels of critical concentrations of boric acid were compared, along with power density distributions by fuel elements and assemblies obtained both by the KRUIZ in-core monitoring system and on the basis of calculations simulating reactor operation in accordance with the given core power variation schedule. The final stage consisted of assessing the methodical component of the power density micro- and macro-field calculation error in the core of the Mochovce-1 reactor operating with varying load. (author)

  4. Analysis of the methodical component of core power density field calculation error on the basis of Mochovce-1 commissioning tests

    International Nuclear Information System (INIS)

    Brik, A.

    2009-01-01

    In the first decade of June 2008, during the power commissioning of the reactor at Mochovce NPP unit 1, an experiment with reducing the thermal power of the core almost to the balance-of-plant needs was performed. After the reactor had operated for seven hours at low power (about 200-220 MW (thermal)), its power was increased (at a rate of about 0.25% of N_nom/min) to the initial level, close to 107% (1471 MW). During the experiment, core parameters, which were subsequently used for comparing the measured data with the results of experiment simulation calculations, were recorded in the reactor in-core monitoring system's database. Calculated and measured levels of critical concentrations of boric acid were compared, along with power density distributions by fuel elements and assemblies obtained both by the KRUIZ in-core monitoring system and on the basis of calculations simulating reactor operation in accordance with the given core power variation schedule. The final stage consisted of assessing the methodical component of the power density micro- and macro-fields' calculation error in the core of the Mochovce-1 reactor operating with varying load. (Authors)

  5. An individual differences approach to multiple-target visual search errors: How search errors relate to different characteristics of attention.

    Science.gov (United States)

    Adamo, Stephen H; Cain, Matthew S; Mitroff, Stephen R

    2017-12-01

    A persistent problem in visual search is that searchers are more likely to miss a target if they have already found another in the same display. This phenomenon, the Subsequent Search Miss (SSM) effect, has remained despite being a known issue for decades. Increasingly, evidence supports a resource depletion account of SSM errors-a previously detected target consumes attentional resources leaving fewer resources available for the processing of a second target. However, "attention" is broadly defined and is composed of many different characteristics, leaving considerable uncertainty about how attention affects second-target detection. The goal of the current study was to identify which attentional characteristics (i.e., selection, limited capacity, modulation, and vigilance) related to second-target misses. The current study compared second-target misses to an attentional blink task and a vigilance task, which both have established measures that were used to operationally define each of four attentional characteristics. Second-target misses in the multiple-target search were correlated with (1) a measure of the time it took for the second target to recover from the blink in the attentional blink task (i.e., modulation), and (2) target sensitivity (d') in the vigilance task (i.e., vigilance). Participants with longer recovery and poorer vigilance had more second-target misses in the multiple-target visual search task. The results add further support to a resource depletion account of SSM errors and highlight that worse modulation and poor vigilance reflect a deficit in attentional resources that can account for SSM errors. Copyright © 2016 Elsevier Ltd. All rights reserved.
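Target sensitivity d′ in a vigilance task is the standard signal-detection measure, the z-transformed hit rate minus the z-transformed false-alarm rate; a minimal sketch with illustrative rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    # sensitivity d' = z(hit rate) - z(false-alarm rate)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# a vigilant observer vs. a poorly vigilant one (illustrative rates)
print(round(d_prime(0.90, 0.10), 2))  # strong sensitivity
print(round(d_prime(0.60, 0.40), 2))  # weak sensitivity
```

In practice, extreme rates (0 or 1) are nudged inward (e.g. the log-linear correction) before the z-transform, since `inv_cdf` is undefined at the boundaries.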

  6. The role of hand of error and stimulus orientation in the relationship between worry and error-related brain activity: Implications for theory and practice.

    Science.gov (United States)

    Lin, Yanli; Moran, Tim P; Schroder, Hans S; Moser, Jason S

    2015-10-01

    Anxious apprehension/worry is associated with exaggerated error monitoring; however, the precise mechanisms underlying this relationship remain unclear. The current study tested the hypothesis that the worry-error monitoring relationship involves left-lateralized linguistic brain activity by examining the relationship between worry and error monitoring, indexed by the error-related negativity (ERN), as a function of hand of error (Experiment 1) and stimulus orientation (Experiment 2). Results revealed that worry was exclusively related to the ERN on right-handed errors committed by the linguistically dominant left hemisphere. Moreover, the right-hand ERN-worry relationship emerged only when stimuli were presented horizontally (known to activate verbal processes) but not vertically. Together, these findings suggest that the worry-ERN relationship involves left hemisphere verbal processing, elucidating a potential mechanism to explain error monitoring abnormalities in anxiety. Implications for theory and practice are discussed. © 2015 Society for Psychophysiological Research.

  7. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

Full Text Available In consideration of the situation that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factor analysis and classification system model. Using data from accidents that happened worldwide between 2008 and 2011, the correlation between human-error factors can be analyzed quantitatively using the method of grey relational analysis. Research results show that the order of main factors affecting pilot human-error factors is preconditions for unsafe acts, unsafe supervision, organization and unsafe acts. The factor related most closely with second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes and the relevancy between second-level indexes can also be analyzed quantitatively.

  8. Calculation and analysis of thermodynamic relations for superconductors

    International Nuclear Information System (INIS)

    Nazarenko, A.B.

    1989-01-01

The absorption coefficients of high-frequency and low-frequency sound have been calculated on the basis of the Ginzburg-Landau theory. This sound is a wave of periodic adiabatic bulk compressions and rarefactions of frequency ω in an isotropic superconductor near the transition temperature. Thermodynamic relations have been obtained for the abrupt changes in physical quantities produced by the transition from the normal state to the superconducting state. These relations are similar to the Ehrenfest relations. The above-mentioned thermodynamic quantities are compared with the published experimental results on YBa2Cu3O7-δ. Experiments on the absorption of ultrasound in the recently discovered superconductors may provide information on the phase transition type and thermodynamic relations for these superconductors, in particular the T_c-versus-pressure curve. Similar calculations have been carried out for He-transition experiments with ferromagnetic materials. The order parameter in the thermodynamic potential was assumed to be isotropic

  9. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  10. Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.

    Science.gov (United States)

    Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A

    2013-04-15

Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relatively new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
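The hierarchical regression logic in this record (does the error-related negativity add explained variance beyond the control variables?) can be sketched as an incremental-R² computation. This is an illustrative reconstruction on synthetic data, not the study's dataset; all variable names and effect sizes below are hypothetical:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 49                                  # sample size as in the abstract
severity = rng.normal(size=n)           # control variable 1 (hypothetical)
craving = rng.normal(size=n)            # control variable 2 (hypothetical)
ern = rng.normal(size=n)                # ERN amplitude (hypothetical)
# synthetic outcome in which the ERN carries unique variance
days_use = 0.2 * severity + 0.1 * craving - 0.6 * ern + rng.normal(size=n)

# step 1: controls only; step 2: controls plus the ERN
r2_controls = r_squared(np.column_stack([severity, craving]), days_use)
r2_full = r_squared(np.column_stack([severity, craving, ern]), days_use)
delta_r2 = r2_full - r2_controls        # the ERN's added explained variance
```

In the study this ΔR² was 7.4%; here it is whatever the synthetic effect size produces.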

  11. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  12. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    Science.gov (United States)

    2016-03-01

[Report documentation page] Master's thesis by Nicholas M. Chisler, March 2016: Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty.

  13. Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods

    International Nuclear Information System (INIS)

    Baker, A.R.

    1982-07-01

A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group for which analytical solutions were possible. A computer code SLAB was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms as far as h². It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h⁴. In this case, the critical parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
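The central result of this record (criticality parameter error scaling as h² for the standard difference equations) can be illustrated with a minimal bare-slab, one-group eigenvalue solve. This is a sketch with illustrative cross sections, not the SLAB code itself; halving the mesh size should shrink the k-eigenvalue error by roughly a factor of four:

```python
import numpy as np

def k_eff_fd(n, a=100.0, D=1.0, sig_a=0.01, nu_sig_f=0.012):
    """One-group bare-slab criticality with the standard 3-point
    finite-difference Laplacian and zero-flux boundaries.
    Returns k_eff for n interior mesh points (h = a/(n+1))."""
    h = a / (n + 1)
    # tridiagonal operator: -D * d2/dx2 + absorption on the diagonal
    main = np.full(n, 2.0 * D / h**2 + sig_a)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # A phi = (1/k) nu_sig_f phi  ->  k = nu_sig_f / smallest eigenvalue
    lam = np.linalg.eigvalsh(A)[0]
    return nu_sig_f / lam

def k_analytic(a=100.0, D=1.0, sig_a=0.01, nu_sig_f=0.012):
    """Exact fundamental-mode result using the buckling B^2 = (pi/a)^2."""
    B2 = (np.pi / a) ** 2
    return nu_sig_f / (sig_a + D * B2)

k_ref = k_analytic()
err_h = abs(k_eff_fd(49) - k_ref)    # h = 2 cm
err_h2 = abs(k_eff_fd(99) - k_ref)   # h = 1 cm
ratio = err_h / err_h2               # close to 4: error scales as h^2
```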

  14. Event-Related Potentials for Post-Error and Post-Conflict Slowing

    Science.gov (United States)

    Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R.

    2014-01-01

In a reaction time task, people typically slow down following an error or conflict, termed post-error slowing (PES) and post-conflict slowing (PCS), respectively. Despite many studies of the cognitive mechanisms, the neural responses of PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, error-positivity (Pe) but not error-related negativity (ERN) was greater in the stop error trials preceding PES than non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work of cognitive control by specifying the neural correlates of PES and PCS in the stop signal task. PMID:24932780

  15. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

Produced by radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types for DEM data. Relative error relates to surface quality created by DEM data, so it relates to geomorphology and hydrologic applications using DEM data. Taking Shanxi Plateau of China as the study area, this research evaluated the relative error to typical open global DEM datasets including Shuttle Radar Terrain Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS world 3D-30m (AW3D) data. Through process and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using the distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was obtained using slope values based on the vertical error difference and the horizontal distance of the point pairs. Finally, the false slope ratio (FSR) index was computed through analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were categorically compared for the four DEM datasets under different slope classes. Research results show: Overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error; then the SRTM1 data, its values are a little higher than AW3D data; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all the four DEM data have better performance in flat areas but worse performance in sloping regions; AW3D has the best performance in all the slope classes, a little better than SRTM1; with slope increasing
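The vertical-error statistics and the slope-style relative error described above can be sketched as follows. This is a minimal illustration with hypothetical numbers, not the paper's ICESat/GLA14 processing chain:

```python
import numpy as np

def vertical_error_stats(dem_z, ref_z):
    """ME, MAE, RMSE and STD of DEM-minus-reference elevation errors."""
    e = np.asarray(dem_z, float) - np.asarray(ref_z, float)
    return {"ME": e.mean(), "MAE": np.abs(e).mean(),
            "RMSE": np.sqrt((e ** 2).mean()), "STD": e.std(ddof=1)}

def relative_error_slopes(err_a, err_b, dist):
    """Slope-style relative error for point pairs: difference of the
    two vertical errors over the horizontal pair distance (m/m)."""
    return (np.asarray(err_a) - np.asarray(err_b)) / np.asarray(dist)

# toy example (hypothetical elevations and pair distances, not the paper's data)
stats = vertical_error_stats([103.2, 98.7, 250.1], [101.9, 99.5, 249.0])
rel = relative_error_slopes([1.3, -0.8], [1.1, 0.2], [150.0, 300.0])
```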

  16. Intelligence and Neurophysiological Markers of Error Monitoring Relate to Children's Intellectual Humility.

    Science.gov (United States)

    Danovitch, Judith H; Fisher, Megan; Schroder, Hans; Hambrick, David Z; Moser, Jason

    2017-09-18

    This study explored developmental and individual differences in intellectual humility (IH) among 127 children ages 6-8. IH was operationalized as children's assessment of their knowledge and willingness to delegate scientific questions to experts. Children completed measures of IH, theory of mind, motivational framework, and intelligence, and neurophysiological measures indexing early (error-related negativity [ERN]) and later (error positivity [Pe]) error-monitoring processes related to cognitive control. Children's knowledge self-assessment correlated with question delegation, and older children showed greater IH than younger children. Greater IH was associated with higher intelligence but not with social cognition or motivational framework. ERN related to self-assessment, whereas Pe related to question delegation. Thus, children show separable epistemic and social components of IH that may differentially contribute to metacognition and learning. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  17. Masked and unmasked error-related potentials during continuous control and feedback

    Science.gov (United States)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor’s position by means of a joystick. The cursor’s position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor’s trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the

  18. Calculating excess lifetime risk in relative risk models

    International Nuclear Information System (INIS)

    Vaeth, M.; Pierce, D.A.

    1990-01-01

    When assessing the impact of radiation exposure it is common practice to present the final conclusions in terms of excess lifetime cancer risk in a population exposed to a given dose. The present investigation is mainly a methodological study focusing on some of the major issues and uncertainties involved in calculating such excess lifetime risks and related risk projection methods. The age-constant relative risk model used in the recent analyses of the cancer mortality that was observed in the follow-up of the cohort of A-bomb survivors in Hiroshima and Nagasaki is used to describe the effect of the exposure on the cancer mortality. In this type of model the excess relative risk is constant in age-at-risk, but depends on the age-at-exposure. Calculation of excess lifetime risks usually requires rather complicated life-table computations. In this paper we propose a simple approximation to the excess lifetime risk; the validity of the approximation for low levels of exposure is justified empirically as well as theoretically. This approximation provides important guidance in understanding the influence of the various factors involved in risk projections. Among the further topics considered are the influence of a latent period, the additional problems involved in calculations of site-specific excess lifetime cancer risks, the consequences of a leveling off or a plateau in the excess relative risk, and the uncertainties involved in transferring results from one population to another. The main part of this study relates to the situation with a single, instantaneous exposure, but a brief discussion is also given of the problem with a continuous exposure at a low-dose rate
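The life-table computation and the simple approximation discussed above can be sketched with a discrete life table under an age-constant relative risk model. The hazards below are hypothetical, and the approximation shown (excess relative risk times the baseline lifetime risk) is only a low-exposure heuristic in the spirit of the abstract, not the paper's exact formula:

```python
import numpy as np

def lifetime_cancer_risk(cancer_h, other_h, err=0.0, age_exp=0):
    """Discrete life-table lifetime cancer risk. cancer_h and other_h
    are annual cancer and all-other-cause hazards by age; under an
    age-constant relative risk model the cancer hazard is multiplied
    by (1 + err) from age_exp onwards."""
    surv, risk = 1.0, 0.0
    for age, (hc, ho) in enumerate(zip(cancer_h, other_h)):
        hc_eff = hc * (1.0 + err) if age >= age_exp else hc
        risk += surv * hc_eff                      # die of cancer this year
        surv *= max(0.0, 1.0 - hc_eff - ho)        # survive to next year
    return risk

# toy hazards (hypothetical, flat with age, 100 one-year age bands)
hc = np.full(100, 0.002)
ho = np.full(100, 0.01)

base = lifetime_cancer_risk(hc, ho)                # unexposed population
exposed = lifetime_cancer_risk(hc, ho, err=0.1)    # ERR = 0.1 from birth
excess = exposed - base                            # excess lifetime risk
approx = 0.1 * base                                # ERR x baseline risk
```

For small excess relative risks the simple product comes close to the full life-table difference, which illustrates why such approximations are useful guidance.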

  19. Electrophysiological Endophenotypes and the Error-Related Negativity (ERN) in Autism Spectrum Disorder: A Family Study

    Science.gov (United States)

    Clawson, Ann; South, Mikle; Baldwin, Scott A.; Larson, Michael J.

    2017-01-01

    We examined the error-related negativity (ERN) as an endophenotype of ASD by comparing the ERN in families of ASD probands to control families. We hypothesized that ASD probands and families would display reduced-amplitude ERN relative to controls. Participants included 148 individuals within 39 families consisting of a mother, father, sibling,…

  20. Senior High School Students' Errors on the Use of Relative Words

    Science.gov (United States)

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  1. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    Science.gov (United States)

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  2. Geometrical correction for the inter- and intramolecular basis set superposition error in periodic density functional theory calculations.

    Science.gov (United States)

    Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan

    2013-09-26

    We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.

  3. Relative efficiency calculation of a HPGe detector using MCNPX code

    International Nuclear Information System (INIS)

    Medeiros, Marcos P.C.; Rebello, Wilson F.; Lopes, Jose M.; Silva, Ademir X.

    2015-01-01

High-purity germanium detectors (HPGe) are mandatory tools for spectrometry because of their excellent energy resolution. The efficiency of such detectors, quoted in the list of specifications by the manufacturer, frequently refers to the relative full-energy peak efficiency, related to the absolute full-energy peak efficiency of a 7.6 cm x 7.6 cm (diameter x height) NaI(Tl) crystal, based on the 1.33 MeV peak of a 60Co source positioned 25 cm from the detector. In this study, we used MCNPX code to simulate a HPGe detector (Canberra GC3020), from the Real-Time Neutrongraphy Laboratory of UFRJ, to survey the spectrum of a 60Co source located 25 cm from the detector in order to calculate and confirm the efficiency declared by the manufacturer. Agreement between experimental and simulated data was achieved. The model under development will be used for calculation and comparison purposes with the detector calibration curve from the Genie2000™ software, also serving as a reference for future studies. (author)
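The relative-efficiency convention described above can be sketched numerically: the absolute full-energy-peak efficiency at 1.33 MeV and 25 cm is divided by 1.2×10⁻³, the conventional absolute efficiency of the 7.6 cm x 7.6 cm NaI(Tl) reference crystal. The measurement values below are hypothetical:

```python
def absolute_fep_efficiency(net_counts, live_time_s, activity_bq,
                            branching=0.9998):
    """Absolute full-energy-peak efficiency for the 60Co 1.33 MeV line:
    net peak counts divided by photons emitted during the live time."""
    emitted = activity_bq * live_time_s * branching
    return net_counts / emitted

def relative_efficiency(abs_eff, nai_ref=1.2e-3):
    """Relative efficiency (percent) versus the 3x3 in. NaI(Tl)
    standard, whose 1.33 MeV absolute efficiency at 25 cm is taken
    by convention as 1.2e-3."""
    return 100.0 * abs_eff / nai_ref

# hypothetical measurement: 43,200 net counts in 1 h from a 33.3 kBq source
abs_eff = absolute_fep_efficiency(net_counts=43_200, live_time_s=3600,
                                  activity_bq=33_300)
rel = relative_efficiency(abs_eff)   # about 30 percent
```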

  4. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

Full text: When we try to obtain information about a quantum system, we need to perform measurement on the system. The measurement process causes unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not established yet, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. 
Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  5. Biases and statistical errors in Monte Carlo burnup calculations: an unbiased stochastic scheme to solve Boltzmann/Bateman coupled equations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C.M.

    2011-01-01

External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not contain benchmark data. Providing a reference scheme implies being able to associate statistical errors to any neutronic value of interest like k_eff, reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being measured with an uncertainty). Therefore, we have to quantify and correct this bias. This will be achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
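The bias described above arises because the Bateman solution is a nonlinear (exponential) function of statistically estimated reaction rates, so averaging per-cycle depletions overestimates the true result (Jensen's inequality). A minimal one-nuclide sketch, with an assumed noise level on the rate estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# true one-group depletion: N(t) = N0 * exp(-r * t), with N0 = 1
r_true, t, n_cycles = 0.05, 10.0, 20_000

# each Monte Carlo "cycle" estimates the reaction rate with statistical noise
r_hat = rng.normal(r_true, 0.02, size=n_cycles)

n_exact = np.exp(-r_true * t)
n_biased = np.exp(-r_hat * t).mean()   # average of per-cycle depletions

# Jensen's inequality: E[exp(-r_hat * t)] > exp(-E[r_hat] * t), so the
# naive coupled scheme overestimates N(t) by roughly 0.5*(sigma*t)^2 * n_exact
bias = n_biased - n_exact
```

The convexity of the exponential makes the bias strictly positive whatever the noise distribution, which is why the paper derives an unbiased estimator of the matrix exponential rather than averaging per-cycle solutions.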

  6. Age-related changes in error processing in young children: A school-based investigation

    Directory of Open Access Journals (Sweden)

    Jennie K. Grammer

    2014-07-01

Full Text Available Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N = 96, mean age = 5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing.

  7. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  8. Social Errors in Four Cultures: Evidence about Universal Forms of Social Relations.

    Science.gov (United States)

    Fiske, Alan Page

    1993-01-01

    To test the cross-cultural generality of relational-models theory, 4 studies with 70 adults examined social errors of substitution of persons for Bengali, Korean, Chinese, and Vai (Liberia and Sierra Leone) subjects. In all four cultures, people tend to substitute someone with whom they have the same basic relationship. (SLD)

  9. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
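The UMBRAE measure summarized above can be sketched as follows, assuming the published definition: bounded relative absolute errors e/(e + e*) against a benchmark forecast are averaged, then unscaled via MBRAE/(1 - MBRAE). The data values are hypothetical:

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (per Chen,
    Twycross & Garibaldi, 2017). benchmark is the user-selected
    reference forecast, e.g. the naive (random-walk) forecast."""
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)             # bounded in [0, 1]
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)        # unscaled: < 1 beats the benchmark

# hypothetical series: actual values, a naive benchmark, and a forecast
actual = np.array([10.0, 12.0, 13.0, 12.5])
naive = np.array([9.0, 10.0, 12.0, 13.0])       # previous observations
forecast = np.array([10.5, 11.5, 13.2, 12.4])

score = umbrae(actual, forecast, naive)          # < 1: better than naive
```

The bounding step gives resistance to outliers, while the final unscaling restores an interpretable "relative to benchmark" scale.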

  10. Error-related ERP components and individual differences in punishment and reward sensitivity

    NARCIS (Netherlands)

    Boksem, Maarten A. S.; Tops, Mattie; Wester, Anne E.; Meijman, Theo F.; Lorist, Monique M.

    2006-01-01

Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on

  11. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Science.gov (United States)

    2010-10-01

Challenges to determinations of an insufficient regulatory fee payment or delinquent fees should be made in writing. A challenge to a determination that a party is delinquent in paying a standard regulatory fee...

  12. Evaluation of set-up errors and calculation of set-up margins in 3-D conformal radiotherapy treatments; Evaluacion de errores de set-up y calculo de margenes de configuracion en tratamientos de radioterapia CONFORMADA 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Donis, S.; Robayna, B.; Gonzalez, A.; Hernandez Armas, J.

    2011-07-01

The use of IGRT techniques provides knowledge of the errors made in patient positioning, enables population studies, and allows margins to be estimated for each population. In this paper we evaluate set-up errors at 3 different locations and, from these, calculate the set-up margins (SM).

  13. Error signals in the subthalamic nucleus are related to post-error slowing in patients with Parkinson's disease

    NARCIS (Netherlands)

    Siegert, S.; Herrojo Ruiz, M.; Brücke, C.; Hueble, J.; Schneider, H.G.; Ullsperger, M.; Kühn, A.A.

    2014-01-01

    Error monitoring is essential for optimizing motor behavior. It has been linked to the medial frontal cortex, in particular to the anterior midcingulate cortex (aMCC). The aMCC subserves its performance-monitoring function in interaction with the basal ganglia (BG) circuits, as has been demonstrated

  14. Working memory capacity and task goals modulate error-related ERPs.

    Science.gov (United States)

    Coleman, James R; Watson, Jason M; Strayer, David L

    2018-03-01

    The present study investigated individual differences in information processing following errant behavior. Participants were initially classified as high or as low working memory capacity using the Operation Span Task. In a subsequent session, they then performed a high congruency version of the flanker task under both speed and accuracy stress. We recorded ERPs and behavioral measures of accuracy and response time in the flanker task with a primary focus on processing following an error. The error-related negativity was larger for the high working memory capacity group than for the low working memory capacity group. The positivity following an error (Pe) was modulated to a greater extent by speed-accuracy instruction for the high working memory capacity group than for the low working memory capacity group. These data help to explicate the neural bases of individual differences in working memory capacity and cognitive control. © 2017 Society for Psychophysiological Research.

  15. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Science.gov (United States)

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors, underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs do reflect more than mere descriptive taxonomies.

  16. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Directory of Open Access Journals (Sweden)

    Zrinka Sosic-Vasic

Full Text Available The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under present statistical thresholds no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. Present results illustrate that for predicting individual responses to errors, underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs do reflect more than mere descriptive taxonomies.

  17. Relative Hazard and Risk Measure Calculation Methodology Rev 1

    International Nuclear Information System (INIS)

    Stenner, Robert D.; White, Michael K.; Strenge, Dennis L.; Aaberg, Rosanne L.; Andrews, William B.

    2000-01-01

Documentation of the methodology used to calculate relative hazard and risk measure results for the DOE complex-wide risk profiles. This methodology is used on major site risk profiles. In February 1997, the Center for Risk Excellence (CRE) was created and charged as a technical, field-based partner to the Office of Science and Risk Policy (EM-52). One of the initial charges to the CRE is to assist the sites in the development of ''site risk profiles.'' These profiles are to be relatively short summaries (periodically updated) that present a broad perspective on the major risk-related challenges that face the respective site. The risk profiles are intended to serve as a high-level communication tool for interested internal and external parties to enhance the understanding of these risk-related challenges. The risk profiles for each site have been designed to qualitatively present the following information: (1) a brief overview of the site, (2) a brief discussion on the historical mission of the site, (3) a quote from the site manager indicating the site's commitment to risk management, (4) a listing of the site's top risk-related challenges, (5) a brief discussion and detailed table presenting the site's current risk picture, (6) a brief discussion and detailed table presenting the site's future risk reduction picture, and (7) graphic illustrations of the projected management of the relative hazards at the site. The graphic illustrations were included to provide the reader of the risk profiles with a high-level mental picture to associate with all the qualitative information presented in the risk profile. Inclusion of these graphic illustrations presented the CRE with the challenge of how to fold this high-level qualitative risk information into a system to produce a numeric result that would depict the relative change in hazard, associated with each major risk management action, so it could be presented graphically. This report presents the methodology developed

  18. Error-related negativity and tic history in pediatric obsessive-compulsive disorder.

    Science.gov (United States)

    Hanna, Gregory L; Carrasco, Melisa; Harbin, Shannon M; Nienhuis, Jenna K; LaRosa, Christina E; Chen, Poyu; Fitzgerald, Kate D; Gehring, William J

    2012-09-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. ERN amplitude was significantly increased in patients with OCD compared with healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and those with non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measurement that may serve as a biomarker for non-tic-related OCD. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  19. Task types and error types involved in the human-related unplanned reactor trip events

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2008-01-01

In this paper, the contribution of task types and error types involved in the human-related unplanned reactor trip events that occurred between 1986 and 2006 in Korean nuclear power plants is analysed in order to establish a strategy for reducing the human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of an event description. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) take up about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating possible (negative) impacts of planned actions or erroneous actions, as well as an appropriate human error prediction technique, should be developed.

  20. Task types and error types involved in the human-related unplanned reactor trip events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-12-15

In this paper, the contribution of task types and error types involved in the human-related unplanned reactor trip events that occurred between 1986 and 2006 in Korean nuclear power plants is analysed in order to establish a strategy for reducing the human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of an event description. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) take up about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating possible (negative) impacts of planned actions or erroneous actions, as well as an appropriate human error prediction technique, should be developed.

  1. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder (OCD)

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential following an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relationship of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. Method: The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. Results: ERN amplitude was significantly increased in OCD patients compared to healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in either patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. Conclusions: The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measure that may serve as a biomarker for non-tic-related OCD. PMID:22917203

  2. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital; hence, interpretation of the results must be tentative. A total of 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes of technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  3. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Directory of Open Access Journals (Sweden)

    Małgorzata Kossowska

    2018-03-01

Full Text Available Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with the reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  4. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with the reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  5. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with the reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure). PMID:29636709

  6. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

Background: The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings: We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions: We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
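The degrees-of-freedom check described in the abstract can be made concrete: for an independent-samples t test, df = N − 2 (N − 1 for a one-sample or paired test), so a reported df that implies fewer cases than the reported sample size suggests undisclosed exclusions or missing data. The helper below is an illustrative sketch, not the authors' actual procedure, and ignores Welch-corrected (non-integer) df.

```python
def implied_n(reported_df, test_type="independent"):
    """Sample size implied by a reported t-test df:
    df = N - 2 for an independent-samples t test,
    df = N - 1 for a one-sample or paired test."""
    return reported_df + (2 if test_type == "independent" else 1)

def flag_discrepancy(reported_df, reported_n, test_type="independent"):
    """True when the df implies fewer cases than the reported sample,
    i.e. some data may have been excluded without being reported."""
    return implied_n(reported_df, test_type) < reported_n
```

For example, a paper reporting N = 40 but t(36) for an independent-samples test implies two unreported exclusions.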

  7. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has accordingly shown the strong impact of these two performance indicators on the design and implementation of wireless technologies. However, to the best of our knowledge, direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system using binary modulation schemes for signaling with a limited bandwidth and operating over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of our newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.

  8. Software platform for managing the classification of error-related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. Next, the classifier can be used to classify any EP curve that has been entered into the database.

  9. Calculation of coolant temperature sensitivity related to thermohydraulic parameters

    International Nuclear Information System (INIS)

    Silva, F.C. da; Andrade Lima, F.R. de

    1985-01-01

The viability of applying the generalized Perturbation Theory (GPT) to sensitivity calculations for thermal-hydraulic problems is verified. The TEMPERA code was developed in FORTRAN-IV for transient calculations of the axial temperature distribution in a channel of a PWR reactor and of the associated importance function, as well as of the effects of variations of thermal-hydraulic parameters on the coolant temperature. The results are compared with those obtained by direct calculation. (M.C.K.) [pt

  10. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of a CD player and a model of the atmospheric storm track.

  11. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    Science.gov (United States)

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. To examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to + 0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of errors (hand = 0.97, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar method are established independently from each other, using different samples.
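Because the errors of the two methods were found to be uncorrelated, the combined estimate can be formed as an inverse-variance weighted average of the two age estimates, and its standard deviation follows directly. A minimal sketch under that assumption (the exact weighting used in the paper may differ; the ages below are illustrative, the error SDs are the reported ones):

```python
import math

def combine_age_estimates(age_hand, sd_hand, age_teeth, sd_teeth):
    """Inverse-variance weighted average of two age estimates with
    uncorrelated errors, plus the standard deviation of the result:
    SD_combined = sqrt(1 / (1/sd1^2 + 1/sd2^2))."""
    w_hand = 1.0 / sd_hand ** 2
    w_teeth = 1.0 / sd_teeth ** 2
    combined = (w_hand * age_hand + w_teeth * age_teeth) / (w_hand + w_teeth)
    combined_sd = math.sqrt(1.0 / (w_hand + w_teeth))
    return combined, combined_sd

# With the reported error SDs (hand: 0.97 y, third molars: 1.35 y) the
# combined SD comes out to about 0.79 years, matching the abstract;
# the input ages themselves are made up for illustration.
age, sd = combine_age_estimates(17.2, 0.97, 17.6, 1.35)
```

Note that the combined SD depends only on the two method SDs, not on the individual estimates, which is why the 0.97/1.35 → 0.79 improvement holds for every case.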

  12. Error-related negativity varies with the activation of gender stereotypes.

    Science.gov (United States)

    Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin

    2008-09-19

    The error-related negativity (ERN) has been suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus was a picture of a male or female face and the target stimulus was either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials was significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN elicited in this task has two sources: operation errors, and the conflict between gender stereotype activation and non-prejudice beliefs. Gender stereotype activation may be the key factor leading to this difference in ERN; in other words, stereotype activation in this experimental paradigm may be indexed by the ERN.

  13. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available. This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, stochastically independent pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained from solutions to discrete optimization problems. They allow application of both types of comparisons, binary and multivalent (this applies to the tolerance and preference relations). The estimates can be verified statistically; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems in forecasting, financial engineering and bio-cybernetics. (original abstract)
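
    As a toy illustration of why repeated, stochastically independent comparisons make such relations estimable (this is not the paper's discrete-optimization estimator): with a per-comparison error probability below 0.5, a simple majority over repeated comparisons of a pair recovers the true relation direction with high probability.

    ```python
    import random

    # Toy model: the true relation for each pair is encoded as +1; each single
    # comparison reports it incorrectly with probability p_error < 0.5.
    random.seed(0)
    p_error, n_repeats, n_pairs = 0.3, 25, 200

    correct = 0
    for _ in range(n_pairs):
        # majority vote over n_repeats independent noisy comparisons of one pair
        votes = sum(1 if random.random() > p_error else -1 for _ in range(n_repeats))
        correct += votes > 0
    print(correct / n_pairs)  # close to 1.0 despite 30% per-comparison errors
    ```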

  14. The orthopaedic error index: development and application of a novel national indicator for assessing the relative safety of hospital care using a cross-sectional approach.

    Science.gov (United States)

    Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz

    2013-11-21

    The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.
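
    A hypothetical sketch of the index construction described above: the abstract defines the Orthopaedic Error Index only as the sum of an error-propensity score and a severity score, so the component scores and the outlier threshold below are illustrative assumptions.

    ```python
    # Made-up (propensity, severity) scores per hospital; the abstract does not
    # define how the two components are scored, only that the index is their sum.
    scores = {"A": (4, 2), "B": (5, 2), "C": (9, 6), "D": (4, 2), "E": (10, 6)}
    index = {h: propensity + severity for h, (propensity, severity) in scores.items()}

    # flag hospitals more than one sample SD above the mean (illustrative rule)
    mean = sum(index.values()) / len(index)
    sd = (sum((v - mean) ** 2 for v in index.values()) / (len(index) - 1)) ** 0.5
    outliers = sorted(h for h, v in index.items() if v > mean + sd)
    print(outliers)
    ```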

  15. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I" 2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
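
    One of the controlled approximations mentioned, the time-step error, is commonly handled by computing the energy at several time steps and extrapolating to zero time step. A minimal sketch with made-up energies that lie on a line E(tau) = E0 + a*tau:

    ```python
    # Illustrative control of the DMC time-step error by linear extrapolation
    # of the total energy to tau -> 0 (energies are made up, in hartree).
    taus = [0.04, 0.02, 0.01]                # time steps (a.u.)
    energies = [-75.880, -75.890, -75.895]   # lie on E(tau) = -75.9 + 0.5*tau

    # least-squares line fit without external libraries
    n = len(taus)
    mx = sum(taus) / n
    my = sum(energies) / n
    slope = (sum((t - mx) * (e - my) for t, e in zip(taus, energies))
             / sum((t - mx) ** 2 for t in taus))
    e0 = my - slope * mx
    print(round(e0, 3))  # extrapolated zero-time-step energy: -75.9
    ```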

  16. Classification of Error Related Brain Activity in an Auditory Identification Task with Conditions of Varying Complexity

    Science.gov (United States)

    Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.

    2017-11-01

    The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs) revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations of the processing of erroneous sensory information for each level of complexity we employ Support Vector Machines (SVM) classifiers with various learning methods and kernels using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers with mean accuracy of 93% and 91% respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
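
    A minimal sketch of sequential forward selection (not the authors' SVM pipeline): greedily add whichever feature most improves a simple nearest-centroid classifier, on synthetic data where only feature 0 is informative.

    ```python
    import random

    # Synthetic two-class data: feature 0 separates the classes, features 1-4
    # are pure noise. SFS should pick feature 0 first.
    random.seed(1)
    n = 200
    X = [[(1.5 if cls else -1.5) + random.gauss(0, 0.5)]
         + [random.gauss(0, 1) for _ in range(4)]
         for cls in ([0] * (n // 2) + [1] * (n // 2))]
    y = [0] * (n // 2) + [1] * (n // 2)

    def accuracy(features):
        """Training accuracy of a nearest-centroid classifier on the given features."""
        cols = lambda row: [row[j] for j in features]
        cent = {c: [sum(cols(x)[k] for x, t in zip(X, y) if t == c) / (n // 2)
                    for k in range(len(features))]
                for c in (0, 1)}
        def predict(x):
            dist = {c: sum((a - b) ** 2 for a, b in zip(cols(x), cent[c]))
                    for c in (0, 1)}
            return min(dist, key=dist.get)
        return sum(predict(x) == t for x, t in zip(X, y)) / n

    # greedy forward selection of two features
    selected, remaining = [], list(range(5))
    for _ in range(2):
        best = max(remaining, key=lambda j: accuracy(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    print(selected, round(accuracy(selected), 2))
    ```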

  17. On nonstationarity-related errors in modal combination rules of the response spectrum method

    Science.gov (United States)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for practicing engineers, modal combination rules play a central role in peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is short compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
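
    The CQC rule discussed here combines modal peaks through a modal correlation coefficient. A sketch using the standard equal-damping form of the coefficient (it equals 1 for coincident modes, so closely spaced modes push the CQC estimate above the SRSS value):

    ```python
    import math

    def rho(w_i, w_j, xi=0.05):
        # correlation coefficient between modes i and j for equal damping xi
        r = w_j / w_i
        num = 8 * xi**2 * (1 + r) * r**1.5
        den = (1 - r**2) ** 2 + 4 * xi**2 * r * (1 + r) ** 2
        return num / den

    def cqc(peaks, freqs, xi=0.05):
        # peak response ~ sqrt( sum_ij rho_ij * R_i * R_j )
        return math.sqrt(sum(rho(wi, wj, xi) * ri * rj
                             for ri, wi in zip(peaks, freqs)
                             for rj, wj in zip(peaks, freqs)))

    # two closely spaced modes: the cross-correlation term is large, so the
    # CQC estimate exceeds the SRSS (square-root-of-sum-of-squares) estimate
    peaks, freqs = [1.0, 0.8], [10.0, 10.5]   # modal peaks, frequencies (rad/s)
    srss = math.sqrt(sum(p ** 2 for p in peaks))
    print(round(cqc(peaks, freqs), 3), round(srss, 3))
    ```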

  18. User Performance Evaluation of Four Blood Glucose Monitoring Systems Applying ISO 15197:2013 Accuracy Criteria and Calculation of Insulin Dosing Errors.

    Science.gov (United States)

    Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia

    2018-04-01

    The international standard ISO 15197:2013 requires a user performance evaluation to assess whether intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek® Performa Connect (A), Contour® plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. The number and percentage of SMBG measurements within ±15 mg/dl (at glucose concentrations below 100 mg/dl) and within ±15% (at 100 mg/dl and above) of the comparison measurements were determined for the measurements performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Funding: Ascensia Diabetes Care Deutschland GmbH.
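
    The ISO 15197:2013 system-accuracy limit referenced above (within ±15 mg/dl of the comparison value below 100 mg/dl, within ±15% at 100 mg/dl and above, to be met by at least 95% of results) can be checked per measurement pair; the pairs below are made up.

    ```python
    # Per-pair check of the ISO 15197:2013 system-accuracy limit.
    def within_iso_limit(reference, measured):
        if reference < 100:                       # mg/dl
            return abs(measured - reference) <= 15
        return abs(measured - reference) <= 0.15 * reference

    # (reference, measured) pairs in mg/dl; values are made up for illustration
    pairs = [(80, 92), (80, 97), (150, 170), (150, 175), (250, 220)]
    met = [within_iso_limit(ref, meas) for ref, meas in pairs]
    print(met, sum(met) / len(pairs))  # the standard requires >= 0.95 overall
    ```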

  19. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low-radioactivity samples for background. (orig.)

  20. Validation of the calculation of the renal impulse response function. An analysis of errors and systematic biases

    International Nuclear Information System (INIS)

    Erbsman, F.; Ham, H.; Piepsz, A.; Struyven, J.

    1978-01-01

    The renal impulse response function (renal IRF) is the time-activity curve measured over one kidney after injection of a radiopharmaceutical into the renal artery. If the tracer is injected intravenously, it is possible to compute the renal IRF by deconvoluting the kidney curve by a blood curve. In previous work we demonstrated that the computed IRF is in good agreement with measurements made after injection into the renal artery. The goal of the present work is the analysis of the effect of sampling errors and the influence of extra-renal activity. The sampling error is only important for the first point of the plasma curve and yields an ill-conditioned inverse P⁻¹. The addition of 50 computed renal IRFs demonstrated that the first three points show a larger variability due to incomplete mixing of the tracer. These points should thus not be included in the smoothing process. Subtraction of non-renal activity does not appreciably modify the shape of the renal IRF. The mean transit time and the time to half value are almost independent of non-renal activity and seem to be the parameters of choice.
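
    The deconvolution step can be sketched as solving a lower-triangular Toeplitz system built from the blood (plasma) curve, which is also why an error in the first plasma point makes the inverse P⁻¹ ill-conditioned. The curves below are synthetic and noise-free; a real analysis would smooth or regularize first.

    ```python
    # If kidney(t) = (blood * IRF)(t) on a uniform time grid, the IRF can be
    # recovered by forward substitution on the triangular system P h = k,
    # where P is the lower-triangular Toeplitz matrix built from blood(t).
    blood = [1.0, 0.6, 0.3, 0.1, 0.05]
    irf_true = [0.9, 0.7, 0.4, 0.2, 0.1]
    n = len(blood)

    # forward convolution: kidney[t] = sum_s blood[s] * irf[t - s]
    kidney = [sum(blood[s] * irf_true[t - s] for s in range(t + 1)) for t in range(n)]

    # deconvolution by forward substitution; blood[0] divides every step,
    # so errors in that first point propagate through the whole IRF
    irf = []
    for t in range(n):
        acc = sum(blood[t - s] * irf[s] for s in range(t))
        irf.append((kidney[t] - acc) / blood[0])
    print([round(h, 6) for h in irf])  # recovers irf_true
    ```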

  1. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Full Text Available. Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for children studies. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created by the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. Thereby, we proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.

  2. An investigation of Saudi Arabian MR radiographers' knowledge and confidence in relation to MR image-quality-related errors

    International Nuclear Information System (INIS)

    Alsharif, W.; Davis, M.; McGee, A.; Rainford, L.

    2017-01-01

    Objective: To investigate MR radiographers' current knowledge base and confidence level in relation to quality-related errors within MR images. Method: Thirty-five MR radiographers within 16 MRI departments in the Kingdom of Saudi Arabia (KSA) independently reviewed a prepared set of 25 MR images, naming each error, specifying the error-correction strategy, and scoring how confident they were in recognising the error and suggesting a correction strategy on a scale of 1–100. The datasets were obtained from MRI departments in the KSA to represent the range of images depicting excellent, acceptable and poor image quality. Results: The findings demonstrated a low level of radiographer knowledge in identifying the types of quality errors and in suggesting appropriate strategies to rectify them. Only seven of the radiographers (20%) could correctly name the quality errors in 70% of the dataset, and none of the radiographers correctly specified the error-correction strategy in more than 68% of the MR datasets. The confidence level of participants in their ability to state the type of image quality error differed significantly (p < 0.001) between radiographers working in different hospital types. Conclusion: The findings of this study suggest a need to establish a national association for MR radiographers to monitor training and the development of postgraduate MRI education in Saudi Arabia, to improve the current status of MR radiographers' knowledge and to direct high-quality service delivery. - Highlights: • MR radiographers recognised the existence of image-quality-related errors. • Few MR radiographers were able to correctly identify which image quality errors were shown. • None of the MR radiographers were able to correctly specify the error-correction strategy for the image quality errors. • A low level of knowledge was demonstrated in identifying and rectifying image quality errors.

  3. The relative impact of sizing errors on steam generator tube failure probability

    International Nuclear Information System (INIS)

    Cizelj, L.; Dvorsek, T.

    1998-01-01

    The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting the steam generator tubes made of Inconel 600. This caused development and licensing of degradation specific maintenance approaches, which addressed two main failure modes of the degraded piping: tube rupture; and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out better performance of the degradation specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) less tubes plugged. A sensitivity analysis was also performed pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent to the regression models used to correlate the defect size and tube burst pressure. The uncertainties, which can be estimated from the in-service inspections, are further analysed in this paper. The defect growth was found to have significant and to some extent unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on the past inspection records they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used is obtained from a series of inspection results from Krsko NPP with 2 Westinghouse D-4 steam generators. The results obtained are considered useful in safety assessment and maintenance of affected steam generators. (author)

  4. Low pO2 Contributes to Potential Error in Oxygen Saturation Calculations Using a Point-of-Care Assay.

    Science.gov (United States)

    Gunsolus, Ian L; Love, Sara A; Kohl, Louis P; Schmidt, Martin; Apple, Fred S

    2017-12-20

    The present study addressed the accuracy of calculated oxygen saturation (sO2) using point-of-care (POC) testing compared with measured values on a blood gas analyzer. In total, 3,323 sO2 values were measured in 1,180 patients using a CO-oximeter (ABL 800 Flex; Radiometer, Copenhagen, Denmark). Measured parameters were then used to calculate an expected sO2 for the POC method (Abbott i-STAT; Abbott POC, Princeton, NJ). Cases in which calculated sO2 differed from measured sO2 by 10% or more were analyzed. Of the 3,323 comparisons performed, 260 (8%) showed discrepancies (±10% or more) between measured and calculated sO2 values. Ninety-four percent of the discrepant measurements (245 of 260) occurred when pO2 was less than 50 mm Hg. The pH and bicarbonate distributions shifted to lower values in discrepant vs nondiscrepant cases. Our results suggest that the likelihood of discrepant sO2 is 27% among patients with pO2 less than 50 mm Hg. Direct measurement of sO2 by CO-oximetry is strongly suggested in this clinical scenario.
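
    Why low pO2 amplifies errors in calculated sO2 can be seen from the steep slope of the oxyhemoglobin dissociation curve in that region. A sketch using the Severinghaus (1979) approximation, which is an assumption here: the POC device's actual algorithm, including its pH and other corrections, is not given in the abstract.

    ```python
    # Severinghaus (1979) empirical fit to the O2-hemoglobin dissociation curve.
    def so2_severinghaus(po2):
        return 1.0 / (23400.0 / (po2 ** 3 + 150.0 * po2) + 1.0)

    # On the steep part of the curve (pO2 < 50 mm Hg), a small pO2 error moves
    # the calculated sO2 by far more than it does near pO2 = 100 mm Hg.
    for po2 in (30, 40, 100):
        slope = (so2_severinghaus(po2 + 1) - so2_severinghaus(po2)) * 100  # %/mm Hg
        print(po2, round(so2_severinghaus(po2) * 100, 1), round(slope, 2))
    ```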

  5. Invariance and variability in interaction error-related potentials and their consequences for classification

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Kapeller, Christoph; Hintermüller, Christoph; Guger, Christoph; Peer, Angelika

    2017-12-01

    Objective. This paper discusses the invariance and variability in interaction error-related potentials (ErrPs), where a special focus is laid upon the factors of (1) the human mental processing required to assess interface actions (2) time (3) subjects. Approach. Three different experiments were designed as to vary primarily with respect to the mental processes that are necessary to assess whether an interface error has occurred or not. The three experiments were carried out with 11 subjects in a repeated-measures experimental design. To study the effect of time, a subset of the recruited subjects additionally performed the same experiments on different days. Main results. The ErrP variability across the different experiments for the same subjects was found largely attributable to the different mental processing required to assess interface actions. Nonetheless, we found that interaction ErrPs are empirically invariant over time (for the same subject and same interface) and to a lesser extent across subjects (for the same interface). Significance. The obtained results may be used to explain across-study variability of ErrPs, as well as to define guidelines for approaches to the ErrP classifier transferability problem.

  6. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for quantification of risk due to seismic related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and industry common practice. Also, the actual reduction in the safety margins caused by the error will be called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated as a specific instance, called an originating error. As originating errors may occur in actions to be applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies, requesting their disposition. However, the quality assurance program is not perfect and some operating plant deficiencies may persist, causing different levels of impact to the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.

  7. Differences among Job Positions Related to Communication Errors at Construction Sites

    Science.gov (United States)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites into a faulty intention and message pattern, an inadequate channel pattern, and a faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently for each pattern. The contributing factors common to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns, but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  8. Linear constraint relations in biochemical reaction systems: I. Classification of the calculability and the balanceability of conversion rates.

    Science.gov (United States)

    van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C

    1994-01-05

    Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations such as the balance equations for charge, enthalpy, and/or chemical elements can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others can or cannot be calculated from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant; the accuracy and credibility of the measured rates can then be improved by balancing and gross error diagnosis, respectively. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is completely based on matrix algebra (principally different from the graph-theoretical approach), and it is easily implemented into a computer program.
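
    The balancing step described above can be sketched for the simplest case, in which all rates are measured and one linear conservation relation E r = 0 applies: the balanced rates are the least-squares projection of the raw rates onto the null space of E. The example numbers are made up.

    ```python
    # One conservation relation over three conversion rates, e.g. a carbon
    # balance: rate 1 is consumed, rates 2 and 3 are formed.
    E = [[1.0, -1.0, -1.0]]
    r_meas = [1.00, 0.55, 0.52]   # raw measured rates; they do not balance

    # projection r_hat = r - E^T (E E^T)^{-1} E r  (E has a single row here)
    row = E[0]
    scale = sum(e * e for e in row)                       # E E^T (a scalar)
    residual = sum(e * r for e, r in zip(row, r_meas))    # balance residual E r
    r_hat = [r - e * residual / scale for e, r in zip(row, r_meas)]
    print([round(v, 4) for v in r_hat])  # balanced rates satisfy E r_hat = 0
    ```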

  9. Relation of anthropometric measurements to ocular biometric changes and refractive error in children with thalassemia.

    Science.gov (United States)

    Elkitkat, Rania S; El-Shazly, Amany A; Ebeid, Weam M; Deghedy, Marwa R

    2018-03-01

    To evaluate and correlate anthropometric, biometric, and refractive error changes in thalassemia major (TM). One hundred children with TM and another hundred healthy controls were recruited. Height, weight, body mass index (BMI), and occipitofrontal circumference (OFC) were the anthropometric parameters recorded. Full ophthalmologic examination was performed, including best-corrected visual acuity, cycloplegic refraction, slit-lamp examination, Goldmann applanation tonometry, indirect ophthalmoscopy, keratometry (K readings), and ocular biometry. Compared to controls, children with TM were shorter and lighter, with a smaller BMI. Regarding biometric data, patients with thalassemia had steeper mean K readings (p = 0.03), shorter axial length (AXL) (p = 0.005), and shorter vitreous chamber depth. The eyes appear to undergo biometric changes (steeper corneas and thicker lenses) to reach emmetropization, with an exaggerated response and subsequent myopic shift. However, growth retardation is not directly related to ocular growth changes, myopic shift, or variations in biometric parameters.

  10. Using brain potentials to understand prism adaptation: the error-related negativity and the P300

    Directory of Open Access Journals (Sweden)

    Stephane Joseph Maclean

    2015-06-01

    Full Text Available. Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN) – a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuo-motor responding is shifted to the opposite direction of that initially induced by the prisms. This visuo-motor aftereffect has been used to study visuo-motor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an early error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN.

  11. Using brain potentials to understand prism adaptation: the error-related negativity and the P300.

    Science.gov (United States)

    MacLean, Stephane J; Hassall, Cameron D; Ishigami, Yoko; Krigolson, Olav E; Eskes, Gail A

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN)-a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted to the opposite direction of that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN.

  12. Practical Insights from Initial Studies Related to Human Error Analysis Project (HEAP)

    International Nuclear Information System (INIS)

    Follesoe, Knut; Kaarstad, Magnhild; Droeivoldsmo, Asgeir; Hollnagel, Erik; Kirwan, Barry

    1996-01-01

    This report presents practical insights made from an analysis of the three initial studies in the Human Error Analysis Project (HEAP), and the first study in the US NRC Staffing Project. These practical insights relate to our understanding of diagnosis in Nuclear Power Plant (NPP) emergency scenarios and, in particular, the factors that influence whether a diagnosis will succeed or fail. The insights reported here focus on three inter-related areas: (1) the diagnostic strategies and styles that have been observed in single operator and team-based studies; (2) the qualitative aspects of the key operator support systems, namely VDU interfaces, alarms, training and procedures, that have affected the outcome of diagnosis; and (3) the overall success rates of diagnosis and the error types that have been observed in the various studies. With respect to diagnosis, certain patterns have emerged from the various studies, depending on whether operators were alone or in teams, and on their familiarity with the process. Some aspects of the interface and alarm systems were found to contribute to diagnostic failures while others supported performance and recovery. Similar results were found for training and experience. Furthermore, the availability of procedures did not preclude the need for some diagnosis. With respect to HRA and PSA, it was possible to record the failure types seen in the studies, and in some cases to give crude estimates of the failure likelihood for certain scenarios. Although these insights are interim in nature, they do show the type of information that can be derived from these studies. More importantly, they clarify aspects of our understanding of diagnosis in NPP emergencies, including implications for risk assessment, operator support systems development, and for research into diagnosis in a broader range of fields than the nuclear power industry. (author)

  13. Pregnancy-related anxiety and depressive symptoms are associated with visuospatial working memory errors during pregnancy.

    Science.gov (United States)

    Kataja, E-L; Karlsson, L; Huizink, A C; Tolvanen, M; Parsons, C; Nolvi, S; Karlsson, H

    2017-08-15

    Cognitive deficits, especially in memory and concentration, are often reported during pregnancy. Similar cognitive dysfunctions can also occur in depression and anxiety. To date, few studies have investigated the associations between cognitive deficits and psychiatric symptoms during pregnancy. This field is of interest because maternal cognitive functioning, and particularly its higher-order aspects, is related to maternal well-being and caregiving behavior, as well as later child development. Pregnant women (N = 230), reporting low (n = 87), moderate (n = 97), or high (n = 46) levels of depressive, general anxiety and/or pregnancy-related anxiety symptoms (assessed repeatedly with the EPDS, SCL-90/anxiety subscale, and PRAQ-R2, respectively) were tested in mid-pregnancy for their cognitive functions. A computerized neuropsychological test battery was used. Pregnant women with high or moderate levels of psychiatric symptoms made significantly more errors in a visuospatial working memory/executive functioning task than mothers with a low symptom level. Depressive symptoms throughout pregnancy and concurrent pregnancy-related anxiety symptoms were significant predictors of performance in the task. General anxiety symptoms were not related to visuospatial working memory. Cognitive functions were evaluated at only one time-point during pregnancy, precluding causal conclusions. Maternal depressive symptoms and pregnancy-related anxiety symptoms were both associated with decrements in visuospatial working memory/executive functioning. Depressive symptoms seem to show a more stable relationship with cognitive deficits, while pregnancy-related anxiety was associated only concurrently. Future studies could investigate how stable these cognitive differences are, and whether they affect maternal ability to deal with the demands of pregnancy and later parenting. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity

    Directory of Open Access Journals (Sweden)

    Rebecca J. Brooker

    2014-07-01

    Full Text Available Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems.

  15. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity.

    Science.gov (United States)

    Brooker, Rebecca J; Buss, Kristin A

    2014-07-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    Science.gov (United States)

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes performed in the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  17. How to deal with multiple binding poses in alchemical relative protein-ligand binding free energy calculations.

    Science.gov (United States)

    Kaus, Joseph W; Harder, Edward; Lin, Teng; Abel, Robert; McCammon, J Andrew; Wang, Lingle

    2015-06-09

    Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the
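The pose-combination correction described in this abstract reduces, numerically, to a Boltzmann-weighted sum over the per-pose free energies. A minimal sketch, assuming free energies in kcal/mol and kT ≈ 0.596 kcal/mol at 300 K; the function names are illustrative and not taken from the paper's own software:

```python
import math

def combine_pose_free_energies(dG_poses, kT=0.596):
    """Combine per-pose relative binding free energies (kcal/mol) into a
    single corrected value by Boltzmann-weighting the enumerated poses:
    dG = -kT * ln(sum_i exp(-dG_i / kT))."""
    return -kT * math.log(sum(math.exp(-g / kT) for g in dG_poses))

def dominant_pose(dG_poses):
    """Index of the pose with the lowest (most favorable) free energy,
    i.e., the predicted dominant binding mode."""
    return min(range(len(dG_poses)), key=lambda i: dG_poses[i])
```

The combined value is always at least as favorable as the best single pose; poses much higher in free energy contribute almost nothing to the sum.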

  18. How To Deal with Multiple Binding Poses in Alchemical Relative Protein–Ligand Binding Free Energy Calculations

    Science.gov (United States)

    2016-01-01

    Recent advances in improved force fields and sampling methods have made it possible for the accurate calculation of protein–ligand binding free energies. Alchemical free energy perturbation (FEP) using an explicit solvent model is one of the most rigorous methods to calculate relative binding free energies. However, for cases where there are high energy barriers separating the relevant conformations that are important for ligand binding, the calculated free energy may depend on the initial conformation used in the simulation due to the lack of complete sampling of all the important regions in phase space. This is particularly true for ligands with multiple possible binding modes separated by high energy barriers, making it difficult to sample all relevant binding modes even with modern enhanced sampling methods. In this paper, we apply a previously developed method that provides a corrected binding free energy for ligands with multiple binding modes by combining the free energy results from multiple alchemical FEP calculations starting from all enumerated poses, and the results are compared with Glide docking and MM-GBSA calculations. From these calculations, the dominant ligand binding mode can also be predicted. We apply this method to a series of ligands that bind to c-Jun N-terminal kinase-1 (JNK1) and obtain improved free energy results. The dominant ligand binding modes predicted by this method agree with the available crystallography, while both Glide docking and MM-GBSA calculations incorrectly predict the binding modes for some ligands. The method also helps separate the force field error from the ligand sampling error, such that deviations in the predicted binding free energy from the experimental values likely indicate possible inaccuracies in the force field. An error in the force field for a subset of the ligands studied was identified using this method, and improved free energy results were obtained by correcting the partial charges assigned to the

  19. Hair 2010 Documentation: Calculating risk indicators related to agricultural use of pesticides within the European Union

    NARCIS (Netherlands)

    Kruijne, R.; Deneer, J.W.; Lahr, J.; Vlaming, J.

    2011-01-01

    The HAIR instrument calculates risk indicators related to the agricultural use of pesticides in EU Member States. HAIR combines databases and models for calculating potential environmental effects, expressed by the exposure toxicity ratio.
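The exposure toxicity ratio mentioned in the abstract is, at its core, a quotient of a predicted exposure concentration and a toxicity endpoint. A hypothetical sketch (the function name and the risk-flag convention are assumptions, not HAIR's actual API):

```python
def exposure_toxicity_ratio(exposure, toxicity_endpoint):
    """Risk indicator as the ratio of predicted environmental exposure
    to a toxicity endpoint (same units); values approaching or
    exceeding 1 flag potential risk."""
    if toxicity_endpoint <= 0:
        raise ValueError("toxicity endpoint must be positive")
    return exposure / toxicity_endpoint
```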

  20. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides that, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  1. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  2. Use of Balance Calibration Certificate to Calculate the Errors of Indication and Measurement Uncertainty in Mass Determinations Performed in Medical Laboratories

    Directory of Open Access Journals (Sweden)

    Adriana VÂLCU

    2011-09-01

    Full Text Available Based on the reference document, the article proposes a way to calculate the errors of indication and associated measurement uncertainties, drawing on the general information provided by the calibration certificate of a balance (non-automatic weighing instrument, NAWI) used in the medical field. The paper may also be considered a useful guideline for: operators working in laboratories accredited in medical (or other) fields where weighing operations are part of their testing activities; test houses, laboratories, or manufacturers using calibrated non-automatic weighing instruments for measurements relevant to the quality of production subject to QM requirements (e.g. the ISO 9000 series, ISO 10012, ISO/IEC 17025); bodies accrediting laboratories; and laboratories accredited for the calibration of NAWI. The article refers only to electronic weighing instruments with a maximum capacity up to 30 kg. Starting from the results provided by a calibration certificate, an example calculation is presented.
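A calibration certificate typically reports errors of indication at a few test loads plus per-contribution standard uncertainties; the two calculations the article describes can be sketched as interpolation between certificate points and a root-sum-of-squares combination. This is a generic illustration of that approach, not the article's worked example; the function names, the choice of contributions, and the coverage factor k = 2 are assumptions:

```python
import math

def error_of_indication(reading, cal_points):
    """Linearly interpolate the error of indication at `reading` from
    (load, error) pairs reported on the calibration certificate."""
    cal_points = sorted(cal_points)
    for (m0, e0), (m1, e1) in zip(cal_points, cal_points[1:]):
        if m0 <= reading <= m1:
            frac = (reading - m0) / (m1 - m0)
            return e0 + frac * (e1 - e0)
    raise ValueError("reading outside calibrated range")

def expanded_uncertainty(u_cal, u_resolution, u_repeatability, k=2):
    """Expanded uncertainty (coverage factor k) as the root sum of
    squares of the individual standard-uncertainty contributions."""
    return k * math.sqrt(u_cal**2 + u_resolution**2 + u_repeatability**2)
```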

  3. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    Science.gov (United States)

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

    Reinforcement learning (RL) enables robots to learn their optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. Initially we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
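The core idea of using a detected ErrP as an implicit reward can be sketched as a tabular value update over gesture-action pairs. A minimal illustration only: the reward values, learning rate, and function names are assumptions, not the paper's actual algorithm or parameters:

```python
def errp_reward(errp_detected):
    """Map implicit EEG feedback to a scalar reward: a detected ErrP
    signals that the robot's action was perceived as wrong."""
    return -1.0 if errp_detected else 1.0

def q_update(q, gesture, action, reward, alpha=0.3):
    """One tabular update of the value of a gesture->action mapping."""
    old = q.get((gesture, action), 0.0)
    q[(gesture, action)] = old + alpha * (reward - old)
    return q
```

Over repeated trials, correct mappings accumulate positive value while mappings that elicit ErrPs are driven negative, so the robot can pick the highest-valued action per gesture.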

  4. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  5. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    Science.gov (United States)

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.

  6. A novel method for calculating the dynamic capillary force and correcting the pressure error in micro-tube experiment.

    Science.gov (United States)

    Wang, Shuoliang; Liu, Pengcheng; Zhao, Hui; Zhang, Yuan

    2017-11-29

    The micro-tube experiment has been implemented to understand the mechanisms governing microscopic fluid percolation and is extensively used in both micro-electromechanical engineering and petroleum engineering. The pressure difference measured across the entire setup, however, is not equal to the actual pressure difference across the micro-tube. Taking into account the additional pressure losses between the outlet of the micro-tube and the outlet of the entire setup, we propose a new method for predicting the dynamic capillary pressure using the Level-set method. We first demonstrate that it is a reliable method for describing microscopic flow by comparing the micro-model flow-test results against the results predicted using the Level-set method. In the proposed approach, the Level-set method is applied to predict the pressure distribution along the micro-tube when the fluids flow through it at a given flow rate; the micro-tube used in the calculation has the same size as the one used in the experiment. From the simulation results, the pressure difference across a curved interface (i.e., the dynamic capillary pressure) can be obtained directly. We also show that the dynamic capillary force should be properly evaluated in the micro-tube experiment in order to obtain the actual pressure difference across the micro-tube.
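For orientation, the static reference point for the capillary pressure in a circular tube is the Young-Laplace relation, and the pressure correction the abstract describes amounts to subtracting the extra losses from the measured drop. A sketch of both, assuming SI units; it is not the paper's Level-set calculation, which resolves the dynamic interface numerically:

```python
import math

def static_capillary_pressure(sigma, theta_deg, radius):
    """Young-Laplace capillary pressure across a meniscus in a circular
    tube: Pc = 2*sigma*cos(theta)/r, with sigma in N/m, r in m -> Pa."""
    return 2.0 * sigma * math.cos(math.radians(theta_deg)) / radius

def corrected_pressure_drop(dp_measured, dp_outlet_losses):
    """Actual pressure difference across the micro-tube after removing
    the additional losses between the tube outlet and the setup outlet."""
    return dp_measured - dp_outlet_losses
```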

  7. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  8. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Full Text Available Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
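The pre-test check the abstract describes, validating a scanned nine-digit account number against the ADT registration feed before a result is released, can be sketched as a simple gate. A hypothetical illustration; the function name and registry representation are assumptions, not the vendor's interface-manager API:

```python
def validate_patient_id(account_id, adt_registry):
    """Gate a glucose POCT result: accept only a nine-digit numeric
    account ID that matches a patient registered in the ADT feed;
    testing is blocked until a mismatch is resolved."""
    return (
        len(account_id) == 9
        and account_id.isdigit()
        and account_id in adt_registry
    )
```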

  9. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
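The sphere-intersection step can be sketched as Newton-Raphson on the four pseudorange residuals f_i = ||p - s_i|| + b - r_i, solving jointly for position and a clock-bias distance b. A self-contained illustration under simplified assumptions (exact ranges, arbitrary units, toy satellite geometry), not a reconstruction of the paper's implementation:

```python
import numpy as np

def newton_position(sats, ranges, x0=None, iters=50):
    """Solve for receiver position (x, y, z) and clock-bias distance b
    from four satellite positions and measured pseudoranges via
    Newton-Raphson on f_i = ||p - s_i|| + b - r_i = 0."""
    sats = np.asarray(sats, dtype=float)     # (4, 3) satellite positions
    r = np.asarray(ranges, dtype=float)      # (4,)  measured pseudoranges
    est = np.zeros(4) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        p, b = est[:3], est[3]
        diffs = p - sats                      # (4, 3)
        dists = np.linalg.norm(diffs, axis=1) # (4,) geometric ranges
        f = dists + b - r                     # residuals
        # Jacobian: unit vectors toward the receiver, plus d f/d b = 1
        J = np.hstack([diffs / dists[:, None], np.ones((4, 1))])
        est = est - np.linalg.solve(J, f)
    return est
```

With noisy pseudoranges from more than four satellites the same iteration is usually run as a least-squares update, and the residual spread feeds the absolute/relative error statistics the abstract mentions.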

  10. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  11. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    Science.gov (United States)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
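The proposed correction, that only 'additional biomass' offsets combustion emissions, can be expressed as a small accounting rule. A hypothetical sketch of that bookkeeping in arbitrary CO2 units; the clamping of the offset to the combustion total is my own simplifying convention, not a rule stated in the article:

```python
def bioenergy_net_emissions(combustion_co2, additional_biomass_co2):
    """Net accounted emissions from burning biomass: only CO2 embodied
    in additional plant growth (or biomass that would decompose rapidly
    anyway) counts as an offset; the remainder is treated like fossil
    emissions rather than assumed carbon-neutral."""
    offset = min(additional_biomass_co2, combustion_co2)
    return combustion_co2 - offset
```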

  12. Data on simulated interpersonal touch, individual differences and the error-related negativity

    Directory of Open Access Journals (Sweden)

    Mandy Tjew-A-Sin

    2016-06-01

    Full Text Available The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016 [1]). The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg.

  13. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Full Text Available Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in the activity of anterior-cingulate-cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP component associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  14. Diagnostic errors related to acute abdominal pain in the emergency department.

    Science.gov (United States)

    Medford-Davis, Laura; Park, Elizabeth; Shlamovitz, Gil; Suliburk, James; Meyer, Ashley N D; Singh, Hardeep

    2016-04-01

    Diagnostic errors in the emergency department (ED) are harmful and costly. We reviewed a selected high-risk cohort of patients presenting to the ED with abdominal pain to evaluate for possible diagnostic errors and associated process breakdowns. We conducted a retrospective chart review of ED patients >18 years at an urban academic hospital. A computerised 'trigger' algorithm identified patients possibly at high risk for diagnostic errors to facilitate selective record reviews. The trigger determined patients to be at high risk because they: (1) presented to the ED with abdominal pain, and were discharged home and (2) had a return ED visit within 10 days that led to a hospitalisation. Diagnostic errors were defined as missed opportunities to make a correct or timely diagnosis based on the evidence available during the first ED visit, regardless of patient harm, and included errors that involved both ED and non-ED providers. Errors were determined by two independent record reviewers followed by team consensus in cases of disagreement. Diagnostic errors occurred in 35 of 100 high-risk cases. Over two-thirds had breakdowns involving the patient-provider encounter (most commonly history-taking or ordering additional tests) and/or follow-up and tracking of diagnostic information (most commonly follow-up of abnormal test results). The most frequently missed diagnoses were gallbladder pathology (n=10) and urinary infections (n=5). Diagnostic process breakdowns in ED patients with abdominal pain most commonly involved history-taking, ordering insufficient tests in the patient-provider encounter and problems with follow-up of abnormal test results.

  15. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    International Nuclear Information System (INIS)

    Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.

    2015-01-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsal-projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual labeling studies, in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  16. Motoneuron axon pathfinding errors in zebrafish: Differential effects related to concentration and timing of nicotine exposure

    Energy Technology Data Exchange (ETDEWEB)

    Menelaou, Evdokia; Paul, Latoya T. [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Perera, Surangi N. [Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States); Svoboda, Kurt R., E-mail: svobodak@uwm.edu [Department of Biological Sciences, Louisiana State University, Baton Rouge, LA 70803 (United States); Joseph J. Zilber School of Public Health, University of Wisconsin — Milwaukee, Milwaukee, WI 53205 (United States)

    2015-04-01

    Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMNs). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 μM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independently of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to specifically isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsal-projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMNs), we performed dual labeling studies, in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the level of nicotine and the developmental exposure window. - Highlights: • Embryonic nicotine exposure can specifically affect secondary motoneuron axons in a dose-dependent manner.

  17. Prediction beyond the borders: ERP indices of boundary extension-related error.

    Science.gov (United States)

    Czigler, István; Intraub, Helene; Stefanics, Gábor

    2013-01-01

    Boundary extension (BE) is a rapidly occurring memory error in which participants incorrectly remember having seen beyond the boundaries of a view. However, behavioral data has provided no insight into how quickly after the onset of a test picture the effect is detected. To determine the time course of BE from neural responses we conducted a BE experiment while recording EEG. We exploited a diagnostic response asymmetry to mismatched views (a closer and wider view of the same scene) in which the same pair of views is rated as more similar when the closer item is shown first than vice versa. On each trial, a closer or wider view was presented for 250 ms followed by a 250-ms mask and either the identical view or a mismatched view. Boundary ratings replicated the typical asymmetry. We found a similar asymmetry in ERP responses in the 265-285 ms interval where the second member of the close-then-wide pairs evoked less negative responses at left parieto-temporal sites compared to the wide-then-close condition. We also found diagnostic ERP effects in the 500-560 ms range, where ERPs to wide-then-close pairs were more positive at centro-parietal sites than in the other three conditions, which is thought to be related to participants' confidence in their perceptual decision. The ERP effect in the 265-285 ms range suggests the falsely remembered region beyond the view-boundaries of S1 is rapidly available and impacts assessment of the test picture within the first 265 ms of viewing, suggesting that extrapolated scene structure may be computed rapidly enough to play a role in the integration of successive views during visual scanning.

  18. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    Science.gov (United States)

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measured lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, the variability of its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on an angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude that was used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy, significantly improving foot inclination angle measurement while slightly improving shank and thigh inclination angles. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than the results of other studies, in which markers of a camera-based motion measurement system were fixed on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
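
    The core idea, adapting the correction gain according to an angle error derived from the accelerometer, can be sketched in a simplified one-dimensional form. The gain law, constants, and function names below are illustrative assumptions, not the paper's actual algorithm:

```python
def estimate_angle(gyro_rates, accel_angles, dt=0.01, k_base=0.02):
    """Toy variable-gain fusion of a gyro rate (deg/s) and an
    accelerometer-derived inclination angle (deg).

    The correction gain shrinks when the accelerometer-derived angle
    disagrees strongly with the current estimate (e.g. during large
    dynamic accelerations), mimicking the idea of varying the Kalman
    gain based on an angle error computed from acceleration signals.
    """
    angle = accel_angles[0]              # initialize from the accelerometer
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle += rate * dt               # prediction: gyro integration
        err = acc_angle - angle          # angle error from acceleration
        gain = k_base / (1.0 + abs(err)) # variable gain: trust accel less when error is large
        angle += gain * err              # correction step
        estimates.append(angle)
    return estimates
```

    With a stationary sensor (zero gyro rate, constant accelerometer angle) the estimate simply holds the accelerometer value; with a drifting gyro, the correction term pulls the estimate back toward the accelerometer-derived angle.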

  19. Assessment of the pseudo-tracking approach for the calculation of material acceleration and pressure fields from time-resolved PIV: part I. Error propagation

    Science.gov (United States)

    van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.

    2018-04-01

    Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
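
    Recommendation (3), estimating the material acceleration as the slope of a first-order polynomial least-squares fitted to the velocity samples along a track, can be illustrated with a minimal sketch (the track data below are hypothetical and noise-free; this is not the authors' code):

```python
def material_acceleration(times, velocities):
    """Least-squares slope of a first-order polynomial fitted to
    velocity samples along a (pseudo-)particle track; the slope is
    the material acceleration estimate."""
    n = len(times)
    st = sum(times)
    sv = sum(velocities)
    stt = sum(t * t for t in times)
    stv = sum(t * v for t, v in zip(times, velocities))
    # Standard closed-form solution for the slope of a linear fit
    return (n * stv - st * sv) / (n * stt - st * st)

# Hypothetical track sampled every millisecond, true acceleration 2 m/s^2
t = [i * 0.001 for i in range(11)]    # s
u = [5.0 + 2.0 * ti for ti in t]      # m/s along the track
```

    For noisy PIV data, the fit averages out uncorrelated velocity errors over the track length, which is why the track-length trade-off in recommendation (4) matters.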

  20. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: its assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  1. Did I Do That? Expectancy Effects of Brain Stimulation on Error-related Negativity and Sense of Agency.

    Science.gov (United States)

    Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel

    2018-06-19

    This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to oneself or the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.

  2. Individual Differences in Working Memory Capacity Predict Action Monitoring and the Error-Related Negativity

    Science.gov (United States)

    Miller, A. Eve; Watson, Jason M.; Strayer, David L.

    2012-01-01

    Neuroscience suggests that the anterior cingulate cortex (ACC) is responsible for conflict monitoring and the detection of errors in cognitive tasks, thereby contributing to the implementation of attentional control. Though individual differences in frontally mediated goal maintenance have clearly been shown to influence outward behavior in…

  3. The content of lexical stimuli and self-reported physiological state modulate error-related negativity amplitude.

    Science.gov (United States)

    Benau, Erik M; Moelter, Stephen T

    2016-09-01

    The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed a SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy and faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not affected by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be especially useful for research into response monitoring.

  4. Task engagement and the relationships between the error-related negativity, agreeableness, behavioral shame proneness and cortisol

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.; Wester, Anne E.; Lorist, Monicque M.; Meijman, Theo F.

    Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both to high cortisol levels

  5. SCIAMACHY WFM-DOAS XCO2: reduction of scattering related errors

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2012-10-01

    Full Text Available Global observations of column-averaged dry air mole fractions of carbon dioxide (CO2), denoted by XCO2, retrieved from SCIAMACHY on-board ENVISAT can provide important and missing global information on the distribution and magnitude of regional CO2 surface fluxes. This application has challenging precision and accuracy requirements. In a previous publication (Heymann et al., 2012), it has been shown by analysing seven years of SCIAMACHY WFM-DOAS XCO2 (WFMDv2.1) that unaccounted thin cirrus clouds can result in significant errors. In order to enhance the quality of the SCIAMACHY XCO2 data product, we have developed a new version of the retrieval algorithm (WFMDv2.2), which is described in this manuscript. It is based on an improved cloud filtering and correction method using the 1.4 μm strong water vapour absorption and 0.76 μm O2-A bands. The new algorithm has been used to generate a SCIAMACHY XCO2 data set covering the years 2003–2009. The new XCO2 data set has been validated using ground-based observations from the Total Carbon Column Observing Network (TCCON). The validation shows a significant improvement of the new product (v2.2) in comparison to the previous product (v2.1). For example, the standard deviation of the difference to TCCON at Darwin, Australia, has been reduced from 4 ppm to 2 ppm. The monthly regional-scale scatter of the data (defined as the mean intra-monthly standard deviation of all quality-filtered XCO2 retrievals within a radius of 350 km around various locations) has also been reduced, typically by a factor of about 1.5. Overall, the validation of the new WFMDv2.2 XCO2 data product can be summarised by a single-measurement precision of 3.8 ppm, an estimated regional-scale (radius of 500 km) precision of monthly averages of 1.6 ppm and an estimated regional-scale relative accuracy of 0.8 ppm. In addition to the comparison with the limited number of TCCON sites, we also present a comparison with NOAA's global CO2 modelling

  6. The value of pulmonary vessel CT measuring and calculating of relative ratio

    International Nuclear Information System (INIS)

    Ji Jiansong; Xu Xiaoxiong; Lv Suzhen; Zhao Zhongwei; Wang Zufei; Xu Min; Gong Jianping

    2004-01-01

    Objective: To evaluate the value of CT measurement and calculation of the vessels of isolated pig lungs, by comparison with measurement and calculation of resin casts of the same vessels. Methods: Four isolated pig lungs, whose vessels were filled with ABS liquid or self-solidifying resin liquid, were CT scanned and measured, and the relative ratios of superior/inferior order and of vein/artery of the same order were calculated. After the resin casts were made, the same measurements and calculations were performed on them. Results: The second-order vein/artery ratio calculated by the two methods showed a statistically significant difference (P < 0.05). Conclusion: CT has high value in the calculation of the relative ratio of superior/inferior order.

  7. An error-related negativity potential investigation of response monitoring function in individuals with Internet addiction disorder

    Directory of Open Access Journals (Sweden)

    Zhenhe eZhou

    2013-09-01

    Full Text Available Internet addiction disorder (IAD) is an impulse disorder or at least related to impulse control disorder. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse control disorders. The error-related negativity (ERN) reflects an individual's ability to monitor behavior. Since IAD belongs to a compulsive-impulsive spectrum disorder, it should, in theory, present the response monitoring functional deficit characteristics of disorders such as substance dependence, ADHD or alcohol abuse when tested with an Eriksen flanker task. Up to now, no studies on response monitoring functional deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays response monitoring functional deficit characteristics in a modified Eriksen flanker task. Twenty-three subjects were recruited as the IAD group, and 23 healthy persons matched for age, gender and education were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials (ERPs) were recorded. The IAD group made more total errors than controls (P < 0.01), and reaction times for total error responses in the IAD group were shorter than in controls (P < 0.01). The mean ERN amplitudes for total error responses at frontal and central electrode sites were reduced in the IAD group compared with the control group (all P < 0.01). These results reveal that IAD displays response monitoring functional deficit characteristics and shares the ERN characteristics of compulsive-impulsive spectrum disorders.

  8. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    Directory of Open Access Journals (Sweden)

    Greg A. Breed

    2015-08-01

    Full Text Available Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than in the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than the survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than with either survey-grade units or more traditional ruler/grid approaches.
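
    The reason differencing consecutive fixes yields step-length errors far below the absolute error radius is that much of the GPS error is a slowly varying bias shared by fixes taken close together in time, which cancels on subtraction. A minimal simulation of this property (the bias and jitter magnitudes below are illustrative assumptions, not the paper's measurements):

```python
import random

random.seed(1)

bias = 5.0        # m, absolute offset shared by fixes taken seconds apart
jitter = 0.1      # m, independent per-fix noise
true_positions = [i * 1.0 for i in range(20)]   # 1-m steps along a line

# Each fix carries the common bias plus small independent jitter
fixes = [p + bias + random.gauss(0.0, jitter) for p in true_positions]

# Differencing consecutive fixes cancels the shared bias entirely
steps = [b - a for a, b in zip(fixes, fixes[1:])]
errors = [abs(s - 1.0) for s in steps]
# max(errors) is set by the small jitter, not the 5-m bias
```

    The absolute position error of every fix remains around 5 m, yet the step-length error is an order of magnitude smaller, which is the effect the field protocol exploits.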

  9. Comparison of ETFs' performance related to the tracking error

    Directory of Open Access Journals (Sweden)

    Michaela Dorocáková

    2017-12-01

    Full Text Available With the development of financial markets, there is also an immediate expansion of the fund industry, a representative form of collective investment. The purpose of index funds is to replicate the returns and risk of their underlying index to the largest possible extent, with tracking error being one of the most closely monitored performance indicators of these passively managed funds. The aim of this paper is to describe several perspectives concerning indexing, index funds and exchange-traded funds, to explain the issue of tracking error with its examination and subsequent comparison across such funds provided by leading investment management companies with regard to the different methods used for its evaluation. Our research shows that the decisive factors for the occurrence of copy deviation are fund size and the fund's stock consolidation. In addition, performance differences between an exchange-traded fund and its benchmark tend to show signs of seasonality in the sense of increasing in the last months of a year.
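
    Tracking error is conventionally computed as the standard deviation of the differences between the fund's and the benchmark's periodic returns. The paper does not spell out its formula, so the sketch below uses this standard definition with made-up monthly returns:

```python
import statistics

def tracking_error(fund_returns, index_returns):
    """Tracking error: standard deviation of the periodic return
    differences between a fund and its benchmark index."""
    diffs = [f - b for f, b in zip(fund_returns, index_returns)]
    return statistics.stdev(diffs)   # sample standard deviation

# Hypothetical monthly returns (%) for an ETF and its underlying index
etf = [1.2, -0.5, 0.8, 2.1, -1.0, 0.4]
idx = [1.1, -0.4, 0.9, 2.0, -1.2, 0.5]

te_monthly = tracking_error(etf, idx)
te_annual = te_monthly * 12 ** 0.5   # common annualization of monthly TE
```

    A perfectly replicating fund would have a tracking error of zero; larger values indicate greater deviation from the benchmark, whichever direction it takes.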

  10. Measurement error in a burrow index to monitor relative population size in the common vole

    Czech Academy of Sciences Publication Activity Database

    Lisická, L.; Losík, J.; Zejda, Jan; Heroldová, Marta; Nesvadbová, Jiřina; Tkadlec, Emil

    2007-01-01

    Roč. 56, č. 2 (2007), s. 169-176 ISSN 0139-7893 R&D Projects: GA ČR GA206/04/2003 Institutional research plan: CEZ:AV0Z60930519 Keywords : bias * colonisation * dispersion * Microtus arvalis * precision * sampling error Subject RIV: EH - Ecology, Behaviour Impact factor: 0.376, year: 2007 http://www.ivb.cz/folia/56/2/169-176_MS1293.pdf

  11. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
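
    For reference, Spearman's full correction (which the paper modifies to yield only partial corrections) divides the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch with hypothetical numbers:

```python
def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's classical correction for attenuation:
    r_true = r_observed / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the reliabilities of the two measures."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Hypothetical values: observed r = .42, reliabilities .80 and .70
print(round(disattenuate(0.42, 0.80, 0.70), 3))   # → 0.561
```

    A partial correction would substitute target reliabilities (rather than 1.0) for the measures, which is the modification the paper develops for sample size calculations.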

  12. A Python tool to set up relative free energy calculations in GROMACS.

    Science.gov (United States)

    Klimovich, Pavel V; Mobley, David L

    2015-11-01

    Free energy calculations based on molecular dynamics (MD) simulations have seen a tremendous growth in the last decade. However, it is still difficult and tedious to set them up in an automated manner, as the majority of the present-day MD simulation packages lack that functionality. Relative free energy calculations are a particular challenge for several reasons, including the problem of finding a common substructure and mapping the transformation to be applied. Here we present a tool, alchemical-setup.py, that automatically generates all the input files needed to perform relative solvation and binding free energy calculations with the MD package GROMACS. When combined with Lead Optimization Mapper (LOMAP; Liu et al. in J Comput Aided Mol Des 27(9):755-770, 2013), recently developed in our group, alchemical-setup.py allows fully automated setup of relative free energy calculations in GROMACS. Taking a graph of the planned calculations and a mapping, both computed by LOMAP, our tool generates the topology and coordinate files needed to perform relative free energy calculations for a given set of molecules, and provides a set of simulation input parameters. The tool was validated by performing relative hydration free energy calculations for a handful of molecules from the SAMPL4 challenge (Mobley et al. in J Comput Aided Mol Des 28(4):135-150, 2014). Good agreement with previously published results and the straightforward way in which free energy calculations can be conducted make alchemical-setup.py a promising tool for automated setup of relative solvation and binding free energy calculations.

  13. Unintentional Pharmaceutical-Related Medication Errors Caused by Laypersons Reported to the Toxicological Information Centre in the Czech Republic.

    Science.gov (United States)

    Urban, Michal; Leššo, Roman; Pelclová, Daniela

    2016-07-01

    The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying frequency, sources, reasons and consequences of the medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup 0-5 year-old (46%). The reasons for the medication errors involved the leaflet misinterpretation and mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists.

  14. Application of Fermat's Principle to Calculation of the Errors of Acoustic Flow-Rate Measurements for a Three-Dimensional Flow of a Fluid or Gas

    Science.gov (United States)

    Petrov, A. G.; Shkundin, S. Z.

    2018-01-01

    Fermat's variational principle is used to derive the formula for the time of propagation of an acoustic signal between two fixed points A and B in a steady three-dimensional flow of a fluid or gas. It is shown that the fluid flow changes the time of signal reception by a value proportional to the flow rate, independently of the velocity profile. The difference between the reception times of signals travelling from point B to point A and vice versa is proportional, to high accuracy, to the flow rate. It is shown that the relative error of the formula does not exceed the square of the largest Mach number. This makes it possible to measure the flow rate of a fluid or gas with an arbitrary steady subsonic velocity field.
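    The reciprocal transit-time idea above can be sketched for the simplest case of a uniform flow; the function names and numbers below are illustrative, and the paper's variational result generalizes this textbook flowmeter model to arbitrary steady 3D velocity fields.

```python
import math

def transit_times(path_len, c, v, theta_deg):
    """Up- and downstream transit times for a *uniform* flow of speed v
    (sound speed c) along an acoustic path of length path_len inclined
    at theta_deg to the flow. Textbook transit-time flowmeter model;
    the paper's variational result covers arbitrary steady 3D fields."""
    u = v * math.cos(math.radians(theta_deg))  # flow component along path
    return path_len / (c + u), path_len / (c - u)

def flow_speed_estimate(path_len, c, theta_deg, dt):
    """Invert the leading-order relation dt ~ 2*path_len*v*cos(theta)/c**2."""
    return dt * c ** 2 / (2.0 * path_len * math.cos(math.radians(theta_deg)))
```

    For a 3 m/s flow at c = 340 m/s, inverting the exact time difference with the leading-order formula recovers the flow speed with a relative error below the squared Mach number, consistent with the error bound stated in the abstract.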

  15. The experimental viscosity and calculated relative viscosity of liquid In-Sn alloys

    International Nuclear Information System (INIS)

    Wu, A.Q.; Guo, L.J.; Liu, C.S.; Jia, E.G.; Zhu, Z.G.

    2007-01-01

    The experimentally measured viscosity of liquid pure Sn and of In20Sn80 and In80Sn20 alloys was studied; for comparison, the relative viscosity calculated from the pair distribution functions, g(r), was also studied. There is one peak in both the experimental-viscosity and the calculated relative-viscosity curves of liquid pure Sn at about 1000 °C. A valley appears in both the experimental and calculated viscosity curves of the liquid In20Sn80 alloy at about 700 °C. There is no abnormal behavior for the In80Sn20 alloy. The experimental viscosity and the calculated relative viscosity behave consistently with each other. These results confirm that the temperature-induced structure anomalies reported previously did take place

  16. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    Science.gov (United States)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with simple SPICE simulations. It accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.
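    Cross-section vs. LET data of the kind mentioned above are commonly summarized in single-event-effect work with a four-parameter Weibull fit. The sketch below shows that standard empirical form only; it is a general convention, not the paper's SPICE-based methodology, and the parameter values used with it are illustrative.

```python
import math

def weibull_cross_section(let, let_th, sigma_sat, width, shape):
    """Four-parameter Weibull form commonly used to summarize event
    cross-section vs. LET data in single-event-effect testing:
    zero below the threshold LET, saturating at sigma_sat."""
    if let <= let_th:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((let - let_th) / width) ** shape)))
```

    The threshold parameter plays the role of the measured threshold LET discussed in the abstract, and the saturation cross-section bounds the fitted curve at high LET.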

  17. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with fewer than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
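    The precision measure used above can be sketched analytically: for negative binomial counts with mean m and aggregation parameter k, the variance is m + m²/k, so the cv of the mean of n bulk samples follows directly. A minimal sketch, with a hypothetical k (in practice k is estimated from the plot data):

```python
import math

def cv_of_mean(mean_count, k, n_samples):
    """Coefficient of variation (as a fraction) of the mean of n_samples
    counts drawn from a negative binomial distribution with the given
    mean and aggregation parameter k (variance = mean + mean**2 / k).
    The value of k used below is hypothetical, not from the study."""
    variance = mean_count + mean_count ** 2 / k
    std_error = math.sqrt(variance / n_samples)
    return std_error / mean_count
```

    The sketch reproduces the qualitative findings: precision improves with more repetitions and with higher infestation density.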

  18. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    Science.gov (United States)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated by broadband sources dramatically degrades the angular-random-walk performance of a fiber-optic gyroscope. Many methods have been proposed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated, so it is difficult for the existing RIN-suppression methods to achieve optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by raised-cosine-type ripples in the optical spectrum are investigated in detail. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be used to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.

  19. Exploring behavioural determinants relating to health professional reporting of medication errors: a qualitative study using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-07-01

    Effective and efficient medication error reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst patient safety and organisational improvement goals and intentions appeared to be behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included beliefs about the consequences of reporting (lack of any feedback following reporting, and impacts on professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can inform the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.

  20. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    DEFF Research Database (Denmark)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which...... already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants...... and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon...

  1. Effects of exposure estimation errors on estimated exposure-response relations for PM2.5.

    Science.gov (United States)

    Cox, Louis Anthony Tony

    2018-07-01

    Associations between fine particulate matter (PM2.5) exposure concentrations and a wide variety of undesirable outcomes, from autism and auto theft to elderly mortality, suicide, and violent crime, have been widely reported. Influential articles have argued that reducing National Ambient Air Quality Standards for PM2.5 is desirable to reduce these outcomes. Yet, other studies have found that reducing black smoke and other particulate matter by as much as 70% and dozens of micrograms per cubic meter has not detectably affected all-cause mortality rates even after decades, despite strong, statistically significant positive exposure concentration-response (C-R) associations between them. This paper examines whether this disconnect between association and causation might be explained in part by ignored estimation errors in estimated exposure concentrations. We use EPA air quality monitor data from the Los Angeles area of California to examine the shapes of estimated C-R functions for PM2.5 when the true C-R functions are assumed to be step functions with well-defined response thresholds. The estimated C-R functions mistakenly show risk as smoothly increasing with concentrations even well below the response thresholds, thus incorrectly predicting substantial risk reductions from reductions in concentrations that do not affect health risks. We conclude that ignored estimation errors obscure the shapes of true C-R functions, including possible thresholds, possibly leading to unrealistic predictions of the changes in risk caused by changing exposures. Instead of estimating improvements in public health per unit reduction (e.g., per 10 µg/m 3 decrease) in average PM2.5 concentrations, it may be essential to consider how interventions change the distributions of exposure concentrations. Copyright © 2018 Elsevier Inc. All rights reserved.
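    The mechanism described above, in which a true threshold response appears as a smoothly rising concentration-response curve once exposures are estimated with error, can be reproduced with a minimal simulation. All numbers below (threshold, exposure range, error standard deviation) are illustrative, not the paper's data.

```python
import random

random.seed(0)

THRESHOLD = 12.0  # hypothetical true response threshold (illustrative units)

def true_risk(exposure):
    """Step-function concentration-response: no risk below the threshold."""
    return 1.0 if exposure >= THRESHOLD else 0.0

# Subjects with known true exposures but noisy exposure estimates.
records = []
for _ in range(20000):
    x_true = random.uniform(0.0, 25.0)
    x_est = x_true + random.gauss(0.0, 4.0)  # exposure estimation error
    records.append((x_est, true_risk(x_true)))

def risk_in_bin(lo, hi):
    """Average observed response within a bin of *estimated* exposure."""
    ys = [y for x, y in records if lo <= x < hi]
    return sum(ys) / len(ys)

# Estimated risk is nonzero in bins lying entirely below the true threshold,
# so the estimated C-R function appears to rise smoothly from low exposures.
low_bin_risk = risk_in_bin(6.0, 8.0)
```

    Binning by estimated rather than true exposure mixes above- and below-threshold subjects into every bin, which is exactly why the estimated curve hides the threshold.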

  2. Enhanced error related negativity amplitude in medication-naïve, comorbidity-free obsessive compulsive disorder.

    Science.gov (United States)

    Nawani, Hema; Narayanaswamy, Janardhanan C; Basavaraju, Shrinivasa; Bose, Anushree; Mahavir Agarwal, Sri; Venkatasubramanian, Ganesan; Janardhan Reddy, Y C

    2018-04-01

    Error monitoring and response inhibition are key cognitive deficits in obsessive-compulsive disorder (OCD). Frontal midline regions such as the cingulate cortex and pre-supplementary motor area are considered critical brain substrates of this deficit. The electrophysiological correlate of this dysfunction is a fronto-central event-related potential (ERP) occurring after an error, called the error-related negativity (ERN). In this study, we sought to compare ERN parameters between medication-naïve, comorbidity-free subjects with OCD and healthy controls (HC). Age-, sex- and handedness-matched subjects with medication-naïve, comorbidity-free OCD (N = 16) and healthy controls (N = 17) performed a modified version of the flanker task while EEG was acquired for the ERN. EEG signals were recorded from the electrodes FCz and Cz. Clinical severity of OCD was assessed using the Yale-Brown Obsessive Compulsive Scale. The subjects with OCD had significantly greater ERN amplitude at Cz and FCz. There were no significant correlations between ERN measures and illness severity measures. Overactive performance monitoring, as evidenced by enhanced ERN amplitude, could be considered a biomarker for OCD. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave-function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30%, which proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability to biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model

  5. The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    NARCIS (Netherlands)

    Lugtig, Peter

    2017-01-01

    This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey

  6. Evidence for specificity of the impact of punishment on error-related brain activity in high versus low trait anxious individuals.

    Science.gov (United States)

    Meyer, Alexandria; Gawlowska, Magda

    2017-10-01

    A previous study suggests that when participants were punished with a loud noise after committing errors, the error-related negativity (ERN) was enhanced in high trait anxious individuals. The current study sought to extend these findings by examining the ERN in conditions when punishment was related and unrelated to error commission as a function of individual differences in trait anxiety symptoms; further, the current study utilized an electric shock as an aversive unconditioned stimulus. Results confirmed that the ERN was increased when errors were punished among high trait anxious individuals compared to low anxious individuals; this effect was not observed when punishment was unrelated to errors. Findings suggest that the threat-value of errors may underlie the association between certain anxious traits and punishment-related increases in the ERN. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. The influence of relatives on the efficiency and error rate of familial searching.

    Directory of Open Access Journals (Sweden)

    Rori V Rohlfs

    We investigate the consequences of adopting the criteria used by the state of California, as described by Myers et al. (2011), for conducting familial searches. We carried out a simulation study of randomly generated profiles of related and unrelated individuals with 13-locus CODIS genotypes and YFiler® Y-chromosome haplotypes, on which the Myers protocol for relative identification was carried out. For first-degree relatives sharing a Y-chromosome haplotype, the Myers protocol has a high probability (80-99%) of identifying their relationship. For unrelated individuals, there is a low probability that an unrelated person in the database will be identified as a first-degree relative. For more distant Y-haplotype-sharing relatives (half-siblings, first cousins, half-first cousins or second cousins), there is a substantial probability that the more distant relative will be incorrectly identified as a first-degree relative. For example, there is a 3-18% probability that a first cousin will be identified as a full sibling, with the probability depending on the population background. Although the California familial search policy is likely to identify a first-degree relative if his profile is in the database, and it poses little risk of falsely identifying an unrelated individual in a database as a first-degree relative, there is a substantial risk of falsely identifying a more distant Y-haplotype-sharing relative in the database as a first-degree relative, with the consequence that their immediate family may become the target of further investigation. This risk falls disproportionately on those ethnic groups that are currently overrepresented in state and federal databases.

  8. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of a uranium hexafluoride conversion process
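    For independent measurements, the propagation rules listed above reduce, in the simplest case, to adding variances. A minimal sketch for a materials balance (the function name and argument structure are illustrative, not the chapter's notation):

```python
import math

def balance_std(input_stds, output_stds, inventory_stds):
    """Standard deviation of a materials balance such as
    MUF = inputs - outputs - inventory change, assuming all
    measurement errors are independent, so their variances add."""
    total_var = sum(s ** 2 for s in input_stds + output_stds + inventory_stds)
    return math.sqrt(total_var)
```

    Because variances add, the balance uncertainty is dominated by the largest individual measurement uncertainty, which is why error propagation matters for materials accounting.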

  9. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    Science.gov (United States)

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on the error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task comprising two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude was found after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source), whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  10. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2011-01-01

    technologies based on these two performance indicators. However, and to the best of our knowledge, the direct links between these two performance indicators have not been explicitly proposed in the literature so far. In this paper, we propose novel relations

  11. EEG-based decoding of error-related brain activity in a real-world driving task

    Science.gov (United States)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict a driver's intended turning direction before reaching a road intersection. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While the subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or absence of an error-related potential in the EEG to infer whether the cued direction coincided with the subject's intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level in all cases. Online experiments led to equivalent performances in both simulated and real car driving. These results support the feasibility of decoding these signals to help estimate whether the driver's intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study decoding a driver's error-related brain activity in a real car. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.
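    The single-trial decoding step can be illustrated with a deliberately simple nearest-class-mean classifier on one window-averaged amplitude feature, trained on synthetic epochs. The study itself would have used richer features and classifiers, so everything below (epoch length, window, deflection size) is a toy sketch, not the paper's pipeline.

```python
import random

random.seed(1)

def make_epoch(is_error):
    """Synthetic 100-sample EEG epoch; error trials get a negative
    deflection in samples 40-60 (a crude stand-in for an error potential)."""
    epoch = [random.gauss(0.0, 1.0) for _ in range(100)]
    if is_error:
        for i in range(40, 60):
            epoch[i] -= 3.0
    return epoch

def feature(epoch):
    """Window-averaged amplitude over the deflection window."""
    return sum(epoch[40:60]) / 20.0

# Train a nearest-class-mean classifier on labelled synthetic epochs.
train = [(make_epoch(label), label) for label in [True, False] * 50]
mean_err = sum(feature(ep) for ep, lab in train if lab) / 50.0
mean_ok = sum(feature(ep) for ep, lab in train if not lab) / 50.0

def predict(epoch):
    """True -> classified as an error trial."""
    f = feature(epoch)
    return abs(f - mean_err) < abs(f - mean_ok)
```

    With realistic EEG the class overlap is far larger, which is why reported accuracies sit near 0.7 rather than the near-perfect separation this toy setup produces.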

  12. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Introduction: 103Pd is a low-energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determination of the dosimetric parameters of brachytherapy sources before clinical application is considered essential. Therefore, the present study aimed to compare the dosimetric parameters of the target source in a water phantom and in soft tissue. Methods: Following the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared between a water phantom with a density of 0.998 g/cm3 and soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were calculated. Results: The simulation results indicated that, for the radial dose function and the anisotropy function, the water phantom and the soft tissue were in good agreement up to a distance of 1.5 cm. With increasing distance the difference increased, reaching 4% at 6 cm from the source. Conclusions: The results for the soft-tissue phantom compared with those for the water phantom indicated a 4% relative difference at a distance of 6 cm from the source. Therefore, the results of the water phantom, with a maximum error of 4%, can be used in practical applications instead of soft tissue. Moreover, the differences obtained at each distance could be used to correct the results where a soft-tissue medium is required.
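    The distance-dependent differences reported above suggest a simple correction scheme: tabulate the relative difference at each distance and divide it out of the water-phantom result. A minimal sketch (helper names and numbers are illustrative, not from the paper's MCNP4C tables):

```python
import math

def relative_difference_pct(dose_water, dose_tissue):
    """Percent relative difference between doses computed in a water
    phantom and in soft tissue at the same point (illustrative helper)."""
    return 100.0 * (dose_water - dose_tissue) / dose_tissue

def corrected_dose(dose_water, rel_diff_pct):
    """Recover the soft-tissue dose from the water-phantom dose using
    the tabulated relative difference at that distance."""
    return dose_water / (1.0 + rel_diff_pct / 100.0)
```

    At 6 cm, where the abstract reports a 4% difference, dividing the water-phantom dose by 1.04 would recover the soft-tissue value under this scheme.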

  13. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    Directory of Open Access Journals (Sweden)

    Cuicui Wang

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from a financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record the investor's brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The results of the ERN suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From this research, we can further understand how investors perceive money flow information in a financial market and the neural cognitive effects in the investment process.

  14. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    Science.gov (United States)

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first to analyze the neural basis of an investment process with money flow information from a financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record the investor's brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The results of the ERN suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From this research, we can further understand how investors perceive money flow information in a financial market and the neural cognitive effects in the investment process. PMID:26557139

  15. Increased error-related brain activity distinguishes generalized anxiety disorder with and without comorbid major depressive disorder.

    Science.gov (United States)

    Weinberg, Anna; Klein, Daniel N; Hajcak, Greg

    2012-11-01

    Generalized anxiety disorder (GAD) and major depressive disorder (MDD) are so frequently comorbid that some have suggested that the 2 should be collapsed into a single overarching "distress" disorder. Yet there is also increasing evidence that the 2 categories are not redundant. Neurobehavioral markers that differentiate GAD and MDD would be helpful in ongoing efforts to refine classification schemes based on neurobiological measures. The error-related negativity (ERN) may be one such marker. The ERN is an event-related potential component presenting as a negative deflection approximately 50 ms following an erroneous response, and it reflects activity of the anterior cingulate cortex. There is evidence for an enhanced ERN in individuals with GAD, but the literature in MDD is mixed. The present study measured the ERN in 26 GAD, 23 comorbid GAD and MDD, and 36 control participants, all of whom were female and medication-free. Consistent with previous research, the GAD group was characterized by a larger ERN and an increased difference between error and correct trials relative to controls. No such enhancement was evident in the comorbid group, suggesting that comorbid depression may moderate the relationship between the ERN and anxiety. The present study further suggests that the ERN is a potentially useful neurobiological marker for future studies that consider the pathophysiology of multiple disorders in order to construct or refine neurobiologically based diagnostic phenotypes. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  16. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    Science.gov (United States)

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has considerable potential for improvement. The overall variability that affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we focus on this topic and explain that post-experimental variability and sources of error can be further divided into those that are software-dependent and those that are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, and summarizing the advantages and drawbacks of each approach.

  17. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    A scheme is presented for calculating the errors of dry-matter values that occur when data are approximated with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given for the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth of oat and maize plants are given, and a critical analysis of the estimates obtained is carried out. The joint application of statistical methods and error calculus in plant growth analysis is shown to be worthwhile.
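
The error-calculus step described in this record can be sketched for RGR alone, assuming the standard definition RGR = (ln W2 - ln W1)/(t2 - t1) and first-order error propagation with exact sampling times; the numbers and the exact formulae are illustrative, not taken from the paper:

```python
import math

def rgr(w1, w2, t1, t2):
    """Relative growth rate between two dry-matter harvests."""
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def rgr_abs_error(w1, w2, t1, t2, dw1, dw2):
    """First-order propagated absolute error of RGR, treating the sampling
    times as exact: d(ln w) = dw / w."""
    return (dw1 / w1 + dw2 / w2) / (t2 - t1)

# Invented oat-like numbers: 2.0 g -> 5.4 g over 10 days, weights known to +/- 0.1 g
r = rgr(2.0, 5.4, 0.0, 10.0)
dr = rgr_abs_error(2.0, 5.4, 0.0, 10.0, 0.1, 0.1)
```

Analogous propagation formulae can be written for GR, ULR and LAR from their definitions.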

  18. Relative Dating and Classification of Minerals and Rocks Based on Statistical Calculations Related to Their Potential Energy Index

    OpenAIRE

    Labushev, Mikhail M.; Khokhlov, Alexander N.

    2012-01-01

    Index of proportionality of atomic weights of chemical elements is proposed for determining the relative age of minerals and rocks. Their chemical analysis results serve to be initial data for calculations. For rocks of different composition the index is considered to be classification value as well. Crystal lattice energy change in minerals and their associations can be measured by the index value change, thus contributing to the solution of important practical problems. There was determined...

  19. Relative Motion of the WDS 05110+3203 STF 648 System, With a Protocol for Calculating Relative Motion

    Science.gov (United States)

    Wiley, E. O.

    2010-07-01

    Relative motion studies of visual double stars can be investigated using least squares regression techniques and readily accessible programs such as Microsoft Excel and a calculator. Optical pairs differ from physical pairs under most geometries in both their simple scatter plots and their regression models. A step-by-step protocol for estimating the rectilinear elements of an optical pair is presented. The characteristics of physical pairs using these techniques are discussed.
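
The least-squares step of such a protocol can be sketched as follows; the observations and the conversion of position angle/separation to rectangular coordinates are invented for illustration and are not data for STF 648:

```python
import math

def linfit(t, z):
    """Ordinary least-squares fit z = a + b*t (what Excel's SLOPE/INTERCEPT
    or a hand calculator's linear regression mode computes)."""
    n = len(t)
    tbar, zbar = sum(t) / n, sum(z) / n
    b = (sum((ti - tbar) * (zi - zbar) for ti, zi in zip(t, z))
         / sum((ti - tbar) ** 2 for ti in t))
    return zbar - b * tbar, b

# Invented observations (epoch, position angle in deg, separation in arcsec);
# PA/sep are converted to rectangular components before fitting.
obs = [(1900.0, 30.0, 5.0), (1950.0, 32.0, 5.5), (2000.0, 34.0, 6.0)]
epochs = [e for e, _, _ in obs]
x = [rho * math.sin(math.radians(pa)) for _, pa, rho in obs]  # E-W component
y = [rho * math.cos(math.radians(pa)) for _, pa, rho in obs]  # N-S component
(x0, vx), (y0, vy) = linfit(epochs, x), linfit(epochs, y)     # rectilinear elements
```

A roughly linear drift in both components, as here, is the scatter-plot signature of an optical pair; a physical pair's residuals would show curvature.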

  20. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  1. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    Directory of Open Access Journals (Sweden)

    Andrew D Lowther

    Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species, but the data are prone to varying amounts of spatial error; the recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. Service Argos, the predominant platform for collecting satellite telemetry data on free-ranging animals, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of the performance of this new estimation process on data from free-ranging animals, relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movements of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations as GPS locations were derived on the devices used. Root mean square errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.

  2. Relative power density distribution calculations of the Kori unit 1 pressurized water reactor with full-scope explicit modeling of monte carlo simulation

    International Nuclear Information System (INIS)

    Kim, J. O.; Kim, J. K.

    1997-01-01

    Relative power density distributions of the Kori unit 1 pressurized water reactor were calculated by Monte Carlo modeling with the MCNP code. The Kori unit 1 core is modeled as a three-dimensional representation of one-eighth of the reactor in-vessel components, with reflective boundaries at 0 and 45 degrees. The axial core model is based on half-core symmetry and is divided into four axial segments. The fission reaction density in each rod is calculated by following 100 cycles of 5,000 test neutrons each, after starting with a localized neutron source and ten noncontributing settle cycles. Relative assembly power distributions are calculated from the fission reaction densities of the rods in each assembly. After the 100-cycle calculation, the system converges to a κ value of 1.00039 ± 0.00084. The relative assembly power distribution is nearly the same as that of the Kori unit 1 FSAR. The applicability of the full-scope Monte Carlo simulation to power distribution calculations is supported by a relative root-mean-square error of 2.159%. (author)
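
A relative root-mean-square comparison of the kind quoted in this record (2.159%) can be sketched as follows, with hypothetical normalised assembly powers standing in for the Kori unit 1 MCNP and FSAR values:

```python
import math

def relative_rms_error(calc, ref):
    """Relative root-mean-square deviation (in percent) between calculated
    and reference assembly powers."""
    terms = [((c - r) / r) ** 2 for c, r in zip(calc, ref)]
    return 100.0 * math.sqrt(sum(terms) / len(terms))

# Invented normalised assembly powers for four assemblies
mcnp = [1.02, 0.98, 1.11, 0.89]
fsar = [1.00, 1.00, 1.10, 0.90]
err = relative_rms_error(mcnp, fsar)  # in percent
```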

  3. Self-healing diffusion quantum Monte Carlo algorithms: methods for direct reduction of the fermion sign error in electronic structure calculations

    International Nuclear Information System (INIS)

    Reboredo, F.A.; Hood, R.Q.; Kent, P.C.

    2009-01-01

    We develop a formalism and present an algorithm for optimization of the trial wave function used in fixed-node diffusion quantum Monte Carlo (DMC) methods. The formalism is based on the DMC mixed estimator of the ground state probability density. We take advantage of a basic property of the walker configuration distribution generated in a DMC calculation to (i) project out a multi-determinant expansion of the fixed-node ground-state wave function and (ii) define a cost function that relates the interacting-ground-state-fixed-node and the non-interacting trial wave functions. We show that (a) locally smoothing out the kink of the fixed-node ground-state wave function at the node generates a new trial wave function with better nodal structure, and (b) we argue that the noise in the fixed-node wave function resulting from finite sampling plays a beneficial role, allowing the nodes to adjust towards those of the exact many-body ground state in a simulated annealing-like process. Based on these principles, we propose a method to improve both single-determinant and multi-determinant expansions of the trial wave function. The method can be generalized to other wave function forms such as pfaffians. We test the method in a model system where benchmark configuration interaction calculations can be performed and most components of the Hamiltonian are evaluated analytically. Comparing the DMC calculations with the exact solutions, we find that the trial wave function is systematically improved. The overlap of the optimized trial wave function and the exact ground state converges to 100% even starting from wave functions orthogonal to the exact ground state. Similarly, the DMC total energy and density converge to the exact solutions for the model. In the optimization process we find an optimal non-interacting nodal potential of density-functional-like form whose existence was predicted in a previous publication (Phys. Rev. B 77 245110 (2008)). Tests of the method are

  4. A relation between calculated human body exergy consumption rate and subjectively assessed thermal sensation

    DEFF Research Database (Denmark)

    Simone, Angela; Kolarik, Jakub; Iwamatsu, Toshiya

    2011-01-01

    …occupants, it is reasonable to consider both the exergy flows in the building and those within the human body. Until now, no data have been available on the relation between human-body exergy consumption rates and subjectively assessed thermal sensation. The objective of the present work was to relate thermal sensation data, from earlier thermal comfort studies, to calculated human-body exergy consumption rates. The results show that the minimum human-body exergy consumption rate is associated with thermal sensation votes close to thermal neutrality, tending to the slightly cool side of thermal sensation. Generally, the relationship between air temperature and the exergy consumption rate, as a first approximation, shows an increasing trend. Taking account of both convective and radiative heat exchange between the human body and the surrounding environment by using the calculated operative temperature, exergy…

  5. Calculation of the Green functions by the coupling constant dispersion relations

    International Nuclear Information System (INIS)

    Bogomalny, E.B.

    1977-01-01

    The discontinuities of the Green functions on the cut in the complex plane of the coupling constant are calculated by the steepest descent method. The saddle points are given by the solutions of the classical field equations at those values of the coupling constant for which the classical theory has no ground state. The Green functions at the physical values of the coupling constant are determined by dispersion relations. (Auth.)

  6. Error related negativity and multi-source interference task in children with attention deficit hyperactivity disorder-combined type

    Directory of Open Access Journals (Sweden)

    Rosana Huerta-Albarrán

    2015-03-01

    Objective: To compare the performance of children with attention deficit hyperactivity disorder-combined (ADHD-C) type with that of control children in the multi-source interference task (MSIT), evaluated by means of the error-related negativity (ERN). Method: We studied 12 children with ADHD-C type with a median age of 7 years; control children were age- and gender-matched. Children performed the MSIT during simultaneous recording of the ERN. Results: We found no differences in MSIT parameters between groups, and no differences in ERN variables between groups. We found a significant association of ERN amplitude with MSIT performance in children with ADHD-C type; some correlations were positive (frequency of hits and MSIT amplitude) and others negative (frequency of errors and RT in the MSIT). Conclusion: Children with ADHD-C type exhibited a significant association between ERN amplitude and MSIT performance. These results underline the participation of a cingulo-fronto-parietal network and could help in understanding the pathophysiological mechanisms of ADHD.

  7. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first "n" trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (the best of n trials method, BnT). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon signed-rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 with a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning-effect-related systematic errors.

  8. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

    In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010), naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology was also reviewed in order to benchmark claims data with changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post, 95% confidence interval: 33682.5-300936.2 pre vs. 1687.8-160526.7 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.

  9. Direct calculation of self-consistent π bond orders in conjugated systems and pairing relations

    International Nuclear Information System (INIS)

    Castro, A.F.

    1982-01-01

    Pairing relations in excited states of conjugated systems that satisfy a given symmetry are studied within a Pariser-Parr-Pople-like (PPP) calculation. Six π-electron systems having a symmetry axis that does not cross π centers are considered, following a treatment that permits direct calculation of the bond-order matrix based on Hall's method. Pairing relations are also sought using particular solutions when the U(3) group is applied. Pyridazine molecules are used to test the results. (L.C.) [pt

  10. Accurate thermodynamic relations of the melting temperature of nanocrystals with different shapes and pure theoretical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Jinhua; Fu, Qingshan; Xue, Yongqiang, E-mail: xyqlw@126.com; Cui, Zixiang

    2017-05-01

    Based on the surface pre-melting model, accurate thermodynamic relations for the melting temperature of nanocrystals with different shapes (tetrahedron, cube, octahedron, dodecahedron, icosahedron, nanowire) were derived. The theoretically calculated melting temperatures are in relatively good agreement with experimental, molecular dynamics simulation and other theoretical results for nanometer Au, Ag, Al, In and Pb. It is found that particle size and shape have notable effects on the melting temperature of nanocrystals, and the smaller the particle size, the greater the effect of shape. Furthermore, at the same equivalent radius, the more the shape deviates from a sphere, the lower the melting temperature. The melting temperature depression of a cylindrical nanowire is just half that of a spherical nanoparticle with an identical radius. The theoretical relations make it possible to describe quantitatively how size and shape influence the melting temperature and provide an effective way to predict and interpret the melting temperatures of nanocrystals with different sizes and shapes. - Highlights: • Accurate relations for the melting temperature Tm of nanocrystals with various shapes are derived. • Calculated Tm agree with literature results for nano Au, Ag, Al, In and Pb. • ΔTm(nanowire) = 0.5 ΔTm(spherical nanocrystal). • The relations apply to predicting and interpreting the melting behaviors of nanocrystals.

  11. Thermal conductivity calculation of nano-suspensions using Green–Kubo relations with reduced artificial correlations

    International Nuclear Information System (INIS)

    Muraleedharan, Murali Gopal; Yang, Vigor; Sundaram, Dilip Srinivas; Henry, Asegun

    2017-01-01

    The presence of artificial correlations associated with Green–Kubo (GK) thermal conductivity calculations is investigated. The thermal conductivity of nano-suspensions is calculated by equilibrium molecular dynamics (EMD) simulations using GK relations. Calculations are first performed for a single alumina (Al2O3) nanoparticle dispersed in a water medium. For a particle size of 1 nm and volume fraction of 9%, results show enhancements as high as 235%, which is much higher than the Maxwell model predictions. When calculations are done with multiple suspended particles, no such anomalous enhancement is observed. This is because the vibrations in the alumina crystal can act as low frequency perturbations, which can travel long distances through the surrounding water medium, characterized by higher vibration frequencies. As a result of the periodic boundaries, they re-enter the system resulting in a circular resonance of thermal fluctuations between the alumina particle and its own image, eventually leading to artificial correlations in the heat current autocorrelation function (HCACF), which when integrated yields abnormally high thermal conductivities. Adding more particles presents ‘obstacles’ with which the fluctuations interact and get dissipated, before they get fed back to the periodic image. A systematic study of the temporal evolution of HCACF indicates that the magnitude and oscillations of artificial correlations decrease substantially with increase in the number of suspended nanoparticles. (paper)
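
The Green–Kubo procedure this record relies on — accumulate the HCACF, then integrate it to a plateau — can be sketched in simplified scalar form. A toy damped signal stands in for the MD heat current, and the physical prefactor is omitted (conventions vary):

```python
import math

def autocorrelation(j, max_lag):
    """<J(0)*J(t)> averaged over time origins, for lags 0..max_lag-1."""
    n = len(j)
    return [sum(j[i] * j[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def gk_running_integral(acf, dt):
    """Trapezoidal running integral of the HCACF. The GK thermal conductivity
    is the plateau value times a prefactor (1/(3*V*kB*T**2) in one common
    convention), which is omitted here."""
    out, total = [0.0], 0.0
    for a, b in zip(acf, acf[1:]):
        total += 0.5 * (a + b) * dt
        out.append(total)
    return out

# Toy scalar "heat current" whose correlations decay in time
j = [math.exp(-0.01 * i) * math.cos(0.3 * i) for i in range(2000)]
acf = autocorrelation(j, 200)
plateau = gk_running_integral(acf, dt=1.0)[-1]
```

The artificial correlations discussed in the abstract would show up as a long-lived oscillatory tail in `acf`, preventing the running integral from reaching a stable plateau.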

  12. Stormwater Management: Calculation of Traffic Area Runoff Loads and Traffic Related Emissions

    Directory of Open Access Journals (Sweden)

    Maximilian Huber

    2016-07-01

    Metals such as antimony, cadmium, chromium, copper, lead, nickel, and zinc can be highly relevant pollutants in stormwater runoff from traffic areas because of their occurrence, toxicity, and non-degradability. Long-term measurements of their concentrations, the corresponding water volumes, the catchment areas, and the traffic volumes can be used to calculate the specific emission loads and annual runoff loads that are necessary for mass balances. In the literature, annual runoff loads are often specified per unit catchment area (e.g., in g/ha). These loads are summarized and discussed in this paper for all seven metals and three types of traffic areas (highways, parking lots, and roads; 45 sites). For example, the calculated median annual runoff loads across all sites are 355 g/ha for copper, 110 g/ha for lead (21st-century data only), and 1960 g/ha for zinc. In addition, historical trends, annual variations, and site-specific factors were evaluated for the runoff loads. For Germany, mass balances of traffic-related emissions and annual heavy metal runoff loads from highways and total traffic areas were calculated, and the influences on the mass fluxes of heavy metal emissions and runoff pollution are discussed. However, a statistical analysis of the annual traffic-related metal fluxes, in particular for different traffic area categories and land uses, is currently not possible because of a lack of monitoring data.
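
The specific annual load calculation behind figures such as 355 g/ha can be sketched as follows; the event concentrations, volumes, and catchment area are invented for illustration:

```python
def annual_specific_load(events, area_ha):
    """Specific annual runoff load in g/ha from per-event monitoring data.
    events: (concentration in mg/L, runoff volume in m3) pairs for one year.
    Since 1 mg/L * 1 m3 = 1 g, the event load in grams is simply c * V."""
    total_g = sum(c * v for c, v in events)
    return total_g / area_ha

# Invented one-year zinc record for a 2.5 ha highway section
zn_events = [(0.4, 800.0), (0.9, 1500.0), (0.25, 2200.0)]
zn_load = annual_specific_load(zn_events, 2.5)  # in g/(ha*a)
```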

  13. The calculation of relative output factor and depth dose for irregular electron fields in water

    International Nuclear Information System (INIS)

    Dunscombe, Peter; McGhee, Peter; Chu, Terence

    1996-01-01

    Purpose: A technique based on sector integration and interpolation has been developed for computing both the relative output factor and the depth dose of irregular electron fields in water. The purpose of this study was to determine the minimum experimental data set required for the technique to yield results within accepted dosimetric tolerances. Materials and Methods: PC-based software was written to perform the calculations necessary to characterize irregularly shaped electron fields dosimetrically. The field outline is entered via a digitiser, and the SSD and energy via the keyboard. The irregular field is segmented into sectors of specified angle (2° was used for this study) and the radius of each sector computed. The central-ray depth dose is reconstructed by summing the contributions from each sector, deduced from calibration depth doses measured for circular fields. Relative output factors and depth doses at SSDs at which calibrations were not performed are found by interpolation. Calibration data were measured for circular fields from 2 to 9 cm diameter at 100, 105, 110, and 115 cm SSD. A clinical cut-out can be characterized in less than 2 minutes, including entry of the outline, using this software. The performance of the technique was evaluated by comparing calculated relative output factors, surface dose, and the locations of d80, d50 and d20 with experimental measurements on a variety of cut-out shapes at 9 and 18 MeV. The calibration data set (derived from circular cut-outs) was systematically reduced to identify the minimum required to yield an accuracy consistent with current recommendations. Results: The figure illustrates the ability of the technique to calculate the depth dose for an irregular field (shown in the insert).
    It was found that to achieve an accuracy of 2% in relative output factor and 2% or 2 mm (our criterion) in percentage depth dose, calibration data from five circular fields at the four SSDs spanning the range 100-115 cm
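
The sector-integration step this record describes can be sketched geometrically as follows, assuming a convex field outline and some circular-field calibration lookup `value_of_radius`; this is an illustration of the idea, not the authors' software:

```python
import math

def sector_radii(outline, centre, step_deg=2.0):
    """Distance from the calculation point to the field edge in each sector.
    outline: a convex field shape as a list of (x, y) vertices; one ray is
    cast per sector and the nearest edge intersection taken."""
    cx, cy = centre
    pts = outline + [outline[0]]
    radii = []
    for k in range(int(360 / step_deg)):
        th = math.radians(k * step_deg)
        dx, dy = math.cos(th), math.sin(th)
        best = None
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # Solve centre + t*(dx, dy) = (x1, y1) + s*(x2 - x1, y2 - y1)
            ex, ey = x2 - x1, y2 - y1
            den = dx * ey - dy * ex
            if abs(den) < 1e-12:          # ray parallel to this edge
                continue
            t = ((x1 - cx) * ey - (y1 - cy) * ex) / den
            s = ((x1 - cx) * dy - (y1 - cy) * dx) / den
            if t > 0.0 and 0.0 <= s <= 1.0 and (best is None or t < best):
                best = t
        radii.append(best)
    return radii

def sector_average(radii, value_of_radius):
    """Central-axis quantity as the mean over sectors of a circular-field
    calibration curve evaluated at each sector radius."""
    return sum(value_of_radius(r) for r in radii) / len(radii)

# Example: a square cut-out 4 cm on a side, calculation point at its centre
square = [(-2.0, -2.0), (2.0, -2.0), (2.0, 2.0), (-2.0, 2.0)]
radii = sector_radii(square, (0.0, 0.0))
mean_radius = sector_average(radii, lambda r: r)  # identity stands in for a calibration lookup
```

In the real method, `value_of_radius` would interpolate the measured circular-field depth doses or output factors at the relevant SSD and depth.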

  14. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    International Nuclear Information System (INIS)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; Nørskov, Jens K.

    2017-01-01

    Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by training a Gaussian process on adsorption energies using group-additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete, given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  15. Investigation of metal/carbon-related materials for fuel cell applications by electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Ki-jeong [Korea Research Institute of Chemical Technology, P.O.Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of)]. E-mail: kong@krict.re.kr; Choi, Youngmin [Korea Research Institute of Chemical Technology, P.O.Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of); Ryu, Beyong-Hwan [Korea Research Institute of Chemical Technology, P.O.Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of); Lee, Jeong-O [Korea Research Institute of Chemical Technology, P.O.Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of); Chang, Hyunju [Korea Research Institute of Chemical Technology, P.O.Box 107, Yuseong, Daejeon 305-600 (Korea, Republic of)

    2006-07-15

    The potential of carbon-related materials, such as carbon nanotubes (CNTs) and graphite nanofibers (GNFs), as supports for metal catalysts in fuel cell electrodes was investigated using first-principles electronic structure calculations. The stable binding geometries and energies of the metal catalysts were determined on the CNT surface and at the GNF edge. The catalyst metal is bound more tightly to the GNF edge than to the CNT surface because of the active dangling bonds of the edge carbon atoms. The diffusion barriers of metal atoms on the surface and edge were also obtained. From our calculation results, we find that high dispersity is achievable for GNFs, owing to the high barrier against metal-atom diffusion, while CNTs appear less suitable. A GNF with a large edge-to-wall ratio is more suitable for a high-performance electrode than perfect crystalline graphite or CNTs.

  16. Investigation of metal/carbon-related materials for fuel cell applications by electronic structure calculations

    International Nuclear Information System (INIS)

    Kong, Ki-jeong; Choi, Youngmin; Ryu, Beyong-Hwan; Lee, Jeong-O; Chang, Hyunju

    2006-01-01

    The potential of carbon-related materials, such as carbon nanotubes (CNTs) and graphite nanofibers (GNFs), as supports for metal catalysts in fuel cell electrodes was investigated using first-principles electronic structure calculations. The stable binding geometries and energies of the metal catalysts were determined on the CNT surface and at the GNF edge. The catalyst metal is bound more tightly to the GNF edge than to the CNT surface because of the active dangling bonds of the edge carbon atoms. The diffusion barriers of metal atoms on the surface and edge were also obtained. From our calculation results, we find that high dispersity is achievable for GNFs, owing to the high barrier against metal-atom diffusion, while CNTs appear less suitable. A GNF with a large edge-to-wall ratio is more suitable for a high-performance electrode than perfect crystalline graphite or CNTs

  17. Study of errors of calculation of elements of target functions of a model of future development and disposition of coal mining for coking

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, M I

    1979-01-01

    Bounds are obtained on the coefficients of growth of capital investment in production construction (K) and of the net cost (C) of mining commercial coal for the principal coal basins, together with the most probable error values of these quantities for the sample examined as a whole. The dependence of the growth of the target-function elements of the model on growth in K and C is plotted.

  18. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
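
The L-BFGS machinery this record refers to — an inverse-Hessian approximation built from the last m step/gradient-difference pairs on top of a diagonal initial preconditioner — is conventionally applied through the two-loop recursion, sketched here in plain Python as a generic illustration (not the authors' 4D-Var code):

```python
def lbfgs_two_loop(grad, s_list, y_list, h0_diag):
    """Apply the L-BFGS inverse-Hessian approximation to a vector via the
    two-loop recursion. s_list/y_list hold the last m step and gradient-
    difference pairs (oldest first); h0_diag is the diagonal initial
    preconditioner whose choice the abstract discusses."""
    q = list(grad)
    rhos = [1.0 / sum(si * yi for si, yi in zip(s, y))
            for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    r = [h * qi for h, qi in zip(h0_diag, q)]  # initial preconditioning step
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r

# With a single stored pair, the secant condition H*y = s holds exactly
approx = lbfgs_two_loop([2.0, 5.0], [[1.0, 1.0]], [[2.0, 5.0]], [1.0, 1.0])
```

With no stored pairs the recursion reduces to applying the diagonal preconditioner itself, which is why the quality of that initial approximation matters so much for the analysis-error estimates discussed above.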

  19. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error is Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = [1/ln(T)](ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
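    The error budget above can be evaluated numerically; a minimal sketch (function and parameter names are ours), treating the contributions as additive worst-case fractional errors:

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Fractional opacity error:
    dk/k = |1/ln T| * (dB/B + dB0/B0) + d(rho*L)/(rho*L)."""
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + dpL_over_pL
```

    Note that as T approaches 1, |ln T| approaches zero and the backlighter-signal terms dominate the budget, which is why transmission is kept well below unity.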

  20. Processing of action- but not stimulus-related prediction errors differs between active and observational feedback learning.

    Science.gov (United States)

    Kobza, Stefan; Bellebaum, Christian

    2015-01-01

    Learning of stimulus-response-outcome associations is driven by outcome prediction errors (PEs). Previous studies have shown larger PE-dependent activity in the striatum for learning from own as compared to observed actions and the following outcomes despite comparable learning rates. We hypothesised that this finding relates primarily to a stronger integration of action and outcome information in active learners. Using functional magnetic resonance imaging, we investigated brain activations related to action-dependent PEs, reflecting the deviation between action values and obtained outcomes, and action-independent PEs, reflecting the deviation between subjective values of response-preceding cues and obtained outcomes. To this end, 16 active and 15 observational learners engaged in a probabilistic learning card-guessing paradigm. On each trial, active learners saw one out of five cues and pressed either a left or right response button to receive feedback (monetary win or loss). Each observational learner observed exactly those cues, responses and outcomes of one active learner. Learning performance was assessed in active test trials without feedback and did not differ between groups. For both types of PEs, activations were found in the globus pallidus, putamen, cerebellum, and insula in active learners. However, only for action-dependent PEs, activations in these structures and the anterior cingulate were increased in active relative to observational learners. Thus, PE-related activity in the reward system is not generally enhanced in active relative to observational learning but only for action-dependent PEs. For the cerebellum, additional activations were found across groups for cue-related uncertainty, thereby emphasising the cerebellum's role in stimulus-outcome learning. Copyright © 2014 Elsevier Ltd. All rights reserved.
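    The distinction between the two PE types can be illustrated with a toy delta-rule simulation (the task structure, learning rate, and reward probabilities here are invented for illustration; this is not the authors' fitted model). The action-dependent PE is computed against the chosen action's value, while the action-independent PE is computed against the cue value, which averages over both actions:

```python
import random

def simulate(n_trials=1000, alpha=0.1, p_reward=(0.8, 0.2), seed=0):
    """Toy model: two actions under one cue. Returns the learned action
    values q and the learned cue value v after delta-rule updates."""
    rng = random.Random(seed)
    q = [0.0, 0.0]   # action values (used for action-dependent PEs)
    v = 0.0          # cue value (used for action-independent PEs)
    for _ in range(n_trials):
        a = rng.randrange(2)
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        pe_action = r - q[a]   # action-dependent prediction error
        pe_cue = r - v         # action-independent (cue-based) prediction error
        q[a] += alpha * pe_action
        v += alpha * pe_cue
    return q, v
```

    Because v tracks the average outcome across actions, the two PEs diverge on every trial where the chosen action's value differs from the cue value.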

  1. Considerations and calculations on the breakup of jets and drops of melt related to premixing

    Energy Technology Data Exchange (ETDEWEB)

    Buerger, M.; Berg, E. von; Buck, M. [Inst. fuer Kernenergetik und Energiesysteme (IKE), Univ. of Stuttgart, Pfaffenwaldring 31, 70569 Stuttgart (Germany)

    1998-01-01

    Various descriptions of jet and drop breakup are currently applied in premixing codes. The main task is to check these descriptions over a wide range of conditions in order to assure extrapolation capabilities for the codes. Jet breakup under non-boiling conditions is relatively well described by IKEJET, based on the Conte/Miles (CM) instability description and a relatively detailed stripping model, in contrast to approaches using Kelvin/Helmholtz (KH) theory. Remaining open questions are elaborated. In particular, thick-jet behavior, with dominance of stripping even at small relative velocities, must be distinguished from thin jets with coarse breakup. The application of IKEJET to cases with jet breakup under strong film boiling yielded significantly too little fragmentation. As a possible line of explanation, multiphase effects on wave growth and stripping, due to entrainment of melt and water, are considered. Parametric checking calculations are performed with a strongly simplified approach for the PREMIX and FARO experiments in order to reveal the main effects and possible physical explanations as a basis for extended modelling. The results indicate that jet breakup may be essentially sufficient to explain the experimental behavior. Coalescence, rather than further drop breakup, may be expected. This is also indicated by calculations with IKE drop breakup models. (author)

  2. A relation between calculated human body exergy consumption rate and subjectively assessed thermal sensation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, Angela; Kolarik, Jakub; Olesen, Bjarne W. [ICIEE/BYG, Technical University of Denmark (Denmark); Iwamatsu, Toshiya [Faculty of Urban Environmental Science, Tokyo Metropolitan University (Japan); Asada, Hideo [Architech Consulting Co., Tokyo (Japan); Dovjak, Mateja [Faculty of Civil and Geodetic Engineering, University of Ljubljana (Slovenia); Schellen, Lisje [Eindhoven University of Technology, Faculty of Architecture Building and Planning (Netherlands); Shukuya, Masanori [Laboratory of Building Environment, Tokyo City University, Yokohama (Japan)

    2011-01-15

    Application of the exergy concept to research on the built environment is a relatively new approach. It helps to optimize climate conditioning systems so that they meet the requirements of sustainable building design. As the building should provide a healthy and comfortable environment for its occupants, it is reasonable to consider both the exergy flows in the building and those within the human body. Until now, no data have been available on the relation between human-body exergy consumption rates and subjectively assessed thermal sensation. The objective of the present work was to relate thermal sensation data, from earlier thermal comfort studies, to calculated human-body exergy consumption rates. The results show that the minimum human-body exergy consumption rate is associated with thermal sensation votes close to thermal neutrality, tending towards the slightly cool side of thermal sensation. Generally, the relationship between air temperature and the exergy consumption rate, as a first approximation, shows an increasing trend. Taking account of both convective and radiative heat exchange between the human body and the surrounding environment by using the calculated operative temperature, exergy consumption rates increase as the operative temperature increases above 24 °C or decreases below 22 °C. With the data available so far, a second-order polynomial relationship between thermal sensation and the exergy consumption rate was established. (author)
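    A second-order polynomial of the reported kind can be fitted by ordinary least squares; the data below are invented to mimic the reported shape (a minimum slightly on the cool side of neutrality) and are not the study's measurements:

```python
import numpy as np

def fit_sensation_exergy(tsv, exergy):
    """Least-squares fit exergy ~ a*TSV^2 + b*TSV + c; returns the
    coefficients and the TSV at which exergy consumption is minimal."""
    a, b, c = np.polyfit(tsv, exergy, 2)
    return (a, b, c), -b / (2.0 * a)

# Illustrative data: minimum exergy consumption at TSV = -0.5 (slightly cool).
tsv = np.linspace(-3.0, 3.0, 13)
exergy = 2.0 + 0.5 * (tsv + 0.5) ** 2
```

    The vertex −b/(2a) of the fitted parabola gives the thermal sensation vote at minimum exergy consumption rate.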

  3. Relative Binding Free Energy Calculations in Drug Discovery: Recent Advances and Practical Considerations.

    Science.gov (United States)

    Cournia, Zoe; Allen, Bryce; Sherman, Woody

    2017-12-26

    Accurate in silico prediction of protein-ligand binding affinities has been a primary objective of structure-based drug design for decades due to the putative value it would bring to the drug discovery process. However, computational methods have historically failed to deliver value in real-world drug discovery applications due to a variety of scientific, technical, and practical challenges. Recently, a family of approaches commonly referred to as relative binding free energy (RBFE) calculations, which rely on physics-based molecular simulations and statistical mechanics, have shown promise in reliably generating accurate predictions in the context of drug discovery projects. This advance arises from accumulating developments in the underlying scientific methods (decades of research on force fields and sampling algorithms) coupled with vast increases in computational resources (graphics processing units and cloud infrastructures). Mounting evidence from retrospective validation studies, blind challenge predictions, and prospective applications suggests that RBFE simulations can now predict the affinity differences for congeneric ligands with sufficient accuracy and throughput to deliver considerable value in hit-to-lead and lead optimization efforts. Here, we present an overview of current RBFE implementations, highlighting recent advances and remaining challenges, along with examples that emphasize practical considerations for obtaining reliable RBFE results. We focus specifically on relative binding free energies because the calculations are less computationally intensive than absolute binding free energy (ABFE) calculations and map directly onto the hit-to-lead and lead optimization processes, where the prediction of relative binding energies between a reference molecule and new ideas (virtual molecules) can be used to prioritize molecules for synthesis. 
We describe the critical aspects of running RBFE calculations, from both theoretical and applied perspectives

  4. Information Management System Development for the Characterization and Analysis of Human Error in Naval Aviation Maintenance Related Mishaps

    National Research Council Canada - National Science Library

    Wood, Brian

    2000-01-01

    .... The Human Factors Analysis and Classification System-Maintenance Extension taxonomy, an effective framework for classifying and analyzing the presence of maintenance errors that lead to mishaps...

  5. Throughput Estimation Method in Burst ACK Scheme for Optimizing Frame Size and Burst Frame Number Appropriate to SNR-Related Error Rate

    Science.gov (United States)

    Ohteru, Shoko; Kishine, Keiji

    The Burst ACK scheme enhances effective throughput by reducing ACK overhead when a transmitter sends multiple data frames sequentially to a destination; IEEE 802.11e is one such example. The size of the data frame body and the number of burst data frames are important burst transmission parameters that affect throughput. The larger the burst transmission parameters are, the better the throughput becomes under error-free conditions. However, a large data frame can reduce throughput under error-prone conditions caused by signal-to-noise ratio (SNR) deterioration. If the throughput can be calculated from the burst transmission parameters and the error rate, the appropriate ranges of the burst transmission parameters can be narrowed down, and the buffer size needed to store transmitted or received data temporarily can be estimated. In this paper, we present a method featuring a simple algorithm for estimating the effective throughput from the burst transmission parameters and the error rate. The calculated throughput values agree well with those measured for actual wireless boards based on an original IEEE 802.11-based MAC protocol. We also calculate throughput values for burst transmission parameters larger than the assignable values of the wireless boards and find the appropriate values of the burst transmission parameters.
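    A simplified throughput model of this kind can be sketched as follows, assuming independent bit errors so a frame survives with probability (1 − BER)^bits; the PHY rate, overhead term, and function names are our illustrative assumptions, not the paper's algorithm:

```python
def burst_throughput(payload_bytes, n_frames, bit_error_rate,
                     phy_rate_bps=54e6, overhead_s=200e-6):
    """Effective throughput (bit/s) of a burst of n_frames data frames
    followed by one Burst ACK exchange. A corrupted frame consumes
    airtime but contributes no goodput; headers are ignored."""
    frame_bits = 8 * payload_bytes
    p_ok = (1.0 - bit_error_rate) ** frame_bits   # frame success probability
    airtime_s = n_frames * frame_bits / phy_rate_bps + overhead_s
    goodput_bits = n_frames * frame_bits * p_ok
    return goodput_bits / airtime_s
```

    The model reproduces the qualitative trade-off in the abstract: larger bursts amortize the fixed ACK overhead when the channel is clean, while a higher error rate penalizes large frames through the exponent in p_ok.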

  6. Large-scale compensation of errors in pairwise-additive empirical force fields: comparison of AMBER intermolecular terms with rigorous DFT-SAPT calculations

    Czech Academy of Sciences Publication Activity Database

    Zgarbová, M.; Otyepka, M.; Šponer, Jiří; Hobza, P.; Jurečka, P.

    2010-01-01

    Roč. 12, č. 35 (2010), s. 10476-10493 ISSN 1463-9076 R&D Projects: GA ČR(CZ) GA203/09/1476 Grant - others:GA MŠk(CZ) LC512; GA MŠk(CZ) GD203/09/H046 Program:LC; GD Institutional research plan: CEZ:AV0Z50040507; CEZ:AV0Z50040702 Keywords : amber empirical potential * DFT-SAPT * compensation of errors Subject RIV: BO - Biophysics Impact factor: 3.454, year: 2010

  7. Some features of excited states density matrix calculation and their pairing relations in conjugated systems

    International Nuclear Information System (INIS)

    Giambiagi, M.S. de; Giambiagi, M.

    1982-01-01

    Direct PPP-type calculations of self-consistent (SC) density matrices for excited states are described and the corresponding 'thawn' molecular orbitals (MO) are discussed. Special attention is addressed to particular solutions arising in conjugated systems of a certain symmetry, and to their chemical implications. The U(2) and U(3) algebras are applied respectively to the 4-electron and 6-electron cases: a natural separation of excited states in different cases follows. A simple approach to the convergence problem for excited states is given. The complementarity relations, an alternative formulation of the pairing theorem valid for heteromolecules and non-alternant systems, allow some fruitful experimental applications. Together with the extended pairing relations shown here, they may help to rationalize general trends. (Author) [pt

  8. DFT/GIAO calculations of the relative contributions of hyperconjugation to the chemical shifts of ethanol

    International Nuclear Information System (INIS)

    Carneiro, J. Walkimar de M.; Dias, Jacques F.; Seidl, Peter R.; Tostes, J. Glauco R.

    2002-01-01

    Our previous DFT/GIAO calculations on different types of alcohols reveal that rotation of the hydroxyl group can affect the chemical shifts of carbons and hydrogens close to the substituent in different ways. Besides the steric and electrostatic effects that have been widely studied, hyperconjugation with the lone pairs on the oxygen of the hydroxyl group leads to changes in bond lengths and angles as well as to different charge distributions. As all three of these factors also affect chemical shifts, we undertook a systematic investigation of their relative contributions to the chemical shifts of ethanol, a molecule in which there is minimal interference among these factors. Calculations by the B3LYP method at the 6-31G(d) level for ethanol conformers, corresponding to rotation around the carbon-oxygen bond in 30-degree increments, are used to show how the relative contributions vary with the dihedral angle formed between the carbon-carbon and oxygen-hydrogen bonds (C-C-O-H). The largest contributions to carbon chemical shifts can be attributed to changes in bond lengths, while differences in charge distribution also contribute significantly to hydrogen chemical shifts. (author)

  9. Calculation of prevalence estimates through differential equations: application to stroke-related disability.

    Science.gov (United States)

    Mar, Javier; Sainz-Ezkerra, María; Moler-Cuiral, Jose Antonio

    2008-01-01

    Neurological diseases now make up 6.3% of the global burden of disease, mainly because they cause disability. To assess disability, prevalence estimates are needed. The objective of this study is to apply a method based on differential equations to calculate the prevalence of stroke-related disability. On the basis of a flow diagram, a set of differential equations for each age group was constructed. The linear system was solved analytically and numerically. The parameters of the system were obtained from the literature. The model was validated and calibrated by comparison with previous results. The stroke prevalence rate per 100,000 men was 828, and the rate for stroke-related disability was 331. The rates steadily rose with age, but the group between the ages of 65 and 74 years had the highest total number of individuals. Differential equations are useful for representing the natural history of neurological diseases and make it possible to calculate the prevalence of the various states of disability. In our experience, when compared with the results obtained by Markov models, the benefit of treating time continuously outweighs the mathematical requirements of our model. (c) 2008 S. Karger AG, Basel.
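    A minimal flow model of this kind can be written as a pair of differential equations and integrated numerically; the compartments and per-person-year rates below are illustrative, not the paper's calibrated system:

```python
def stroke_prevalence(incidence, excess_mortality, general_mortality,
                      years=100.0, dt=0.01):
    """Forward-Euler integration of a two-compartment flow model:
        dH/dt = -(i + m) * H          (healthy pool)
        dS/dt =  i * H - (m + e) * S  (stroke-disability pool)
    Returns prevalence S / (H + S) after the given horizon."""
    H, S = 1.0, 0.0
    for _ in range(int(years / dt)):
        dH = -(incidence + general_mortality) * H
        dS = incidence * H - (general_mortality + excess_mortality) * S
        H, S = H + dH * dt, S + dS * dt
    return S / (H + S)
```

    As expected from the flow structure, prevalence rises with incidence and falls with the excess mortality of the disabled state.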

  10. Calculation of the relative efficiency of thermoluminescent detectors to space radiation

    International Nuclear Information System (INIS)

    Bilski, P.

    2011-01-01

    Thermoluminescent (TL) detectors are often used for measurements of radiation doses in space. Since space radiation is composed of a mixture of heavy charged particles and the relative TL efficiency depends on ionization density, the question arises: what is the relative efficiency of TLDs to the radiation present in space? In an attempt to answer this question, the relative TL efficiency of two types of lithium fluoride detectors for space radiation has been calculated, based on theoretical space spectra and experimental values of TL efficiency for ion beams. The TL efficiency of LiF:Mg,Ti detectors for radiation encountered in a typical low-Earth orbit was found to be close to unity, justifying the common application of these TLDs in space dosimetry. The TL efficiency of LiF:Mg,Cu,P detectors is significantly lower. It was found that shielding may have a significant influence on the relative response of TLDs, due to the changes it causes in the radiation spectrum. If TLDs are applied outside the Earth's magnetosphere, one should expect a lower relative efficiency than in low-Earth orbit.
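    The relative efficiency to a mixed field can be approximated as a dose-weighted average of the component efficiencies over the spectrum; a minimal sketch with invented numbers:

```python
def spectrum_weighted_efficiency(dose_fractions, efficiencies):
    """Dose-weighted mean relative TL efficiency over the components
    of a mixed radiation field: eta = sum(f_i * eta_i) / sum(f_i).
    Inputs are per-LET-bin dose contributions and the measured
    relative efficiencies for those bins (values illustrative)."""
    total = sum(dose_fractions)
    return sum(f * e for f, e in zip(dose_fractions, efficiencies)) / total
```

    Shielding changes the dose fractions (hardening or softening the LET spectrum), which is how it shifts the net relative response without changing any single-ion efficiency.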

  11. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion on the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers which are determined by the deformation parameter q. This result shows that the results given in much literature on the q-deformation distribution are inaccurate or incomplete. -- Highlights: ► A general discussion on calculating the statistical distribution from relations of creation, annihilation, and number operators. ► A systematic study of the statistical distributions corresponding to various q-deformation schemes. ► Arguing that many results of q-deformation distributions in the literature are inaccurate or incomplete
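    The Gentile distribution named above has a closed-form mean occupation number, ⟨n⟩ = 1/(e^x − 1) − (n_max + 1)/(e^{(n_max+1)x} − 1) with x = β(ε − μ); a small sketch verifying its two familiar limits (assuming x > 0):

```python
import math

def gentile_occupation(x, n_max):
    """Mean occupation number for Gentile statistics with maximum
    occupation n_max, where x = beta * (epsilon - mu) > 0.
    n_max = 1 recovers Fermi-Dirac; n_max -> infinity recovers
    Bose-Einstein."""
    return 1.0 / math.expm1(x) - (n_max + 1) / math.expm1((n_max + 1) * x)
```

    For n_max = 1 the two terms combine algebraically to 1/(e^x + 1), and for large n_max the second term vanishes, leaving the Bose–Einstein form 1/(e^x − 1).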

  12. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  13. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then an error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5 °C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20 to 40 °C. (author)
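    The temperature dependence of the radiation weighting factor Δ/(Δ + γ) can be reproduced from the Tetens-type expression for the saturation vapour pressure slope (the FAO-56 form; the psychrometric constant value used here is a typical assumption, not the paper's site value):

```python
import math

def radiation_weighting_factor(temp_c, gamma=0.066):
    """W = Delta / (Delta + gamma): the fraction of a net-radiation
    error that propagates into reference ET. Delta is the slope of the
    saturation vapour pressure curve (kPa/degC, FAO-56 Tetens formula);
    gamma is the psychrometric constant (kPa/degC, assumed typical)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))  # kPa
    delta = 4098.0 * es / (temp_c + 237.3) ** 2
    return delta / (delta + gamma)
```

    At 20 °C this gives W ≈ 0.69, rising to ≈ 0.86 at 40 °C, consistent with the quoted 65–85% range.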

  14. Method of Relative Magnitudes for Calculating Magnetic Fluxes in Electrical Machine

    Directory of Open Access Journals (Sweden)

    Oleg A.

    2018-03-01

    Full Text Available Introduction: The article presents the results of a study of a model of an asynchronous electric motor carried out by the author within the framework of the Priorities Research Program “Research and development in the priority areas of development of Russia’s scientific and technical complex for 2014–2020”. Materials and Methods: A model of an idealized asynchronous machine (with sinusoidal distribution of magnetic induction in the air gap) is used in vector control systems. It is impossible to create windings for this machine. The basis of the new calculation approach was the Conductivity of Teeth Contours Method, developed at the Electrical Machines Chair of the Moscow Power Engineering Institute (MPEI). Unlike that method, the author used not absolute values but relative magnitudes of magnetic fluxes. This solution fundamentally improves the method’s capabilities. The relative magnitudes of the magnetic fluxes of the teeth contours do not require additional consideration of the exact structure of the magnetic field of a tooth and the adjacent slots: these structures are identical for all the teeth of the machine and differ only in magnitude. The purpose of the calculations was not the traditional harmonic analysis of the magnetic induction distribution in the air gap of the machine, but a refinement of the equations of the electric machine model. Vector control researchers have used only the cos(θ) function as the value of the mutual magnetic coupling coefficient between the windings. Results: The author has developed a way to take into account the design of the windings of a real machine by using an imaginary measuring winding with the same winding design as a real phase winding. The imaginary winding can be placed in the position of any machine winding. The calculation of the relative magnetic fluxes of this winding helped to estimate the real values of the magnetic coupling coefficients between the windings, and to find the correction functions for the model of an idealized

  15. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  16. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to

  17. The Relative Importance of Random Error and Observation Frequency in Detecting Trends in Upper Tropospheric Water Vapor

    Science.gov (United States)

    Whiteman, David N.; Vermeesch, Kevin C.; Oman, Luke D.; Weatherhead, Elizabeth C.

    2011-01-01

    Recent published work assessed the amount of time to detect trends in atmospheric water vapor over the coming century. We address the same question and conclude that under the most optimistic scenarios and assuming perfect data (i.e., observations with no measurement uncertainty) the time to detect trends will be at least 12 years at approximately 200 hPa in the upper troposphere. Our times to detect trends are therefore shorter than those recently reported and this difference is affected by data sources used, method of processing the data, geographic location and pressure level in the atmosphere where the analyses were performed. We then consider the question of how instrumental uncertainty plays into the assessment of time to detect trends. We conclude that due to the high natural variability in atmospheric water vapor, the amount of time to detect trends in the upper troposphere is relatively insensitive to instrumental random uncertainty and that it is much more important to increase the frequency of measurement than to decrease the random error in the measurement. This is put in the context of international networks such as the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) and the Network for the Detection of Atmospheric Composition Change (NDACC) that are tasked with developing time series of climate quality water vapor data.
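    Detection times for trends in autocorrelated geophysical series are commonly estimated with the approximation of Weatherhead et al. (1998); a sketch of that formula (not necessarily the authors' exact computation), where σ_N is the noise standard deviation of the series and φ its lag-1 autocorrelation:

```python
def years_to_detect_trend(trend_per_year, noise_std, autocorr=0.0):
    """Approximate record length in years needed to detect a linear
    trend at the 95% level with 0.90 probability:
        n* = [(3.3 * sigma_N / |omega|) * sqrt((1+phi)/(1-phi))]^(2/3)
    (Weatherhead et al., 1998). Units of trend and noise must match."""
    factor = ((1.0 + autocorr) / (1.0 - autocorr)) ** 0.5
    return (3.3 * noise_std / abs(trend_per_year) * factor) ** (2.0 / 3.0)
```

    Increasing the measurement frequency reduces σ_N of the averaged series, which, given the high natural variability of upper-tropospheric water vapor, is the abstract's argument for prioritizing sampling frequency over instrumental random error.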

  18. CORRELATION OF FUNDUS CHANGES IN RELATION TO REFRACTIVE ERROR IN PATIENTS WITH MYOPIA- A CLINICAL PROSPECTIVE STUDY

    Directory of Open Access Journals (Sweden)

    Balasubramanian M. Manickavelu

    2018-01-01

    Full Text Available BACKGROUND The retina is unique among the complex elements of the central nervous system and the special senses: it may be readily viewed during life, and it is sufficiently transparent that alterations within and adjacent to it may be observed in vivo. The peripheral retina, owing to its thinness compared with the central part, its poorly developed retinal cells, the absence of large blood vessels, its relative insensitivity to light and its lower resistance to traction, forms a seat for various lesions that are potentially dangerous to vision. It is in myopia that we meet the most frequent and most obvious anomalies in the fundus, changes which bear some relation to the degree of myopia and appear to be connected with it either as cause or effect, or perhaps both. The aim of our study is to correlate fundus changes with refractive error in patients with myopia. MATERIALS AND METHODS In our study, 100 cases of myopic patients (-6D: 50 cases) were selected and a detailed evaluation was done. The history of refractive error included its duration and the age at which spectacles were first worn, the time of the last change of spectacles, family history of myopia, and history of other symptoms such as progressive loss of vision, defective vision related to day or night, sudden loss of vision, and flashes and floaters. The anterior segment was examined, followed by recording of the initial visual acuity, and the best corrected visual acuity was noted. IOP was measured in all cases using Schiotz tonometry. Axial length was measured in all cases. The fundus was examined with the direct ophthalmoscope, indirect ophthalmoscope, three-mirror lens and 90D lens; B-scan was done in a few cases. The media, disc, vessels, macula and the surrounding retina were examined. The periphery was examined with the indentation method. The various fundus features and pathological lesions in different degrees of myopia were noted. RESULTS Females were comparatively more affected.
Highest incidence was seen in the younger

  19. Patient safety incident reports related to traditional Japanese Kampo medicines: medication errors and adverse drug events in a university hospital for a ten-year period.

    Science.gov (United States)

    Shimada, Yutaka; Fujimoto, Makoto; Nogami, Tatsuya; Watari, Hidetoshi; Kitahara, Hideyuki; Misawa, Hiroki; Kimbara, Yoshiyuki

    2017-12-21

    Kampo medicine is traditional Japanese medicine, which originated in ancient traditional Chinese medicine, but was introduced and developed uniquely in Japan. Today, Kampo medicines are integrated into the Japanese national health care system. Incident reporting systems are currently being widely used to collect information about patient safety incidents that occur in hospitals. However, no investigations have been conducted regarding patient safety incident reports related to Kampo medicines. The aim of this study was to survey and analyse incident reports related to Kampo medicines in a Japanese university hospital to improve future patient safety. We selected incident reports related to Kampo medicines filed in Toyama University Hospital from May 2007 to April 2017, and investigated them in terms of medication errors and adverse drug events. Out of 21,324 total incident reports filed in the 10-year survey period, we discovered 108 Kampo medicine-related incident reports. However, five cases were redundantly reported; thus, the number of actual incidents was 103. Of those, 99 incidents were classified as medication errors (77 administration errors, 15 dispensing errors, and 7 prescribing errors), and four were adverse drug events, namely Kampo medicine-induced interstitial pneumonia. The Kampo medicine (crude drug) that was thought to induce interstitial pneumonia in all four cases was Scutellariae Radix, which is consistent with past reports. According to the incident severity classification system recommended by the National University Hospital Council of Japan, of the 99 medication errors, 10 incidents were classified as level 0 (an error occurred, but the patient was not affected) and 89 incidents were level 1 (an error occurred that affected the patient, but did not cause harm). Of the four adverse drug events, two incidents were classified as level 2 (patient was transiently harmed, but required no treatment), and two incidents were level 3b (patient was

  20. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscosity fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, the pseudo relative permeability is derived by using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternative method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as a function of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  1. Performance monitoring in the anterior cingulate is not all error related: expectancy deviation and the representation of action-outcome associations.

    Science.gov (United States)

    Oliveira, Flavio T P; McDonald, John J; Goodman, David

    2007-12-01

    Several converging lines of evidence suggest that the anterior cingulate cortex (ACC) is selectively involved in error detection or evaluation of poor performance. Here we challenge this notion by presenting event-related potential (ERP) evidence that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback and vice versa. These results suggest that performance monitoring in the ACC is not limited to error processing. We propose that the ACC acts as part of a more general performance-monitoring system that is activated by violations in expectancy. Further, we propose that the common observation of increased ACC activity elicited by negative events could be explained by an overoptimistic bias in generating expectations of performance. These results could shed light on neurobehavioral disorders, such as depression and mania, that are associated with alterations in performance monitoring and also in judgments of self-related events.

  2. Case-related factors affecting cutting errors of the proximal tibia in total knee arthroplasty assessed by computer navigation.

    Science.gov (United States)

    Tsukeoka, Tadashi; Tsuneizumi, Yoshikazu; Yoshino, Kensuke; Suzuki, Mashiko

    2018-05-01

    The aim of this study was to determine factors that contribute to bone cutting errors of conventional instrumentation for tibial resection in total knee arthroplasty (TKA) as assessed by an image-free navigation system. The hypothesis is that preoperative varus alignment is a significant contributory factor to tibial bone cutting errors. This was a prospective study of a consecutive series of 72 TKAs. The amount of the tibial first-cut errors with reference to the planned cutting plane in both coronal and sagittal planes was measured by an image-free computer navigation system. Multiple regression models were developed with the amount of tibial cutting error in the coronal and sagittal planes as dependent variables and sex, age, disease, height, body mass index, preoperative alignment, patellar height (Insall-Salvati ratio) and preoperative flexion angle as independent variables. Multiple regression analysis showed that sex (male gender) (R = 0.25, p = 0.047) and preoperative varus alignment (R = 0.42, p = 0.001) were positively associated with varus tibial cutting errors in the coronal plane. In the sagittal plane, none of the independent variables was significant. When performing TKA in varus deformity, careful confirmation of the bone cutting surface should be performed to avoid varus alignment. The results of this study suggest technical considerations that can help a surgeon achieve more accurate component placement. IV.
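A multiple regression of the kind used in this study can be sketched as follows. Everything below is synthetic and invented for illustration (predictors, coefficients, data); it is not the study's data or model:

```python
import numpy as np

# Minimal sketch of a multiple regression like the one described above.
# All data and coefficients are synthetic (hypothetical), not the study's.
rng = np.random.default_rng(0)
n = 72                                      # same sample size as the study
varus = rng.uniform(0.0, 20.0, n)           # preoperative varus alignment (deg)
male = rng.integers(0, 2, n).astype(float)  # sex coded 0 = female, 1 = male

# Assumed "true" relationship used to generate noise-free synthetic data:
cut_error = 0.5 + 0.08 * varus + 0.4 * male

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), varus, male])
coef, *_ = np.linalg.lstsq(X, cut_error, rcond=None)
print(coef)  # recovers [0.5, 0.08, 0.4] because the data are noise-free
```

With noise-free synthetic data the fitted coefficients equal the generating ones exactly, which makes the mechanics of the method easy to check.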

  3. Characteristics of patients making serious inhaler errors with a dry powder inhaler and association with asthma-related events in a primary care setting

    Science.gov (United States)

    Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.

    2016-01-01

    Abstract Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934
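The odds ratios and confidence intervals reported above can be reproduced in form with the standard Wald method on a 2×2 table; the counts below are hypothetical, not iHARP data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
                 outcome+  outcome-
    exposed         a         b
    unexposed       c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/20 among exposed, 5/40 among unexposed.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(f"OR {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # OR 4.00 (95% CI, 1.20-13.28)
```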

  4. On superactivation of one-shot quantum zero-error capacity and the related property of quantum measurements

    DEFF Research Database (Denmark)

    Shirokov, M. E.; Shulman, Tatiana

    2014-01-01

    We give a detailed description of a low-dimensional quantum channel (input dimension 4, Choi rank 3) demonstrating the symmetric form of superactivation of one-shot quantum zero-error capacity. This property means appearance of a noiseless (perfectly reversible) subchannel in the tensor square of a channel having no noiseless subchannels. Then we describe a quantum channel with an arbitrary given level of symmetric superactivation (including the infinite value). We also show that superactivation of one-shot quantum zero-error capacity of a channel can be reformulated in terms of quantum measurement…

  5. Calculating the acidity of silanols and related oxyacids in aqueous solution

    Science.gov (United States)

    Tossell, John A.; Sahai, Nita

    2000-12-01

    can be correlated with underbondings or local electrostatic energies for the monomers, partially explaining the success of phenomenological models in correlating surface pKa of oxides with bond strengths. Accurate evaluation of ΔHd,gas requires calculations with larger basis sets, inclusion of electron correlation effects, and corrections for vibrational, rotational, and translational contributions. Density functional and 2nd-order Moller-Plesset results for deprotonation enthalpies match well against higher-level G2(MP2) calculations. Direct calculation of solution pKa without resorting to correlations is presently impossible by ab initio methods because of inaccurate methods to account for solvation. Inclusion of explicit water molecules around the monomer immersed in a self-consistent reaction field (SCRF) provides the most accurate absolute hydration enthalpy (ΔHhyd) values, but IPCM values for the bare acid (HA) and anion (A−) give reasonable values of ΔHhyd,A− − ΔHhyd,HA with much smaller computational expense. Polymers of silicate are used as model systems that begin to approach solid silica, known to be much more acidic than its monomer, Si(OH)4. Polymerization of silicate or phosphate reduces their gas-phase ΔEd,gas relative to the monomers; differences in the electrostatic potential at H+, electronic relaxation and geometric relaxation energies all contribute to the effect. Internal H-bonding in the dimers results in unusually small ΔEd,gas, which is partially counteracted by a reduced ΔHhyd. Accurate representation of hydration for oligomers persists as a fundamental problem in determining their solution pKa, because of the prohibitive cost involved in directly modeling interactions between many water molecules and the species of interest. Fortunately, though, the local contribution to the difference in hydration energy between the neutral polymeric acid and its anion seems to stabilize for a small number of explicit water
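The step from a computed aqueous deprotonation free energy to a pKa uses the standard thermodynamic relation pKa = ΔG_aq/(RT ln 10). A minimal sketch with an illustrative (hypothetical) ΔG value:

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # standard temperature, K

def pka_from_dg(dg_aq_kj_per_mol):
    """pKa = dG_aq / (RT ln 10), the standard relation linking an
    aqueous deprotonation free energy to an acidity constant."""
    return dg_aq_kj_per_mol / (R * T * math.log(10))

# Illustrative (hypothetical) value: dG_aq = 40 kJ/mol.
print(round(pka_from_dg(40.0), 2))  # ≈ 7.01
```

The relation also shows why solvation errors matter so much here: at 298 K an error of only ~5.7 kJ/mol in ΔG_aq shifts the computed pKa by a full unit.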

  6. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
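The averaging in §600.510-12 is a production-weighted harmonic mean rather than an arithmetic mean. A minimal sketch with hypothetical production figures:

```python
def harmonic_fleet_mpg(production, mpg):
    """Production-weighted harmonic-mean fuel economy, the form used for
    CAFE-style fleet averages: total units / sum(units_i / mpg_i)."""
    return sum(production) / sum(n / m for n, m in zip(production, mpg))

# Hypothetical fleet: 100,000 units at 20 mpg and 100,000 units at 40 mpg.
print(round(harmonic_fleet_mpg([100_000, 100_000], [20.0, 40.0]), 2))
# 26.67 -- below the 30 mpg arithmetic mean, as the harmonic mean always is
```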

  7. On-ward participation of a hospital pharmacist in a Dutch intensive care unit reduces prescribing errors and related patient harm: an intervention study

    NARCIS (Netherlands)

    Klopotowska, J.E.; Kuiper, R.; van Kan, H.J.; de Pont, A.C.; Dijkgraaf, M.G.; Lie-A-Huen, L.; Vroom, M.B.; Smorenburg, S.M.

    2010-01-01

    Introduction: Patients admitted to an intensive care unit (ICU) are at high risk for prescribing errors and related adverse drug events (ADEs). An effective intervention to decrease this risk, based on studies conducted mainly in North America, is on-ward participation of a clinical pharmacist in an

  8. On-ward participation of a hospital pharmacist in a Dutch intensive care unit reduces prescribing errors and related patient harm: an intervention study

    NARCIS (Netherlands)

    Klopotowska, Joanna E.; Kuiper, Rob; van Kan, Hendrikus J.; de Pont, Anne-Cornelie; Dijkgraaf, Marcel G.; Lie-A-Huen, Loraine; Vroom, Margreeth B.; Smorenburg, Susanne M.

    2010-01-01

    Patients admitted to an intensive care unit (ICU) are at high risk for prescribing errors and related adverse drug events (ADEs). An effective intervention to decrease this risk, based on studies conducted mainly in North America, is on-ward participation of a clinical pharmacist in an ICU team. As

  9. Segmentation error and macular thickness measurements obtained with spectral-domain optical coherence tomography devices in neovascular age-related macular degeneration

    Directory of Open Access Journals (Sweden)

    Moosang Kim

    2013-01-01

    Full Text Available Purpose: To evaluate frequency and severity of segmentation errors of two spectral-domain optical coherence tomography (SD-OCT devices and error effect on central macular thickness (CMT measurements. Materials and Methods: Twenty-seven eyes of 25 patients with neovascular age-related macular degeneration, examined using the Cirrus HD-OCT and Spectralis HRA + OCT, were retrospectively reviewed. Macular cube 512 × 128 and 5-line raster scans were performed with the Cirrus and 512 × 25 volume scans with the Spectralis. Frequency and severity of segmentation errors were compared between scans. Results: Segmentation error frequency was 47.4% (baseline, 40.7% (1 month, 40.7% (2 months, and 48.1% (6 months for the Cirrus, and 59.3%, 62.2%, 57.8%, and 63.7%, respectively, for the Spectralis, differing significantly between devices at all examinations (P < 0.05, except at baseline. Average error score was 1.21 ± 1.65 (baseline, 0.79 ± 1.18 (1 month, 0.74 ± 1.12 (2 months, and 0.96 ± 1.11 (6 months for the Cirrus, and 1.73 ± 1.50, 1.54 ± 1.35, 1.38 ± 1.40, and 1.49 ± 1.30, respectively, for the Spectralis, differing significantly at 1 month and 2 months (P < 0.02. Automated and manual CMT measurements by the Spectralis were larger than those by the Cirrus. Conclusions: The Cirrus HD-OCT had a lower frequency and severity of segmentation error than the Spectralis HRA + OCT. SD-OCT error should be considered when evaluating retinal thickness.

  10. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  11. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP), were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
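The local minima error can be illustrated on a toy non-convex objective: a deterministic gradient method converges to different minima depending on its starting point, and the difference in objective values between those minima is the local minima error. The function and step sizes below are invented for illustration:

```python
# Toy illustration of the local-minima error: plain gradient descent on a
# non-convex (tilted double-well) objective ends in different minima
# depending on where it starts.
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x   # hypothetical objective

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3  # derivative of f

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_right = descend(2.0)   # ends in the shallower (local) minimum, x ~ +0.96
x_left = descend(-2.0)   # ends in the deeper (global) minimum,  x ~ -1.04
print(x_left, x_right, f(x_right) - f(x_left))  # the gap is the local-minima error
```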

  12. System calculations related to the accident at Three-Mile Island using TRAC

    International Nuclear Information System (INIS)

    Ireland, J.R.

    1980-01-01

    The Three Mile Island nuclear plant (Unit 2) was modeled using the Transient Reactor Analysis Code (TRAC-P1A) and a base case calculation, which simulated the initial part of the accident that occurred on March 28, 1979, was performed. In addition to the base case calculation, several parametric calculations were performed in which a single hypothetical change was made in the system conditions, such as assuming the high pressure injection (HPI) system operated as designed rather than as in the accident. Some of the important system parameter comparisons for the base case as well as some of the parametric case results are presented

  13. Online adaptation of a c-VEP Brain-computer Interface (BCI) based on error-related potentials and unsupervised learning.

    Science.gov (United States)

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
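Bit rates like those reported relate accuracy and target count through the standard Wolpaw information-transfer-rate formula. The 32-target layout assumed below is an illustration, not necessarily this paper's speller:

```python
import math

def wolpaw_bits_per_selection(n_targets, accuracy):
    """Standard Wolpaw information-transfer-rate formula (bits/selection)."""
    p, n = accuracy, n_targets
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Hypothetical speller layout: 32 targets at 96% accuracy.
bits = wolpaw_bits_per_selection(32, 0.96)
print(round(bits, 2))  # ≈ 4.56 bits per selection
# At roughly 31-32 selections/min this is on the order of 144 bit/min.
```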

  14. Optimal threshold of error decision related to non-uniform phase distribution QAM signals generated from MZM based on OCS

    Science.gov (United States)

    Han, Xifeng; Zhou, Wen

    2018-03-01

    Optical vector radio-frequency (RF) signal generation based on optical carrier suppression (OCS) in one Mach-Zehnder modulator (MZM) can realize frequency doubling. In order to match the phase or amplitude of the recovered quadrature amplitude modulation (QAM) signal, phase or amplitude pre-coding is necessary on the transmitter side. The detected QAM signals usually have a non-uniform phase distribution after square-law detection at the photodiode because of the imperfect characteristics of the optical and electrical devices. We propose to use an optimal error-decision threshold for this non-uniform phase distribution to reduce the bit error rate (BER). By employing this scheme, the BER of a 16 Gbaud (32 Gbit/s) quadrature-phase-shift-keying (QPSK) millimeter wave signal at 36 GHz is improved from 1 × 10^-3 to 1 × 10^-4 at -4.6 dBm input power into the photodiode.
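Optimal-threshold decision can be sketched on a much simpler one-dimensional model: two equiprobable signal levels with (possibly unequal) Gaussian noise, minimizing the average error probability by a grid scan. The levels and noise widths below are hypothetical and far simpler than the QAM constellation in the paper:

```python
import math

def q(x):  # Gaussian tail probability P(Z > x)
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber(t, mu0, s0, mu1, s1):
    """Error probability of threshold t for equiprobable levels
    N(mu0, s0) and N(mu1, s1): average of the two tail errors."""
    return 0.5 * (q((t - mu0) / s0) + q((mu1 - t) / s1))

grid = [i * 0.01 for i in range(401)]   # scan thresholds t in [0, 4]

# Symmetric noise: the optimum is the midpoint between the levels.
t_opt = min(grid, key=lambda t: ber(t, 0.0, 1.0, 4.0, 1.0))
print(t_opt)  # 2.0

# Unequal noise: the optimum shifts toward the quieter level (about 1.19).
t_skew = min(grid, key=lambda t: ber(t, 0.0, 0.5, 4.0, 1.5))
print(t_skew)
```

The shift away from the naive midpoint under asymmetric noise is the same qualitative effect that motivates moving the decision threshold for a non-uniform phase distribution.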

  15. Calculations on displacement damage and its related parameters for heavy ion bombardment in reactor materials

    International Nuclear Information System (INIS)

    Sone, Kazuho; Shiraishi, Kensuke

    1975-04-01

    The depth distribution of displacement damage expressed in displacements per atom (DPA) in reactor materials such as Mo, Nb, V, Fe and Ni bombarded by energetic nitrogen, argon and self ions with incident energy below 2 MeV was calculated following the theory developed by Lindhard and co-workers for the partition of energy as an energetic ion slows down. In this calculation, energy loss due to electron excitation was taken into account for the atomic collision cascade after the primary knock-on process. Some parameters indispensable for the calculation, such as energy loss rate, damage efficiency, projected range and its straggling, were tabulated as a function of incident ion energy from 20 keV to 2 MeV. The damage and parameters were also calculated for 2 MeV nickel ions bombarding Fe targets. In this case, the DPA value is overestimated by 40--75% in a calculation disregarding electronic energy loss for primary knock-on atoms. The formula proposed in this report is significant for calculations on displacement damage produced by heavy ion bombardment as a simulation of high-fluence fast neutron damage. (auth.)
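The report's Lindhard-based treatment is elaborate, but the role of the damage energy (PKA energy minus electronic losses) is visible in the simpler, standard NRT estimate of displacements per cascade. The 40 eV displacement threshold used below is a commonly quoted value for Fe, given here only for illustration:

```python
def nrt_displacements(damage_energy_ev, e_d_ev):
    """Standard NRT (Norgett-Robinson-Torrens) estimate of Frenkel pairs
    produced by a PKA whose *damage* energy (i.e. with electronic losses
    already removed, e.g. via a Lindhard-type partition) is given."""
    if damage_energy_ev < e_d_ev:
        return 0.0
    if damage_energy_ev < 2.5 * e_d_ev:
        return 1.0
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)

# Illustrative: a 10 keV damage-energy cascade in Fe (E_d ~ 40 eV).
print(nrt_displacements(10_000.0, 40.0))  # 100.0 displacements
```

Because the estimate is linear in the damage energy, ignoring electronic losses for the knock-on atoms inflates it in direct proportion, consistent with the 40-75% overestimate noted above.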

  16. Calculations on displacement damage and its related parameters for heavy ion bombardment in reactor materials

    Energy Technology Data Exchange (ETDEWEB)

    Sone, K; Shiraishi, K

    1975-04-01

    The depth distribution of displacement damage expressed in displacements per atom (DPA) in reactor materials such as Mo, Nb, V, Fe and Ni bombarded by energetic nitrogen, argon and self ions with incident energy below 2 MeV was calculated following the theory developed by Lindhard and co-workers for the partition of energy as an energetic ion slows down. In this calculation, energy loss due to electron excitation was taken into account for the atomic collision cascade after the primary knock-on process. Some parameters indispensable for the calculation, such as energy loss rate, damage efficiency, projected range and its straggling, were tabulated as a function of incident ion energy from 20 keV to 2 MeV. The damage and parameters were also calculated for 2 MeV nickel ions bombarding Fe targets. In this case, the DPA value is overestimated by 40--75% in a calculation disregarding electronic energy loss for primary knock-on atoms. The formula proposed in this report is significant for calculations on displacement damage produced by heavy ion bombardment as a simulation of high-fluence fast neutron damage.

  17. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and its taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, resource/task management, excessive authority gradient, excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  18. Precise calculation of the transmission coefficient of a potential barrier. Study of the error in the B K W approximation; Calcul exact du coefficient de transmission d'une barriere de potentiel. Etude de l'erreur de l'approximation B K W

    Energy Technology Data Exchange (ETDEWEB)

    Jamet, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1964-07-01

    Following on from work started in a previous report, the author carries out, for a few examples, the calculation of the transmission coefficient T using exact methods, and then deduces from this the error of the B K W method. The calculations are carried out for values of T ranging down to 10^-200. The use of modern computers makes it possible to obtain values of T to eight decimal places in a few seconds, so the practical advantage of the B K W approximation appears considerably reduced. The author also gives a method which may be used for an exact calculation of the energy levels of a potential well. (author)
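The kind of comparison made in the report can be sketched for a rectangular barrier, where the exact transmission coefficient has a closed form and the leading-order B K W (WKB) result is a bare exponential. Units ħ = m = 1 and the parameters are illustrative:

```python
import math

# Exact vs leading-order WKB transmission through a rectangular barrier
# of height V0 and width a, at energy E < V0 (units hbar = m = 1).
def t_exact(e, v0, a):
    k = math.sqrt(2.0 * (v0 - e))
    return 1.0 / (1.0 + v0**2 * math.sinh(k * a)**2 / (4.0 * e * (v0 - e)))

def t_wkb(e, v0, a):
    k = math.sqrt(2.0 * (v0 - e))
    return math.exp(-2.0 * k * a)   # no prefactor in the leading order

e, v0, a = 0.5, 1.0, 5.0
print(t_wkb(e, v0, a), t_exact(e, v0, a))
# For a thick barrier the ratio tends to 16*E*(V0-E)/V0**2 = 4 here:
# leading-order WKB errs by a constant prefactor, not in the exponent.
```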

  19. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  20. Calculated Phase Relations in the System KFMASH Between 6 and 16 GPa

    Science.gov (United States)

    Massonne, H.; Brandelik, A.

    2005-12-01

    To better understand the modal compositions of deeply buried metagranitoids and metapelites, phase relations in the model system K2O-FeO-MgO-Al2O3-SiO2-H2O (KFMASH) with SiO2 in excess were calculated applying thermodynamic principles. We used the software package PTGIBBS, published data, and thermodynamic data (e.g. for phase egg (AlSiO3OH) and K-hollandite (KAlSi3O8)) newly derived on the basis of former high-pressure (HP) experiments. Non-ideal mixing was considered for the solid solution series of garnet (components: pyrope, majorite, almandine) and potassic white mica (components: muscovite, MgAl-celadonite, FeAl-celadonite). For phases such as HP-clinoenstatite ((Mg,Fe)SiO3), Si-spinel ((Fe,Mg)2SiO4), and beta phase ((Mg,Fe)2SiO4), only binary solid solutions, assuming ideal mixing, were taken into account. On the basis of the above data, we constructed petrogenetic grids mainly for the P-T range 6 to 16 GPa and 600 to 1600 °C. Typical features of these grids are, for instance, the disappearance of K-cymrite (KAlSi3O8·H2O) with rising pressure close to 10 GPa and the occurrence of phase egg above 12 GPa. In KMASH potassic white mica reacts with OH-topaz at about 11 GPa (1000-1200 °C) to form pyrope + K-hollandite. The content of majorite component in pyrope is less than 1 mol%, which is systematically so for all garnets coexisting with an Al-silicate at least up to 16 GPa. Potassic white mica, which is virtually pure MgAl-celadonite, finally breaks down at pressures close to 12 GPa. Decomposition assemblages are K-hollandite + HP-clinoenstatite + H2O (T free) garnet + Al-silicate. The latter phase is either OH-topaz (Al2SiO4(OH)2) or phase egg or kyanite, also depending on the availability of H2O. Metagranitoids should be composed of stishovite + K-hollandite + majorite-bearing garnet + (enstatite-rich) clinopyroxene. Si-spinel is an important additional phase in this assemblage. This phase shows increasing amounts on approaching 16 GPa.

  1. Development of project management data calculation models relating to dismantling of nuclear facilities. Contract research

    Energy Technology Data Exchange (ETDEWEB)

    Sukegawa, Takenori; Ohshima, Soichiro; Shiraishi, Kunio; Yanagihara, Satoshi [Department of Decommissioning and Waste Management, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai Ibaraki (Japan)

    1999-02-01

    Labor-hours necessary for dismantling activities are generally estimated based on experience, for example, as a form of unit productivity factors such as the relationship between labor-hours and weight of components dismantled which were obtained by actual dismantling activities. The project management data calculation models together with unit productivity factors for basic dismantling work activities were developed by analyzing the data obtained from the Japan Power Demonstration Reactor (JPDR) dismantling project, which will be applicable to estimation of labor-hours in various dismantling conditions. Typical work breakdown structures were also prepared by categorizing repeatable basic dismantling work activities for effective planning of dismantling activities. The labor-hours for dismantling the JPDR components and structures were calculated by using the code system for management of reactor decommissioning (COSMARD), in which the work breakdown structures and the calculation models were contained. It was confirmed that the labor-hours could be easily estimated by COSMARD through the calculations. This report describes the labor-hour calculation models and application of these models to COSMARD. (author)
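Estimation from unit productivity factors reduces to multiplying dismantled weight by a labor-hours-per-unit-weight factor and summing over activities. The factors and inventory below are hypothetical placeholders, not JPDR or COSMARD data:

```python
# Minimal sketch of labor-hour estimation from unit productivity factors
# (labor-hours per tonne dismantled). All numbers are hypothetical.
unit_productivity = {        # labor-hours per tonne, by activity type
    "pipe_cutting": 120.0,
    "vessel_segmentation": 300.0,
    "concrete_demolition": 40.0,
}
inventory_tonnes = {         # weight to be dismantled, by activity type
    "pipe_cutting": 15.0,
    "vessel_segmentation": 60.0,
    "concrete_demolition": 500.0,
}
total = sum(unit_productivity[k] * inventory_tonnes[k] for k in inventory_tonnes)
print(total)  # 1800 + 18000 + 20000 = 39800 labor-hours
```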

  2. Relational Reasoning about Numbers and Operations--Foundation for Calculation Strategy Use in Multi-Digit Multiplication and Division

    Science.gov (United States)

    Schulz, Andreas

    2018-01-01

    Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…

  3. Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, Daniel I. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2010-03-15

    The Savannah River Site disposes of low-activity radioactive waste within subsurface-engineered facilities. One of the tools used to establish the capacity of a given site to safely store radioactive waste (i.e., that a site does not exceed its Waste Acceptance Criteria) is the Performance Assessment (PA). The objective of this document is to provide the geochemical values for the PA calculations. This work is being conducted as part of the on-going maintenance program that permits the PA to periodically update existing calculations when new data become available.

  4. Energy-depth relation of electrons in bulk targets by Monte-Carlo calculations

    International Nuclear Information System (INIS)

    Gaber, M.; Fitting, H.J.

    1984-01-01

    Monte-Carlo calculations are used to calculate the energy of penetrating electrons as a function of the depth in thick targets of Ti, Fe, Cu, As, In, and Au. It is shown that the mean energy ratio Ē(z)/E0 decays exponentially with depth z and depends on the backscattering coefficient η_B of the bulk material and the maximum range R(E0) of the primary electrons with initial energy E0. Thereby a normalized plot of Ē/E0 as a function of the reduced depth z/R becomes possible. (author)
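
    A minimal sketch of the normalized energy-depth relation described above, assuming a simple exponential form in the reduced depth z/R with an illustrative decay constant (in the paper the decay depends on the material's backscattering coefficient η_B, which is not modeled here):

    ```python
    import math

    def mean_energy_ratio(z, max_range, k=2.0):
        """Mean energy ratio E_bar(z)/E0, assumed to decay exponentially with
        the reduced depth z/R.  The constant k is a hypothetical placeholder
        for the material dependence (via the backscattering coefficient)."""
        return math.exp(-k * z / max_range)

    # Ratio is 1 at the surface and falls monotonically toward the maximum range.
    for depth in (0.0, 0.25, 0.5, 1.0):
        print(depth, mean_energy_ratio(depth, max_range=1.0))
    ```

    The practical point of the exponential form is exactly the normalized plot the abstract mentions: once z is scaled by R(E0), curves for different initial energies collapse onto one.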

  5. Calculation of the phonon density of states and related thermodynamic properties for trigonal selenium

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Alldredge, G. P.; McMurry, H. L.

    1983-01-01

    The phonon density of states for trigonal selenium has been calculated on the basis of a short-range force model giving good overall agreement with experimental room-temperature phonon dispersion data. A qualitative comparison with an experimental determination of the phonon density of states shows similarities in the gross features, but the experimental data lack many of the finer details shown by the theoretical results, due to resolution effects. The lattice dynamical contribution to the heat capacity CV is calculated and is found to be in good agreement with experimental determinations of Cp after…

  6. Electronic, vibrational and related properties of group IV metal oxides by ab initio calculations

    International Nuclear Information System (INIS)

    Leite Alves, H.W.; Silva, C.C.; Lino, A.T.; Borges, P.D.; Scolfaro, L.M.R.; Silva, E.F. da

    2008-01-01

    We present our theoretical results for the structural, electronic, vibrational and optical properties of MO2 (M = Sn, Zr, Hf and Ti) obtained by first-principles calculations. Relativistic effects are demonstrated to be important for a realistic description of the detailed structure of the electronic frequency-dependent dielectric function, as well as of the carrier effective masses. Based on our results, we find that the main contribution to the high dielectric-constant values calculated for these oxides arises from their vibrational properties, and that the vibrational static dielectric constant decreases with increasing pressure.

  7. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted prescription errors. … Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases.

  8. Mathematics Anxiety and Mathematics Self-Efficacy in Relation to Medication Calculation Performance in Nurses

    Science.gov (United States)

    Melius, Joyce

    2012-01-01

    The purpose of this study is to identify and analyze the relationships that exist between mathematics anxiety and nurse self-efficacy for mathematics, and the medication calculation performance of acute care nurses. This research used a quantitative correlational research design and involved a sample of 84 acute care nurses, LVNs and RNs, from a…

  9. Hydraulic Calculations Relating to the Flooding and Draining of the Roman Colosseum for Naumachiae

    OpenAIRE

    Crapper, Martin

    2007-01-01

    This report includes full details of the calculations used in determining flows into and out of the Colosseum. It should be read in conjunction with the published paper in the Proceedings of ICE Civil Engineering 160 November 2007 Pages 184–191 Paper 900019.

  10. First and second chance fission calculations for actinides and related topics

    International Nuclear Information System (INIS)

    Maino, G.; Menapace, E.; Motta, M.; Ventura, A.

    1980-01-01

    First and second chance contributions to neutron-induced fission cross sections in an energy range of interest for reactor applications (E_n ≤ 13 MeV) were obtained by extensive and consistent calculations for 241Am; moreover, a simplified semiempirical approach was applied to 235U and 239Pu.

  11. Harris functional and related methods for calculating total energies in density-functional theory

    International Nuclear Information System (INIS)

    Averill, F.W.; Painter, G.S.

    1990-01-01

    The simplified energy functional of Harris has given results of useful accuracy for systems well outside the limits of weakly interacting fragments for which the method was originally proposed. In the present study, we discuss the source of the frequent good agreement of the Harris energy with full Kohn-Sham self-consistent results. A procedure is described for extending the applicability of the scheme to more strongly interacting systems by going beyond the frozen-atom fragment approximation. A gradient-force expression is derived, based on the Harris functional, which accounts for errors in the fragment charge representation. Results are presented for some diatomic molecules, illustrating the points of this study

  12. Calculation of ionization with an error estimate

    International Nuclear Information System (INIS)

    Klar, H.; Konovalov, D.A.; McCarthy, I.E.

    1993-09-01

    In a three-body model for ionization of an atom by electron impact, the authors present a formulation with two terms. One is an approximation; the other is a correction to the approximation whose form shows how it can be minimized by appropriate choices of initial- and final-state vectors. The approximation is compared with experiment, and the correction term is approximated and discussed. 16 refs., 1 fig

  13. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  14. Dissociated roles of the anterior cingulate cortex in reward and conflict processing as revealed by the feedback error-related negativity and N200.

    Science.gov (United States)

    Baker, Travis E; Holroyd, Clay B

    2011-04-01

    The reinforcement learning theory of the error-related negativity (ERN) holds that the impact of reward signals carried by the midbrain dopamine system modulates activity of the anterior cingulate cortex (ACC), alternatively disinhibiting and inhibiting the ACC following unpredicted error and reward events, respectively. According to a recent formulation of the theory, activity that is intrinsic to the ACC produces a component of the event-related brain potential (ERP) called the N200, and following unpredicted rewards, the N200 is suppressed by extrinsically applied positive dopamine reward signals, resulting in an ERP component called the feedback-ERN (fERN). Here we demonstrate that, despite extensive spatial and temporal overlap between the two ERP components, the functional processes indexed by the N200 (conflict) and the fERN (reward) are dissociable. These results point toward avenues for future investigation.

  15. Insight and Lessons Learned on Organizational Factors and Safety Culture from the Review of Human Error-related Events of NPPs in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Tae; Lee, Dhong Hoon; Choi, Young Sung [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2014-08-15

    Event investigation is one of the key means of enhancing nuclear safety, deriving effective measures and preventing recurrences. However, organizational factors and safety culture are difficult to analyze. This paper reviews human error-related events from the perspectives of organizational factors and safety culture, and derives insights and lessons learned for developing the regulatory infrastructure of plant oversight on safety culture.

  16. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study

    Science.gov (United States)

    Greenberg, Tsafrir; Chase, Henry W.; Almeida, Jorge R.; Stiffler, Richelle; Zevallos, Carlos R.; Aslam, Haris A.; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G.; Oquendo, Maria A.; McGrath, Patrick J.; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H.; Phillips, Mary L.

    2016-01-01

    Objective Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error-(discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. Method A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Results Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. Conclusions The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. 
These findings help elucidate neural mechanisms of anhedonia, as a step toward

  17. Insight and Lessons Learned on Organizational Factors and Safety Culture from the Review of Human Error-related Events of NPPs in Korea

    International Nuclear Information System (INIS)

    Kim, Ji Tae; Lee, Dhong Hoon; Choi, Young Sung

    2014-01-01

    Event investigation is one of the key means of enhancing nuclear safety, deriving effective measures and preventing recurrences. However, organizational factors and safety culture are difficult to analyze. This paper reviews human error-related events from the perspectives of organizational factors and safety culture, and derives insights and lessons learned for developing the regulatory infrastructure of plant oversight on safety culture

  18. Moderation of the Relationship Between Reward Expectancy and Prediction Error-Related Ventral Striatal Reactivity by Anhedonia in Unmedicated Major Depressive Disorder: Findings From the EMBARC Study.

    Science.gov (United States)

    Greenberg, Tsafrir; Chase, Henry W; Almeida, Jorge R; Stiffler, Richelle; Zevallos, Carlos R; Aslam, Haris A; Deckersbach, Thilo; Weyandt, Sarah; Cooper, Crystal; Toups, Marisa; Carmody, Thomas; Kurian, Benji; Peltier, Scott; Adams, Phillip; McInnis, Melvin G; Oquendo, Maria A; McGrath, Patrick J; Fava, Maurizio; Weissman, Myrna; Parsey, Ramin; Trivedi, Madhukar H; Phillips, Mary L

    2015-09-01

    Anhedonia, disrupted reward processing, is a core symptom of major depressive disorder. Recent findings demonstrate altered reward-related ventral striatal reactivity in depressed individuals, but the extent to which this is specific to anhedonia remains poorly understood. The authors examined the effect of anhedonia on reward expectancy (expected outcome value) and prediction error- (discrepancy between expected and actual outcome) related ventral striatal reactivity, as well as the relationship between these measures. A total of 148 unmedicated individuals with major depressive disorder and 31 healthy comparison individuals recruited for the multisite EMBARC (Establishing Moderators and Biosignatures of Antidepressant Response in Clinical Care) study underwent functional MRI during a well-validated reward task. Region of interest and whole-brain data were examined in the first- (N=78) and second- (N=70) recruited cohorts, as well as the total sample, of depressed individuals, and in healthy individuals. Healthy, but not depressed, individuals showed a significant inverse relationship between reward expectancy and prediction error-related right ventral striatal reactivity. Across all participants, and in depressed individuals only, greater anhedonia severity was associated with a reduced reward expectancy-prediction error inverse relationship, even after controlling for other symptoms. The normal reward expectancy and prediction error-related ventral striatal reactivity inverse relationship concords with conditioning models, predicting a shift in ventral striatal responding from reward outcomes to reward cues. This study shows, for the first time, an absence of this relationship in two cohorts of unmedicated depressed individuals and a moderation of this relationship by anhedonia, suggesting reduced reward-contingency learning with greater anhedonia. 
These findings help elucidate neural mechanisms of anhedonia, as a step toward identifying potential biosignatures

  19. Relationship between atomically related core levels and ground state properties of solids: first-principles calculations

    Czech Academy of Sciences Publication Activity Database

    Vackář, Jiří; Šipr, Ondřej; Šimůnek, Antonín

    2008-01-01

    Roč. 77, č. 4 (2008), 045112/1-045112/6 ISSN 1098-0121 R&D Projects: GA AV ČR IAA100100514; GA AV ČR(CZ) IAA100100637 Institutional research plan: CEZ:AV0Z10100520; CEZ:AV0Z10100521 Keywords : core levels * ab-initio calculations * electronic states * ground state properties Subject RIV: BE - Theoretical Physics Impact factor: 3.322, year: 2008

  20. Development of vicarious trial-and-error behavior in odor discrimination learning in the rat: relation to hippocampal function?

    Science.gov (United States)

    Hu, D; Griesbach, G; Amsel, A

    1997-06-01

    Previous work from our laboratory has suggested that hippocampal electrolytic lesions result in a deficit in simultaneous, black-white discrimination learning and reduce the frequency of vicarious trial-and-error (VTE) at a choice-point. VTE is a term Tolman used to describe the rat's conflict-like behavior, moving its head from one stimulus to the other at a choice point, and has been proposed as a major nonspatial feature of hippocampal function in both visual and olfactory discrimination learning. Simultaneous odor discrimination and VTE behavior were examined at three different ages. The results were that 16-day-old pups made fewer VTEs and learned much more slowly than 30- and 60-day-olds, a finding in accord with levels of hippocampal maturity in the rat.

  1. Calculation of the yearly energy performance of heating systems based on the European Building Energy Directive and related CEN Standards

    DEFF Research Database (Denmark)

    Olesen, Bjarne W.; de Carli, Michele

    2011-01-01

    According to the Energy Performance of Buildings Directive (EPBD), all new European buildings (residential, commercial, industrial, etc.) must since 2006 have an energy declaration based on the calculated energy performance of the building, including heating, ventilating, cooling and lighting systems. This energy declaration must refer to the primary energy or CO2 emissions. The European Organization for Standardization (CEN) has prepared a series of standards for energy performance calculations for buildings and systems. This paper presents the related standards for heating systems. The relevant… …–20% of the building energy demand. The additional loss depends on the type of heat emitter, type of control, pump and boiler. Keywords: Heating systems; CEN standards; Energy performance; Calculation methods

  2. Pre-Departure Clearance (PDC): An Analysis of Aviation Safety Reporting System Reports Concerning PDC Related Errors

    Science.gov (United States)

    Montalyo, Michael L.; Lebacqz, J. Victor (Technical Monitor)

    1994-01-01

    Airlines operating in the United States are required to operate under instrument flight rules (IFR). Typically, a clearance is issued via voice transmission from clearance delivery at the departing airport. In 1990, the Federal Aviation Administration (FAA) began deployment of the Pre-Departure Clearance (PDC) system at 30 U.S. airports. The PDC system utilizes aeronautical datalink and the Aircraft Communication and Reporting System (ACARS) to transmit departure clearances directly to the pilot. An objective of the PDC system is to provide an immediate reduction in voice congestion on the clearance delivery frequency. Participating airports report that this objective has been met. However, preliminary analysis of 42 Aviation Safety Reporting System (ASRS) reports has revealed problems in PDC procedures and formatting that have caused errors in the proper execution of clearances. It must be acknowledged that this technology, along with other advancements on the flight deck, is adding more responsibility to the crew and increasing the opportunity for error. The present study uses these findings as a basis for further coding and analysis of an additional 82 reports obtained from an ASRS database search. These reports indicate that clearances are often amended, or exceptions added, to accommodate local ATC facilities. However, the onboard ACARS is limited in its ability to emphasize or highlight these changes, which has resulted in altitude and heading deviations along with increases in ATC workload. Furthermore, few participating airports require any type of PDC receipt confirmation; in fact, 35% of all ASRS reports dealing with PDCs include failure to acquire the PDC at all. Consequently, this study examines pilots' suggestions contained in ASRS reports in order to develop recommendations to airlines and ATC facilities to help reduce the number of incidents that occur.

  3. Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site

    Energy Technology Data Exchange (ETDEWEB)

    Kaplan, Daniel I. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-07-22

    The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. The calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise the existing geochemical input values used for these calculations. This work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages. It is being conducted as part of the on-going maintenance program of the SRS PA programs, which periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., the recommended input values are biased to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). This document provides 1088 input parameters for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, the Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd value), the apparent solubility concentration (ks value), and the cementitious leachate impact factor.
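
    As one illustration of how a distribution coefficient feeds a transport calculation, the classic linear-sorption retardation factor R = 1 + (ρ_b/θ)·Kd relates Kd to how much a radionuclide lags the pore water. This is the textbook relation, not this data package's method, and the soil parameters below are generic assumptions, not SRS values.

    ```python
    def retardation_factor(kd_ml_per_g, bulk_density_g_cm3=1.6, porosity=0.4):
        """Linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd.

        kd_ml_per_g      : distribution coefficient, mL/g (= cm^3/g)
        bulk_density_g_cm3, porosity : illustrative generic soil properties.
        """
        return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_per_g

    print(retardation_factor(0.0))   # non-sorbing tracer moves with the water: 1.0
    print(retardation_factor(10.0))  # 1 + (1.6/0.4)*10 ≈ 41, i.e. ~41x slower
    ```

    A sorbing element with Kd = 10 mL/g in this generic soil would thus travel roughly 41 times more slowly than the groundwater, which is why defensible Kd selection dominates PA transport results.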

  4. Mechanical and thermomechanical calculations related to the storage of spent nuclear-fuel assemblies in granite

    International Nuclear Information System (INIS)

    Butkovich, T.R.

    1980-05-01

    A generic test of the geologic storage of spent-fuel assemblies is being made at Nevada Test Site. The spent-fuel assemblies were emplaced at a depth of 420 m (1370 ft) below the surface in a typical granite and will be retrieved at a later time. The early time, close-in thermal history of this type of repository is being simulated with spent-fuel and electrically heated canisters in a central drift, with auxiliary heaters in two parallel side drifts. Prior to emplacement of the spent-fuel canisters, preliminary calculations were made using a pair of existing finite-element codes, ADINA and ADINAT

  5. Standard practice for calculation of corrosion rates and related information from electrochemical measurements

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1989-01-01

    1.1 This practice provides guidance for converting the results of electrochemical measurements to rates of uniform corrosion. Calculation methods for converting corrosion current density values to either mass loss rates or average penetration rates are given for most engineering alloys. In addition, some guidelines for converting polarization resistance values to corrosion rates are provided. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard.
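
    The conversion from corrosion current density to an average penetration rate is commonly tabulated in the form CR = K1 · i_corr · EW / ρ. The sketch below uses that form with the widely quoted constant K1 = 3.27e-3 mm·g/(µA·cm·yr); treat the constant and example values as an illustrative sketch of the Faraday's-law conversion, not as the text of the standard.

    ```python
    def corrosion_rate_mm_per_yr(i_corr_uA_cm2, equiv_weight_g, density_g_cm3):
        """Average penetration rate from corrosion current density.

        i_corr_uA_cm2  : corrosion current density, uA/cm^2
        equiv_weight_g : alloy equivalent weight, g/equivalent
        density_g_cm3  : alloy density, g/cm^3
        """
        K1 = 3.27e-3  # mm * g / (uA * cm * yr), commonly tabulated constant
        return K1 * i_corr_uA_cm2 * equiv_weight_g / density_g_cm3

    # Carbon steel example: EW ~ 27.92 g/equiv, density ~ 7.87 g/cm^3
    print(round(corrosion_rate_mm_per_yr(1.0, 27.92, 7.87), 4))  # ~0.0116 mm/yr
    ```

    In other words, for steel, 1 µA/cm² of corrosion current corresponds to roughly 0.012 mm/yr of uniform penetration, which is the kind of conversion this practice standardizes.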

  6. The error-related negativity (ERN) is an electrophysiological marker of motor impulsiveness on the Barratt Impulsiveness Scale (BIS-11) during adolescence

    Directory of Open Access Journals (Sweden)

    Jasmine B. Taylor

    2018-04-01

    Objectives: Previous studies have postulated that the error-related negativity (ERN) may reflect individual differences in impulsivity; however, none have used a longitudinal framework or evaluated impulsivity as a multidimensional construct. The current study evaluated whether ERN amplitude, measured in childhood and adolescence, is predictive of impulsiveness during adolescence. Methods: Seventy-five children participated in this study, initially at ages 7–9 years and again at 12–18 years. The interval between testing sessions ranged from 5 to 9 years. The ERN was extracted in response to behavioural errors produced during a modified visual flanker task at both time points (i.e. childhood and adolescence). Participants also completed the Barratt Impulsiveness Scale (a measure that considers impulsiveness to comprise three core sub-traits) during adolescence. Results: At adolescence, the ERN amplitude was significantly larger than during childhood. Additionally, ERN amplitude during adolescence significantly predicted motor impulsiveness at that time point, after controlling for age, gender, and the number of trials included in the ERN. In contrast, ERN amplitude during childhood did not uniquely predict impulsiveness during adolescence. Conclusions: These findings provide preliminary evidence that ERN amplitude is an electrophysiological marker of self-reported motor impulsiveness (i.e. acting without thinking) during adolescence. Keywords: Error-related negativity, ERN, Impulsivity, BIS, Development, Adolescence

  7. Methodology for the calculation of source terms related to irradiated fuel accumulated away from nuclear power plants

    International Nuclear Information System (INIS)

    Lima Filho, R.M.; Oliveira, L.F.S. de

    1984-01-01

    A general method for calculating the time evolution of source terms related to irradiated fuel is presented. Some applications are discussed, which indicate that the method can provide important information for the engineering design and safety analysis of a temporary storage facility for irradiated fuel elements. (Author) [pt

  8. Accommodation: The role of the external muscles of the eye: A consideration of refractive errors in relation to extraocular malfunction.

    Science.gov (United States)

    Hargrave, B K

    2014-11-01

    Speculation as to optical malfunction has led to dissatisfaction with the theory that the lens is the sole agent in accommodation and to the suggestion that other parts of the eye are also conjointly involved. Around half a century ago, Robert Brooks Simpkins suggested that the mechanical features of the human eye were precisely such as to allow for a lengthening of the globe when the eye accommodated. Simpkins was not an optical man, but his theory is both imaginative and comprehensive and deserves consideration. It is submitted here that accommodation is in fact a twofold process and that, although it involves the lens, it is achieved primarily by means of a give-and-take interplay between adducting and abducting external muscles, whereby an elongation of the eyeball is brought about by a stretching of the delicate elastic fibres immediately behind the cornea. The three muscles responsible for convergence (superior, internal and inferior recti) all pull from in front backwards, while of the three abductors (external rectus and the two obliques) the obliques pull from behind forwards, allowing for an easy elongation as the eye turns inwards and a return to its original length as the abducting muscles regain their former tension, returning the eye to distance vision. In refractive errors, the altered length of the eyeball disturbs the harmonious give-and-take relationship between adductors and abductors. Such stresses are likely to be perpetuated and the error exacerbated. Speculation is not directed towards a search for a possible cause of the muscular imbalance, since none is suspected. Muscles not used rapidly lose tone, as evidenced after removal of a limb from plaster. Early attention to the need for restorative exercise is essential, and results are usually impressive. If flexibility of the external muscles of the eyes is essential for continuing good sight, presbyopia can be avoided and with it the supposed necessity of glasses in middle life.

  9. Calculations of the relative effectiveness of alanine for neutrons with energies up to 17.1 MeV

    International Nuclear Information System (INIS)

    Gerstenberg, H.M.; Coyne, J.J.

    1990-01-01

    The relative effectiveness (RE) of alanine has been calculated for neutrons using the RE of alanine for charged particles. The neutrons interact with one or more of the elements (hydrogen, carbon, nitrogen and oxygen) that compose the alanine. These interactions produce spectra of secondary charged particles consisting of ions of H, D, He, Be, B, C, N and O. From a combination of the calculated secondary charged particle spectra generated by the slowing-down neutrons and the calculated RE of the ions produced, an RE for the neutrons can be obtained. In addition, lineal energy spectra were determined for neutrons with energies up to 17.1 MeV interacting with alanine. An analytical code was used to calculate these spectra for a 1 μm diameter alanine cell surrounded by an alanine medium. For comparison, similar calculations were made for muscle tissue. Finally, the calculated differential RE was folded with dose distributions to obtain RE-weighted distributions for alanine. (author)
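
    Folding a differential RE with a dose distribution, as in the last step above, amounts to a dose-weighted average: RE_weighted = Σ RE_i·D_i / Σ D_i. A minimal sketch with made-up numbers (the RE values and dose fractions below are illustrative, not the paper's alanine data):

    ```python
    def dose_weighted_re(re_values, dose_fractions):
        """Dose-weighted mean relative effectiveness:
        sum(RE_i * D_i) / sum(D_i) over bins of the dose distribution."""
        total_dose = sum(dose_fractions)
        return sum(r * d for r, d in zip(re_values, dose_fractions)) / total_dose

    # Illustrative three-bin example: RE falls off in high-LET bins.
    print(dose_weighted_re([1.0, 0.8, 0.6], [0.2, 0.5, 0.3]))  # ≈ 0.78
    ```

    The same weighting works whether the bins are secondary-ion species or lineal-energy intervals; only the provenance of the RE_i and D_i values changes.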

  10. Gas-Induced Water-hammer Loads Calculation for Safety Related Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seungchan; Yoon, Dukjoo [Korea Hydro and Nuclear Power Co., LTd, Daejeon (Korea, Republic of); Lee, Dooyong [Seoul National Univ., Seoul (Korea, Republic of)

    2013-05-15

    Of particular interest, gas accumulation can result in a system pressure transient in pump discharge piping following a pump start. This can evolve into a gas-water water-hammer event, and the accompanying force imbalances on the piping segments can be sufficient to challenge the piping supports and restraints. This paper describes a method for calculating the water-hammer loads, determining the maximum loading that would occur in the piping system following the safety injection signal, and evaluating the piping system's integrity. For given gas void volumes in the discharge piping, the calculation yields a maximum load of 18,894.2 psi, which is smaller than the allowable criterion. The maximum peak axial force imbalance acting on the supports is 1,720 lbf.
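The force-imbalance side of such a calculation can be sketched for a single pipe run. This is a minimal sketch assuming a rigid straight segment between two elbows, where the unbalanced axial force follows from the instantaneous pressure difference across the run; the diameter and pressures below are hypothetical, not the paper's 1,720 lbf case.

```python
# Illustrative sketch (not the paper's method): the unbalanced axial force on a
# straight pipe run between two elbows during a pressure transient is
# F = (P_upstream - P_downstream) * A. All numbers are hypothetical.

import math

def segment_force(p_up_psi, p_down_psi, inner_diameter_in):
    """Axial force imbalance (lbf) on a pipe segment from a pressure wave."""
    area_in2 = math.pi * (inner_diameter_in / 2.0) ** 2
    return (p_up_psi - p_down_psi) * area_in2

# Hypothetical 6-inch-ID line seeing a 60 psi transient pressure difference.
f = segment_force(160.0, 100.0, 6.0)
print(round(f, 1))  # force in lbf
```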

  11. Gas-Induced Water-hammer Loads Calculation for Safety Related Systems

    International Nuclear Information System (INIS)

    Lee, Seungchan; Yoon, Dukjoo; Lee, Dooyong

    2013-01-01

    Of particular interest, gas accumulation can result in a system pressure transient in pump discharge piping following a pump start. This can evolve into a gas-water water-hammer event, and the accompanying force imbalances on the piping segments can be sufficient to challenge the piping supports and restraints. This paper describes a method for calculating the water-hammer loads, determining the maximum loading that would occur in the piping system following the safety injection signal, and evaluating the piping system's integrity. For given gas void volumes in the discharge piping, the calculation yields a maximum load of 18,894.2 psi, which is smaller than the allowable criterion. The maximum peak axial force imbalance acting on the supports is 1,720 lbf.

  12. Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target.

    Science.gov (United States)

    Muralikrishnan, Bala; Rachakonda, Prem; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cheok, Geraldine; Cournoyer, Luc

    2017-12-01

    Terrestrial laser scanners (TLS) are a class of 3D imaging systems that produce a 3D point cloud by measuring the range and two angles to a point. The fundamental measurement of a TLS is range. Relative range error is one component of the overall range error of TLS and its estimation is therefore an important aspect in establishing metrological traceability of measurements performed using these systems. Target geometry is an important aspect to consider when realizing the relative range tests. The recently published ASTM E2938-15 mandates the use of a plate target for the relative range tests. While a plate target may reasonably be expected to produce distortion free data even at far distances, the target itself needs careful alignment at each of the relative range test positions. In this paper, we discuss relative range experiments performed using a plate target and then address the advantages and limitations of using a sphere target. We then present a novel dual-sphere-plate target that draws from the advantages of the sphere and the plate without the associated limitations. The spheres in the dual-sphere-plate target are used simply as fiducials to identify a point on the surface of the plate that is common to both the scanner and the reference instrument, thus overcoming the need to carefully align the target.
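The relative range comparison described above can be sketched as follows. The workflow and the numbers are assumed for illustration only, not the authors' procedure or data: scanner-measured ranges to the common fiducial point are differenced against the first test position and compared with reference-instrument differences.

```python
# Illustrative sketch (assumed workflow, not the paper's code): relative range
# error at each test position is the scanner's range difference minus the
# reference instrument's range difference, both taken from the first position.
# The dual-sphere-plate target gives both instruments a common point on the
# plate to measure to.

def relative_range_errors(scanner_ranges, reference_ranges):
    """Errors of scanner range differences relative to the first position."""
    s0, r0 = scanner_ranges[0], reference_ranges[0]
    return [(s - s0) - (r - r0)
            for s, r in zip(scanner_ranges[1:], reference_ranges[1:])]

# Hypothetical ranges (metres) at four test positions.
scanner = [5.0003, 10.0011, 20.0018, 40.0042]
reference = [5.0000, 10.0000, 20.0000, 40.0000]
errs = [round(e * 1000, 3) for e in relative_range_errors(scanner, reference)]
print(errs)  # millimetres
```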

  13. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
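A toy version of the idea can be sketched on a small least-squares problem over "rays". This is an illustrative sketch, not the patented implementation: it uses a gradient direction with an exact line search, with both the error and the gradient evaluated only on a chosen subset of rays; the system and the subset are invented.

```python
# Illustrative sketch (assumed details, not the patent's implementation): in a
# ray-based least-squares problem, the error and its gradient are evaluated on
# a subset of rays to cheapen each gradient-based minimization iteration.

def approx_error_and_grad(rays, b, x, subset):
    """Squared error and gradient using only the chosen ray indices."""
    err = 0.0
    grad = [0.0] * len(x)
    for i in subset:
        r = sum(a * xi for a, xi in zip(rays[i], x)) - b[i]
        err += r * r
        for j, a in enumerate(rays[i]):
            grad[j] += 2.0 * r * a
    return err, grad

def minimize(rays, b, x, subset, iters=50):
    """Descent with an exact line search on the subset's quadratic error."""
    for _ in range(iters):
        _, g = approx_error_and_grad(rays, b, x, subset)
        # Exact step along -g for the subset quadratic: t = g.g / (2 sum((a_i.g)^2))
        denom = 0.0
        for i in subset:
            ad = sum(a * gi for a, gi in zip(rays[i], g))
            denom += ad * ad
        if denom == 0.0:
            break
        t = sum(gi * gi for gi in g) / (2.0 * denom)
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical 4-ray, 2-unknown system; use only rays 0, 1, 3 for the error.
rays = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
b = [2.0, 3.0, 5.0, -1.0]
x = minimize(rays, b, [0.0, 0.0], subset=[0, 1, 3])
print([round(v, 3) for v in x])
```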

  14. On the acceptor-related photoluminescence spectra of GaAs quantum-wire microcrystals: A model calculation

    International Nuclear Information System (INIS)

    Oliveira, L.E.; Porras Montenegro, N.; Latge, A.

    1992-07-01

    The acceptor-related photoluminescence spectrum of a GaAs quantum-wire microcrystal is theoretically investigated via a model calculation within the effective-mass approximation, with the acceptor envelope wave functions and binding energies calculated through a variational procedure. Typical theoretical photoluminescence spectra show two peaks associated with transitions from the n = 1 conduction-subband electron gas to acceptors at the on-center and on-edge positions in the wire, in good agreement with the recent experimental results by Hirum et al. (Appl. Phys. Lett. 59, 431 (1991)). (author). 14 refs, 3 figs

  15. Work in progress Tim Radford on research that aims to find a tiny error in Einstein's theory of special relativity

    CERN Multimedia

    Radford, T

    2004-01-01

    "Ben Varcoe wants to find a relatively small mistake in Einstein's theory of special relativity. To do this, he will slow light down from 300,000 km per second to 10 metres per second - about the speed of Darren Campbell - and see how it behaves" (1 page)

  16. Oxide nanostructures on a Nb surface and related systems: experiments and ab initio calculations

    International Nuclear Information System (INIS)

    Kuznetsov, Mikhail V; Razinkin, A S; Ivanovskii, Alexander L

    2011-01-01

    This review discusses the state of the art in two related research areas: the surfaces of niobium and of its related group IV-VI transition metals, and surface (primarily oxide) nanostructures that form on niobium (and group IV-VI d-metals) due to gas adsorption or impurity diffusion from the bulk. Experimental (X-ray photoelectron spectroscopy, photoelectron diffraction, scanning tunneling microscopy) and theoretical (ab initio simulation) results on d-metal surfaces are summarized and reviewed. (reviews of topical problems)

  17. Total deposition of inhaled particles related to age: comparison with age-dependent model calculations

    International Nuclear Information System (INIS)

    Becquemin, M.H.; Bouchikhi, A.; Yu, C.P.; Roy, M.

    1991-01-01

    To compare experimental data with age-dependent model calculations, total airway deposition of polystyrene aerosols (1, 2.05 and 2.8 μm aerodynamic diameter) was measured in ten adults, twenty children aged 12 to 15 years, ten children aged 8 to 12, and eleven under 8 years old. Ventilation was controlled, and breathing patterns were appropriate for each age, either at rest or at light exercise. Individually, deposition percentages increased with particle size and also from rest to exercise, except in children under 12 years, in whom they decreased from 20-21.5 to 14-14.5 for 1 μm particles and from 36.8-36.9 to 32.2-33.1 for 2.05 μm particles. Comparisons with the age-dependent model showed that, at rest, the observed data concerning children agreed with those predicted and were close to the adults' values, when the latter were higher than predicted. At exercise, child data were lower than predicted and lower than adult experimental data, when the latter agreed fairly well with the model. (author)

  18. Quantum mechanical calculations related to ionization and charge transfer in DNA

    International Nuclear Information System (INIS)

    Cauët, E; Liévin, J; Valiev, M; Weare, J H

    2012-01-01

    Ionization and charge migration in DNA play crucial roles in mechanisms of DNA damage caused by ionizing radiation, oxidizing agents and photo-irradiation. Therefore, an evaluation of the ionization properties of the DNA bases is central to the full interpretation and understanding of the elementary reactive processes that occur at the molecular level during the initial exposure and afterwards. Ab initio quantum mechanical (QM) methods have been successful in providing highly accurate evaluations of key parameters, such as ionization energies (IE) of DNA bases. Hence, in this study, we performed high-level QM calculations to characterize the molecular energy levels and potential energy surfaces, which shed light on ionization and charge migration between DNA bases. In particular, we examined the IEs of guanine, the most easily oxidized base, isolated and embedded in base clusters, and investigated the mechanism of charge migration over two and three stacked guanines. The IE of guanine in the human telomere sequence has also been evaluated. We report a simple molecular orbital analysis to explain how modifications in the base sequence are expected to change the efficiency of the sequence as a hole trap. Finally, the application of a hybrid approach combining quantum mechanics with molecular mechanics brings an interesting discussion as to how the native aqueous DNA environment affects the IE threshold of nucleobases.

  19. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

    Full Text Available The present study analyzed different types of errors in the EFL learners’ IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees’ writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, the pedagogical implications highlight analyzing EFL learners’ writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners’ linguistic strengths and weaknesses.
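The frequency tally described can be sketched as follows; the error tags and essays below are hypothetical, not the study's corpus or coding scheme.

```python
# Illustrative sketch (hypothetical tags, not the study's data): tallying
# error categories across coded essays to find the most frequent types.

from collections import Counter

# Each essay is represented as a list of error tags from a coding scheme.
coded_essays = [
    ["word_choice", "verb_form", "article"],
    ["word_choice", "preposition", "verb_form"],
    ["verb_form", "word_choice"],
]
freq = Counter(tag for essay in coded_essays for tag in essay)
print(freq.most_common(2))
```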

  20. Relation between Euclidean and real time calculations of Green functions at finite temperature

    International Nuclear Information System (INIS)

    Bochkarev, A.

    1993-01-01

    We find a relation between the semiclassical approximation of the temperature (Matsubara) two-point correlator and the corresponding classical Green function in real time at finite temperature. The anharmonic oscillator at finite temperature is used to illustrate our statement, which is, however, of rather general origin.

  1. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified into 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  2. Human errors and work performance in a nuclear power plant control room: associations with work-related factors and behavioral coping

    International Nuclear Information System (INIS)

    Kecklund, Lena Jacobsson; Svenson, Ola

    1997-01-01

    The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, a decreased aspiration level for work performance quality and increased delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. The decreased level of satisfaction with the work result during outage is a fact despite the lowering of the aspiration level for work performance quality during outage. In order to decrease the relative frequencies of minor errors, special attention should be given to reducing work demands, such as time pressure and memory demands. In order to decrease misinterpretation errors, special attention should be given to organizational factors such as planning and shift turnovers, in addition to training. In summary, the outage period seems to be a significantly more vulnerable window in the management of a nuclear power plant than the normal power production state. Thus, an increased focus on the outage period and human factors issues, addressing the synergistic effects of work demands, organizational factors and coping resources, is an important area for improvement of

  3. Object–relational architecture of information support of the multi-circuit calculation multilayer semiconductor nanostructures

    Directory of Open Access Journals (Sweden)

    Karina K. Abgaryan

    2015-06-01

    Full Text Available The article examines the object–relational approach to the creation of a database designed to provide informational support to the multiscale computational scheme of multilayer semiconductor nanostructures. The MSNS computational scheme developed earlier by our group uses a hierarchic representation of computational data obtained by various computational modules. Each layer of MSNS is treated separately. In contrast to well-known materials databases, which serve for storing and retrieving information on existing structures and their properties, the database described in this paper is the central unit of the MSNS computational scheme. The database provides data interchange between various computational units. In this paper we describe the modern approach to material database design. More specifically, a data storage relational model which applies to solving resource-intensive and different-scale problems is proposed. An object–relational scheduler architecture is used in our work. It provides for high-speed data exchange between various computational units of the MSNS computational scheme. We introduce a simple and user-friendly interface allowing criteria-based data retrieving as well as creation of input files for computational modules. These approaches can be applied in various branches of science, including the aviation and space industry, in particular in control systems for engineering (materials science) data.

  4. Eye-movement patterns during nonsymbolic and symbolic numerical magnitude comparison and their relation to math calculation skills.

    Science.gov (United States)

    Price, Gavin R; Wilkey, Eric D; Yeo, Darren J

    2017-05-01

    A growing body of research suggests that the processing of nonsymbolic (e.g. sets of dots) and symbolic (e.g. Arabic digits) numerical magnitudes serves as a foundation for the development of math competence. Performance on magnitude comparison tasks is thought to reflect the precision of a shared cognitive representation, as evidenced by the presence of a numerical ratio effect for both formats. However, little is known regarding how visuo-perceptual processes are related to the numerical ratio effect, whether they are shared across numerical formats, and whether they relate to math competence independently of performance outcomes. The present study investigates these questions in a sample of typically developing adults. Our results reveal a pattern of associations between eye-movement measures, but not their ratio effects, across formats. This suggests that ratio-specific visuo-perceptual processing during magnitude processing is different across nonsymbolic and symbolic formats. Furthermore, eye movements are related to math performance only during symbolic comparison, supporting a growing body of literature suggesting symbolic number processing is more strongly related to math outcomes than nonsymbolic magnitude processing. Finally, eye-movement patterns, specifically fixation dwell time, continue to be negatively related to math performance after controlling for task performance (i.e. error rate and reaction time) and domain general cognitive abilities (IQ), suggesting that fluent visual processing of Arabic digits plays a unique and important role in linking symbolic number processing to formal math abilities. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
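The size of the rounding contribution can be illustrated numerically. This sketch is not the MERDA program; it only demonstrates the standard result that rounding to a grid of width q contributes a variance of approximately q²/12 when the underlying values are smooth on that scale, by sweeping true values across one rounding cell.

```python
# Illustrative sketch (not the MERDA program): when readings are rounded to a
# grid of width q, the rounding error is roughly uniform on [-q/2, q/2],
# adding variance q**2 / 12. Exact enumeration over one rounding cell
# demonstrates this.

def rounding_error_moments(q, n=100000):
    """Mean and variance of the rounding error, sweeping one rounding cell."""
    errs = []
    for k in range(n):
        true = (k + 0.5) / n * q          # sweep true values across [0, q)
        recorded = round(true / q) * q    # reading rounded to the grid
        errs.append(recorded - true)
    mean = sum(errs) / n
    var = sum((e - mean) ** 2 for e in errs) / n
    return mean, var

q = 0.1  # hypothetical scale graduation (grams)
mean, var = rounding_error_moments(q)
print(round(mean, 6), round(var, 8))  # variance should be close to q**2/12
```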

  6. Selective attention and error processing in an illusory conjunction task - An event-related brain potential study

    NARCIS (Netherlands)

    Wijers, AA; Boksem, MAS

    2005-01-01

    We recorded event-related potentials in an illusory conjunction task, in which subjects were cued on each trial to search for a particular colored letter in a subsequently presented test array, consisting of three different letters in three different colors. In a proportion of trials the target

  7. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, supported by clear tables that are easy to understand.

  8. Assessing the reliability of calculated catalytic ammonia synthesis rates

    DEFF Research Database (Denmark)

    Medford, Andrew James; Wellendorff, Jess; Vojvodic, Aleksandra

    2014-01-01

    We introduce a general method for estimating the uncertainty in calculated materials properties based on density functional theory calculations. We illustrate the approach for a calculation of the catalytic rate of ammonia synthesis over a range of transition-metal catalysts. The correlation between errors in density functional theory calculations is shown to play an important role in reducing the predicted error on calculated rates. Uncertainties depend strongly on reaction conditions and catalyst material, and the relative rates between different catalysts are considerably better described...
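The role of error correlation can be illustrated with elementary statistics. This is a minimal sketch, not the ensemble machinery of the paper: it only shows that when the errors on two calculated energies are correlated, the error on their difference (which governs relative rates) shrinks.

```python
# Illustrative sketch (standard error propagation, not the paper's method):
# if two DFT-calculated energies carry errors s1, s2 with correlation rho,
# the error on their difference is
#   var(E1 - E2) = s1**2 + s2**2 - 2*rho*s1*s2,
# so strong positive correlation makes relative quantities better described.

def error_of_difference(s1, s2, rho):
    """Standard error of E1 - E2 given per-value errors and their correlation."""
    return (s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2) ** 0.5

# Hypothetical 0.2 eV error on each energy.
uncorrelated = error_of_difference(0.2, 0.2, 0.0)
correlated = error_of_difference(0.2, 0.2, 0.9)
print(round(uncorrelated, 4), round(correlated, 4))
```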

  9. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  10. On the relation between orbital-localization and self-interaction errors in the density functional theory treatment of organic semiconductors.

    Science.gov (United States)

    Körzdörfer, T

    2011-03-07

    It is commonly argued that the self-interaction error (SIE) inherent in semilocal density functionals is related to the degree of the electronic localization. Yet at the same time there exists a latent ambiguity in the definitions of the terms "localization" and "self-interaction," which ultimately prevents a clear and readily accessible quantification of this relationship. This problem is particularly pressing for organic semiconductor molecules, in which delocalized molecular orbitals typically alternate with localized ones, thus leading to major distortions in the eigenvalue spectra. This paper discusses the relation between localization and SIEs in organic semiconductors in detail. Its findings provide further insights into the SIE in the orbital energies and yield a new perspective on the failure of self-interaction corrections that identify delocalized orbital densities with electrons. © 2011 American Institute of Physics.

  11. Sample problem calculations related to two-phase flow transients in a PWR relief-piping network

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1981-03-01

    Two sample problems related to the fast transients of water/steam flow in the relief line of a PWR pressurizer were calculated with a network-flow analysis computer code, STAC (System Transient-Flow Analysis Code). The sample problems were supplied by EPRI and are designed to test computer codes or computational methods to determine whether they have the basic capability to handle the important flow features present in a typical relief line of a PWR pressurizer. It was found necessary to implement a number of additional boundary conditions into the STAC code in order to calculate the sample problems. This includes the dynamics of the fluid interface, which is treated as a moving boundary. This report describes the methodologies adopted for handling the newly implemented boundary conditions and the computational results of the two sample problems. In order to demonstrate the accuracies achieved in the STAC code results, analytical solutions are also obtained and used as a basis for comparison.

  12. Calculation of sample problems related to two-phase flow blowdown transients in pressure relief piping of a PWR pressurizer

    International Nuclear Information System (INIS)

    Shin, Y.W.; Wiedermann, A.H.

    1984-02-01

    A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of a flow in a piping network can be accurately formulated. The method for the junction and boundary condition formulation, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here in calculating sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions also are obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by the results of our scheme suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients.
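The two-step Lax-Wendroff scheme mentioned above can be sketched on a scalar model problem. This is an illustrative stand-in, assuming linear advection on a periodic grid rather than the report's two-phase pipe-network equations; a sine profile advected once around the domain should return nearly unchanged.

```python
# Illustrative sketch (scalar model problem, not the report's two-phase code):
# the two-step (Richtmyer) Lax-Wendroff scheme applied to linear advection
# u_t + a*u_x = 0 on a periodic grid.

import math

def lax_wendroff_step(u, c):
    """One Richtmyer two-step update; c = a*dt/dx is the Courant number."""
    n = len(u)
    # Half-step values at cell faces j+1/2.
    half = [0.5 * (u[j] + u[(j + 1) % n]) - 0.5 * c * (u[(j + 1) % n] - u[j])
            for j in range(n)]
    # Full step using the face values (half[j-1] wraps periodically).
    return [u[j] - c * (half[j] - half[j - 1]) for j in range(n)]

# Advect a sine wave once around the periodic domain.
n, c = 100, 0.5
u = [math.sin(2.0 * math.pi * j / n) for j in range(n)]
exact = u[:]
for _ in range(int(n / c)):  # 200 steps moves the profile one full period
    u = lax_wendroff_step(u, c)
max_err = max(abs(a - b) for a, b in zip(u, exact))
print(round(max_err, 4))
```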

  13. Evidence that UV-inducible error-prone repair is absent in Haemophilus influenzae Rd, with a discussion of the relation to error-prone repair of alkylating-agent damage

    International Nuclear Information System (INIS)

    Kimball, R.F.; Boling, M.E.; Perdue, S.W.

    1977-01-01

    Haemophilus influenzae Rd and its derivatives are mutated either not at all or to only a very small extent by ultraviolet radiation, X-rays, methyl methanesulfonate, and nitrogen mustard, though they are readily mutated by such agents as N-methyl-N'-nitro-N-nitrosoguanidine, ethyl methanesulfonate, and nitrosocarbaryl (NC). In these respects H. influenzae Rd resembles the lexA mutants of Escherichia coli that lack the SOS or reclex UV-inducible error-prone repair system. This similarity is further brought out by the observation that chloramphenicol has little or no effect on post-replication repair after UV irradiation. In E. coli, chloramphenicol has been reported to considerably inhibit post-replication repair in the wild type but not in the lexA mutant. Earlier work has suggested that most or all the mutations induced in H. influenzae by NC result from error-prone repair. Combined treatment with NC and either X-rays or UV shows that the NC error-prone repair system does not produce mutations from the lesions induced by these radiations even while it is producing them from its own lesions. It is concluded that the NC error-prone repair system or systems and the reclex error-prone system are different

  14. Frequency of Home Numeracy Activities Is Differentially Related to Basic Number Processing and Calculation Skills in Kindergartners

    Science.gov (United States)

    Mutaf Yıldız, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert

    2018-01-01

    Home numeracy has been shown to play an important role in children’s mathematical performance. However, findings are inconsistent as to which home numeracy activities are related to which mathematical skills. The present study disentangled between various mathematical abilities that were previously masked by the use of composite scores of mathematical achievement. Our aim was to shed light on the specific associations between home numeracy and various mathematical abilities. The relationships between kindergartners’ home numeracy activities, their basic number processing and calculation skills were investigated. Participants were 128 kindergartners (Mage = 5.43 years, SD = 0.29, range: 4.88–6.02 years) and their parents. The children completed non-symbolic and symbolic comparison tasks, non-symbolic and symbolic number line estimation tasks, mapping tasks (enumeration and connecting), and two calculation tasks. Their parents completed a home numeracy questionnaire. Results indicated small but significant associations between formal home numeracy activities that involved more explicit teaching efforts (i.e., identifying numerals, counting) and children’s enumeration skills. There was no correlation between formal home numeracy activities and non-symbolic number processing. Informal home numeracy activities that involved more implicit teaching attempts, such as “playing games” and “using numbers in daily life,” were (weakly) correlated with calculation and symbolic number line estimation, respectively. The present findings suggest that disentangling between various basic number processing and calculation skills in children might unravel specific relations with both formal and informal home numeracy activities. This might explain earlier reported contradictory findings on the association between home numeracy and mathematical abilities. PMID:29623055

  15. Frequency of Home Numeracy Activities Is Differentially Related to Basic Number Processing and Calculation Skills in Kindergartners.

    Science.gov (United States)

    Mutaf Yıldız, Belde; Sasanguie, Delphine; De Smedt, Bert; Reynvoet, Bert

    2018-01-01

    Home numeracy has been shown to play an important role in children's mathematical performance. However, findings are inconsistent as to which home numeracy activities are related to which mathematical skills. The present study disentangled between various mathematical abilities that were previously masked by the use of composite scores of mathematical achievement. Our aim was to shed light on the specific associations between home numeracy and various mathematical abilities. The relationships between kindergartners' home numeracy activities, their basic number processing and calculation skills were investigated. Participants were 128 kindergartners ( M age = 5.43 years, SD = 0.29, range: 4.88-6.02 years) and their parents. The children completed non-symbolic and symbolic comparison tasks, non-symbolic and symbolic number line estimation tasks, mapping tasks (enumeration and connecting), and two calculation tasks. Their parents completed a home numeracy questionnaire. Results indicated small but significant associations between formal home numeracy activities that involved more explicit teaching efforts (i.e., identifying numerals, counting) and children's enumeration skills. There was no correlation between formal home numeracy activities and non-symbolic number processing. Informal home numeracy activities that involved more implicit teaching attempts , such as "playing games" and "using numbers in daily life," were (weakly) correlated with calculation and symbolic number line estimation, respectively. The present findings suggest that disentangling between various basic number processing and calculation skills in children might unravel specific relations with both formal and informal home numeracy activities. This might explain earlier reported contradictory findings on the association between home numeracy and mathematical abilities.

  16. The statistical error of Green's function Monte Carlo

    International Nuclear Information System (INIS)

    Ceperley, D.M.

    1986-01-01

    The statistical error in the ground state energy as calculated by Green's Function Monte Carlo (GFMC) is analyzed and a simple approximate formula is derived which relates the error to the number of steps of the random walk, the variational energy of the trial function, and the time step of the random walk. Using this formula it is argued that as the thermodynamic limit is approached with N identical molecules, the computer time needed to reach a given error per molecule increases as N^b where 0.5 < b < 1.5, and as the nuclear charge Z of a system is increased the computer time necessary to reach a given error grows as Z^5.5. Thus GFMC simulations will be most useful for calculating the properties of low-Z elements. The implications for choosing the optimal trial function from a series of trial functions are also discussed.
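    The record above concerns how the statistical error of a correlated random walk behaves. The sketch below is a generic blocking analysis for correlated samples, not Ceperley's specific formula; an AR(1) series stands in for GFMC energy estimates along a random walk:

```python
import numpy as np

def blocked_error(samples):
    """Estimate the statistical error of the mean of correlated samples
    by repeatedly averaging neighbouring pairs ("blocking") and taking
    the largest naive error seen across blocking levels."""
    x = np.asarray(samples, dtype=float)
    errors = []
    while len(x) >= 2:
        errors.append(x.std(ddof=1) / np.sqrt(len(x)))
        if len(x) % 2:            # drop a leftover sample before pairing
            x = x[:-1]
        x = 0.5 * (x[0::2] + x[1::2])
    return max(errors)

# Correlated "energy" samples from an AR(1) process as a stand-in for a
# random walk; the naive error ignores the correlations and is too small.
rng = np.random.default_rng(0)
n, rho = 4096, 0.9
e = np.empty(n)
e[0] = rng.normal()
for i in range(1, n):
    e[i] = rho * e[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

naive_error = e.std(ddof=1) / np.sqrt(n)
true_error = blocked_error(e)
```

    For strongly correlated walks the blocked estimate exceeds the naive one by roughly the square root of the integrated autocorrelation time.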

  17. A Statistical Model and Computer program for Preliminary Calculations Related to the Scaling of Sensor Arrays; TOPICAL

    International Nuclear Information System (INIS)

    Max Morris

    2001-01-01

    Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; however, in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
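    The effect described, where a larger array does not buy a proportional gain because part of the measurement error is common to all sensors, can be illustrated with a toy Monte Carlo model. The class separation, noise levels and nearest-mean classifier below are illustrative assumptions, not the report's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def misclass_rate(n_sensors, n_train=200, n_test=4000,
                  signal=0.5, sigma_common=0.5, sigma_indep=1.0):
    """Monte Carlo estimate of the two-class misclassification rate for a
    sensor array whose measurement error has a component common to all
    sensors plus an independent per-sensor component."""
    def draw(n, cls):
        common = rng.normal(0.0, sigma_common, (n, 1))        # shared error
        indep = rng.normal(0.0, sigma_indep, (n, n_sensors))  # per-sensor
        return cls * signal + common + indep
    # calibrate a nearest-mean classifier from training samples
    mu0 = draw(n_train, 0).mean(axis=0)
    mu1 = draw(n_train, 1).mean(axis=0)
    w = mu1 - mu0
    thresh = w @ (mu0 + mu1) / 2
    errors = 0
    for cls in (0, 1):
        scores = draw(n_test, cls) @ w
        errors += np.sum((scores > thresh) != bool(cls))
    return errors / (2 * n_test)

# 16x more sensors gives only a modest gain: the error component shared
# by all sensors is not averaged away by adding sensors.
rate_small = misclass_rate(4)
rate_large = misclass_rate(64)
```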

  18. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
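    For a purely linear response to the systematic parameters, both procedures recover the true systematic variance. This toy sketch, with made-up response coefficients and no MC statistical noise, illustrates the two estimators:

```python
import numpy as np

rng = np.random.default_rng(2)

n_sys = 10
a = rng.uniform(0.5, 1.5, n_sys)   # assumed linear response to each systematic
true_var = float(np.sum(a**2))     # true total systematic variance

def observable(shifts):
    """Observed result of one MC run with the systematic parameters
    shifted by `shifts` (in units of standard deviations)."""
    return float(a @ shifts)

# unisim: one run per parameter, each varied by +1 sigma in turn
unisim_var = sum(observable(np.eye(n_sys)[i]) ** 2 for i in range(n_sys))

# multisim: many runs, all parameters drawn from their distributions
n_runs = 5000
multisim_var = float(np.var([observable(rng.normal(size=n_sys))
                             for _ in range(n_runs)], ddof=1))
```

    In the linear, noise-free case the unisim sum is exact, while the multisim sample variance carries a relative statistical spread of roughly sqrt(2/n_runs); the trade-offs discussed in the abstract appear once MC statistical noise per run is added.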

  19. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.

  20. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.

  1. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By an inversion method, the contour curve of the wheel envelope is deduced, keeping the distance from the center of the eccentric circle constant. Simulation software for eccentric shaft grinding was developed, proving the correctness of the model; the influence of the X-axis feed error, the C-axis feed error, and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides a basis for contour error compensation.

  2. Calculation of relative free energies for ligand-protein binding, solvation, and conformational transitions using the GROMOS software.

    Science.gov (United States)

    Riniker, Sereina; Christ, Clara D; Hansen, Halvor S; Hünenberger, Philippe H; Oostenbrink, Chris; Steiner, Denise; van Gunsteren, Wilfred F

    2011-11-24

    The calculation of the relative free energies of ligand-protein binding, of solvation for different compounds, and of different conformational states of a polypeptide is of considerable interest in the design or selection of potential enzyme inhibitors. Since such processes in aqueous solution generally comprise energetic and entropic contributions from many molecular configurations, adequate sampling of the relevant parts of configurational space is required and can be achieved through molecular dynamics simulations. Various techniques to obtain converged ensemble averages and their implementation in the GROMOS software for biomolecular simulation are discussed, and examples of their application to biomolecules in aqueous solution are given. © 2011 American Chemical Society
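    As a minimal illustration of one such free-energy technique, the following sketches the textbook Zwanzig perturbation formula, dF = -kT ln<exp(-dU/kT)>_A, checked against the closed-form result for Gaussian-distributed energy differences. This is a generic estimator, not GROMOS code:

```python
import math
import random

def zwanzig_free_energy(delta_u, kT=1.0):
    """Free-energy perturbation estimate dF = -kT ln<exp(-dU/kT)>_A
    from energy differences dU = U_B - U_A sampled in state A."""
    avg = sum(math.exp(-du / kT) for du in delta_u) / len(delta_u)
    return -kT * math.log(avg)

# Toy check: for dU ~ N(mu, sigma^2) sampled in state A, the exact
# answer (in units of kT) is dF = mu - sigma^2 / (2 kT).
rng = random.Random(8)
mu, sigma = 1.0, 0.5
samples = [rng.gauss(mu, sigma) for _ in range(200_000)]
dF = zwanzig_free_energy(samples)
exact = mu - sigma**2 / 2
```

    In practice the exponential average converges poorly when the two states overlap little, which is exactly why the staged and enhanced-sampling techniques discussed in the record are needed.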

  3. TopoToolbox: using sensor topography to calculate psychologically meaningful measures from event-related EEG/MEG.

    Science.gov (United States)

    Tian, Xing; Poeppel, David; Huber, David E

    2011-01-01

    The open-source toolbox "TopoToolbox" is a suite of functions that use sensor topography to calculate psychologically meaningful measures (similarity, magnitude, and timing) from multisensor event-related EEG and MEG data. Using a GUI and data visualization, TopoToolbox can be used to calculate and test the topographic similarity between different conditions (Tian and Huber, 2008). This topographic similarity indicates whether different conditions involve a different distribution of underlying neural sources. Furthermore, this similarity calculation can be applied at different time points to discover when a response pattern emerges (Tian and Poeppel, 2010). Because the topographic patterns are obtained separately for each individual, these patterns are used to produce reliable measures of response magnitude that can be compared across individuals using conventional statistics (Davelaar et al. Submitted and Huber et al., 2008). TopoToolbox can be freely downloaded. It runs under MATLAB (The MathWorks, Inc.) and supports user-defined data structure as well as standard EEG/MEG data import using EEGLAB (Delorme and Makeig, 2004).
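    TopoToolbox itself should be consulted for its exact measures; a common way to quantify topographic similarity between two conditions is the normalized dot product (cosine of the angle) across sensors, which is invariant to overall response magnitude:

```python
import numpy as np

def topo_similarity(pattern_a, pattern_b):
    """Cosine of the angle between two sensor topographies: 1 means the
    same spatial pattern up to overall magnitude (consistent with the
    same distribution of underlying sources); values near 0 suggest
    different source configurations."""
    a, b = np.ravel(pattern_a), np.ravel(pattern_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 32-sensor patterns: rescaling a pattern keeps similarity at 1,
# while an unrelated random pattern scores near 0.
rng = np.random.default_rng(3)
p = rng.normal(size=32)
q = rng.normal(size=32)
same_sources = topo_similarity(p, 2.5 * p)
different_sources = topo_similarity(p, q)
```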

  4. Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508

    International Nuclear Information System (INIS)

    Hayek, A; Al Bokhaiti, M; Schwarz, M H; Boercsoek, J

    2012-01-01

    With the publication and enforcement of the standard IEC 61508 of safety related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures to the evaluation of safety integrity level (SIL) for safety-related systems, several measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and mean time to failure (MTTF) which conform to IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4-architecture (one out of four) presented in recent work. Therefore sophisticated calculations for the required parameters are introduced. The provided 1oo4-architecture represents an advanced safety architecture based on on-chip redundancy, which is 3-failure safe. This means that at least one of the four channels have to work correctly in order to trigger the safety function.
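    As a rough illustration of the kind of quantity involved, the following uses a simplified textbook approximation for the average probability of failure on demand of a 1ooN architecture, PFD_avg ≈ (lambda_DU · TI)^N / (N + 1), ignoring common-cause failures and diagnostic coverage. The failure rate and proof-test interval are hypothetical, and this is not the paper's sophisticated calculation:

```python
def pfd_avg_1oon(n_channels, lambda_du, proof_interval):
    """Simplified average probability of failure on demand for a 1ooN
    architecture of identical, independent channels:
    (lambda_DU * TI)^N / (N + 1).  Common-cause failures and diagnostic
    coverage, which dominate real IEC 61508 calculations, are ignored."""
    return (lambda_du * proof_interval) ** n_channels / (n_channels + 1)

# Hypothetical channel: dangerous undetected failure rate 1e-6 per hour,
# proof-test interval of one year (8760 h).
pfd_1oo1 = pfd_avg_1oon(1, 1e-6, 8760.0)
pfd_1oo4 = pfd_avg_1oon(4, 1e-6, 8760.0)
```

    Under these idealized assumptions the 1oo4 architecture's PFD is orders of magnitude below a single channel's, reflecting its 3-failure-safe redundancy; common-cause terms would dominate in a realistic assessment.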

  5. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probability of occuring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variance. Two sample fault trees are evaluated and several three dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
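    The kind of approximation error discussed can be seen already for a two-event AND gate (a product of independent inputs), where first-order (delta-method) propagation drops the Var·Var cross term; the numbers below are illustrative:

```python
def var_product_first_order(mx, vx, my, vy):
    """First-order (delta-method) variance of X*Y for independent X, Y."""
    return mx**2 * vy + my**2 * vx

def var_product_exact(mx, vx, my, vy):
    """Exact variance of X*Y for independent X, Y."""
    return mx**2 * vy + my**2 * vx + vx * vy

# Two independent failure probabilities feeding an AND gate.
mx = my = 1e-3
underestimate = {}
for rel_sd in (0.1, 1.0):
    vx = vy = (rel_sd * mx) ** 2
    approx = var_product_first_order(mx, vx, my, vy)
    exact = var_product_exact(mx, vx, my, vy)
    underestimate[rel_sd] = 1.0 - approx / exact
# The dropped Var*Var term is negligible for small input variances but
# reaches a 1/3 relative underestimate when the relative sd equals 1.
```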

  6. An alternative to the balance error scoring system: using a low-cost balance board to improve the validity/reliability of sports-related concussion balance testing.

    Science.gov (United States)

    Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J

    2014-05-01

    Recent guidelines advocate that sports medicine professionals use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Criterion validity testing relative to a gold standard and 7-day test-retest reliability. University biomechanics laboratory. Thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (ie, scientific grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores was determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
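    Test-retest reliability of composite balance scores is typically summarized with an intraclass correlation coefficient. A minimal one-way ICC(1,1) computation on synthetic day-1/day-2 scores (illustrative numbers, not the study's data) might look like:

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_sessions)
    score matrix, a standard index of test-retest reliability."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    subject_means = x.mean(axis=1)
    ms_between = k * np.sum((subject_means - x.mean()) ** 2) / (n - 1)
    ms_within = np.sum((x - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Synthetic day-1/day-2 path lengths: a stable per-subject ability
# (sd 10) plus session-to-session noise (sd 3) yields a high ICC.
rng = np.random.default_rng(4)
ability = rng.normal(60.0, 10.0, size=30)
sessions = ability[:, None] + rng.normal(0.0, 3.0, size=(30, 2))
reliability = icc_oneway(sessions)
```

    Published reliability studies often report two-way ICC variants instead; the one-way form is shown here only because it is the simplest to state exactly.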

  7. Análise do emprego do cálculo amostral e do erro do método em pesquisas científicas publicadas na literatura ortodôntica nacional e internacional Analysis of the use of sample size calculation and error of method in researches published in Brazilian and international orthodontic journals

    Directory of Open Access Journals (Sweden)

    David Normando

    2011-12-01

    method error in studies published in Brazil and in the United States of America. METHODS: Two major journals, according to CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education), were analyzed through a hand search: Revista Dental Press de Ortodontia e Ortopedia Facial and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO). Only papers published between 2005 and 2008 were examined. RESULTS: Most of the surveys published in both journals employed some method of error analysis, where this methodology can be applied. On the other hand, only a very small number of articles published in these journals gave any description of how sample size was calculated. This proportion was 21.1% for the journal published in the United States (AJO-DO), and was significantly lower (p = 0.008) for the journal of orthodontics published in Brazil (3.9%). CONCLUSION: Researchers and the editorial boards of both journals should show greater concern for the errors inherent in the absence of such analyses in scientific research, particularly the errors related to the use of an inadequate sample size.

  8. Identification of Hypertension Management-related Errors in a Personal Digital Assistant-based Clinical Log for Nurses in Advanced Practice Nurse Training

    Directory of Open Access Journals (Sweden)

    Nam-Ju Lee, DNSc, RN

    2010-03-01

    Conclusion: The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management of nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies.

  9. Quantifying behavioural determinants relating to health professional reporting of medication errors: a cross-sectional survey using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-11-01

    The aims of this study were to quantify the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE) and to explore any differences between respondents. A cross-sectional survey of patient-facing doctors, nurses and pharmacists within three major hospitals of Abu Dhabi, the UAE. An online questionnaire was developed based on the Theoretical Domains Framework (TDF, a framework of behaviour change theories). Principal component analysis (PCA) was used to identify components and internal reliability determined. Ethical approval was obtained from a UK university and all hospital ethics committees. Two hundred and ninety-four responses were received. Questionnaire items clustered into six components of knowledge and skills, feedback and support, action and impact, motivation, effort and emotions. Respondents generally gave positive responses for knowledge and skills, feedback and support and action and impact components. Responses were more neutral for the motivation and effort components. In terms of emotions, the component with the most negative scores, there were significant differences in terms of years registered as health professional (those registered longest most positive, p = 0.002) and age (older most positive, p Theoretical Domains Framework to quantify the behavioural determinants of health professional reporting of medication errors. • Questionnaire items relating to emotions surrounding reporting generated the most negative responses with significant differences in terms of years registered as health professional (those registered longest most positive) and age (older most positive) with no differences for gender and health profession. • Interventions based on behaviour change techniques mapped to emotions should be prioritised for development.

  10. Consideration of measurement errors in the analysis of the risk related to the exposure to ionising radiation in an occupational cohort: application to the French cohort of uranium miners

    International Nuclear Information System (INIS)

    Allodji, Rodrigue Setcheou

    2011-01-01

    In epidemiological studies, measurement errors in exposure can substantially bias the estimation of the risk associated with exposure. A broad variety of methods for measurement error correction has been developed, but they have rarely been applied in practice, probably because their ability to correct measurement error effects and their implementation are poorly understood. Another important reason is that many of the proposed correction methods require knowledge of the measurement error characteristics (size, nature, structure and distribution). The aim of this thesis is to take measurement error into account in the analysis of the risk of lung cancer death associated with radon exposure, based on the French cohort of uranium miners. The main stages were (1) to assess the characteristics (size, nature, structure and distribution) of measurement error in the French uranium miners cohort, (2) to investigate the impact of measurement error in radon exposure on the estimated excess relative risk (ERR) of lung cancer death associated with radon exposure, and (3) to compare the performance of methods for correcting these measurement error effects. The French cohort of uranium miners includes more than 5000 miners chronically exposed to radon with a follow-up duration of 30 years. Measurement errors have been characterized taking into account the evolution of uranium extraction methods and of radiation protection measures over time. A simulation study based on the French cohort of uranium miners has been carried out to investigate the effects of these measurement errors on the estimated ERR and to assess the performance of different methods for correcting these effects. Measurement error associated with radon exposure decreased over time, from more than 45% in the early 70's to about 10% in the late 80's. Its nature also changed over time, from mostly Berkson to classical type from 1983. Simulation results showed that measurement error leads to an attenuation of the ERR towards the null
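    The attenuation effect reported, and the contrast between classical and Berkson error mentioned in the abstract, can be reproduced in a few lines of simulation (with made-up exposure and error variances, not the cohort's values):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
true_beta = 0.5

def fitted_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Classical error: independent noise is added to the true exposure.
# The fitted slope is attenuated by var(X) / (var(X) + var(error)),
# here 9 / (9 + 9) = 0.5, pulling the slope from 0.5 towards 0.25.
exposure = rng.normal(10.0, 3.0, n)
outcome = true_beta * exposure + rng.normal(0.0, 2.0, n)
measured = exposure + rng.normal(0.0, 3.0, n)
beta_classical = fitted_slope(measured, outcome)

# Berkson error: the truth scatters around the assigned (e.g. group
# average) exposure, leaving the fitted slope unbiased.
assigned = rng.normal(10.0, 2.0, n)
truth = assigned + rng.normal(0.0, 2.0, n)
outcome_b = true_beta * truth + rng.normal(0.0, 2.0, n)
beta_berkson = fitted_slope(assigned, outcome_b)
```

    This linear-regression toy shows the direction of the bias only; ERR models for cohort data require the survival-analysis machinery described in the thesis.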

  11. [Prevention of Occupational Injuries Related to Hands: Calculation of Subsequent Injury Costs for the Austrian Social Occupational Insurance Institution (AUVA)].

    Science.gov (United States)

    Rauner, M S; Mayer, B; Schaffhauser-Linzatti, M M

    2015-08-01

    Occupational injuries cause short-term, direct costs as well as long-term follow-up costs over the lifetime of the casualties. Due to shrinking budgets accident insurance companies focus on cost reduction programmes and prevention measures. For this reason, a decision support system for consequential cost calculation of occupational injuries was developed for the main Austrian social occupational insurance institution (AUVA) during three projects. This so-called cost calculation tool combines the traditional instruments of accounting with quantitative methods such as micro-simulation. The cost data are derived from AUVA-internal as well as external economic data sources. Based on direct and indirect costs, the subsequent occupational accident costs from the time of an accident and, if applicable, beyond the death of the individual casualty are predicted for the AUVA, the companies in which the casualties are working, and the other economic sectors. By using this cost calculation tool, the AUVA classifies risk groups and derives related prevention campaigns. In the past, the AUVA concentrated on falling, accidents at construction sites and in agriculture/forestry, as well as commuting accidents. Currently, among others, a focus on hand injuries is given and first prevention programmes have been initiated. Hand injuries represent about 38% of all casualties with average costs of about 7,851 Euro/case. Main causes of these accidents are cutting injuries in production, agriculture, and forestry. Beside a low, but costly, number of amputations with average costs of more than 100,000 Euro/case, bone fractures and strains burden the AUVA-budget with about 17,500 and 10,500 € per case, respectively. Decision support systems such as this cost calculation tool represent necessary instruments to identify risk groups and their injured body parts, causes of accidents, and economic activities, which highly burden the budget of an injury company, and help derive

  12. Calculation of the relative chemical stabilities of proteins as a function of temperature and redox chemistry in a hot spring.

    Directory of Open Access Journals (Sweden)

    Jeffrey M Dick

    Uncovering the chemical and physical links between natural environments and microbial communities is becoming increasingly amenable owing to geochemical observations and metagenomic sequencing. At the hot spring known as Bison Pool in Yellowstone National Park, the cooling of the water in the outflow channel is associated with an increase in oxidation potential estimated from multiple field-based measurements. Representative groups of proteins whose sequences were derived from metagenomic data also exhibit an increase in average oxidation state of carbon in the protein molecules with distance from the hot-spring source. The energetic requirements of reactions to form selected proteins used in the model were computed using amino-acid group additivity for the standard molal thermodynamic properties of the proteins, and the relative chemical stabilities of the proteins were investigated by varying temperature, pH and oxidation state, expressed as activity of dissolved hydrogen. The relative stabilities of the proteins were found to track the locations of the sampling sites when the calculations included a function for hydrogen activity that increases with temperature and is higher, or more reducing, than values consistent with measurements of dissolved oxygen, sulfide and oxidation-reduction potential in the field. These findings imply that spatial patterns in the amino acid compositions of proteins can be linked, through energetics of overall chemical reactions representing the formation of the proteins, to the environmental conditions at this hot spring, even if microbial cells maintain considerably different internal conditions. Further applications of the thermodynamic calculations are possible for other natural microbial ecosystems.

  13. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    Science.gov (United States)

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world. Copyright © 2015 the authors 0270-6474/15/3514491-10$15.00/0.
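    The modeling idea, that asymmetric weighting of positive versus negative prediction errors biases learned values, can be sketched with a Rescorla-Wagner learner using two learning rates; the parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

def learned_value(alpha_pos, alpha_neg, p_reward=0.7, n_trials=2000, seed=6):
    """Rescorla-Wagner value learning with separate learning rates for
    positive and negative prediction errors; returns the final value."""
    rng = np.random.default_rng(seed)
    v = 0.0
    for _ in range(n_trials):
        outcome = float(rng.random() < p_reward)
        pe = outcome - v                        # prediction error
        v += (alpha_pos if pe > 0 else alpha_neg) * pe
    return v

# Weighting positive PEs more strongly (an "approach" bias) settles on a
# higher value than the mirror-image "avoidance" bias, for the same
# 70%-rewarded option.
v_approach = learned_value(alpha_pos=0.2, alpha_neg=0.05)
v_avoidant = learned_value(alpha_pos=0.05, alpha_neg=0.2)
```

    At equilibrium the value fluctuates around p·alpha_pos / (p·alpha_pos + (1-p)·alpha_neg), so the same reward schedule supports very different action values under the two biases.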

  14. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  15. Self-identification and empathy modulate error-related brain activity during the observation of penalty shots between friend and foe

    Science.gov (United States)

    Ganesh, Shanti; van Schie, Hein T.; De Bruijn, Ellen R. A.; Bekkering, Harold

    2009-01-01

    The ability to detect and process errors made by others plays an important role in many social contexts. The capacity to process errors is typically found to rely on sites in the medial frontal cortex. However, it remains to be determined whether responses at these sites are driven primarily by action errors themselves or by the affective consequences normally associated with their commission. Using an experimental paradigm that disentangles action errors and the valence of their affective consequences, we demonstrate that sites in the medial frontal cortex (MFC), including the ventral anterior cingulate cortex (vACC) and pre-supplementary motor area (pre-SMA), respond to action errors independent of the valence of their consequences. The strength of this response was negatively correlated with the empathic concern subscale of the Interpersonal Reactivity Index. We also demonstrate a main effect of self-identification by showing that errors committed by friends and foes elicited significantly different BOLD responses in a separate region of the middle anterior cingulate cortex (mACC). These results suggest that the way we look at others plays a critical role in determining patterns of brain activation during error observation. These findings may have important implications for general theories of error processing. PMID:19015079

  16. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
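    The qualitative difference between coherent and stochastic (Pauli) error accumulation is visible even for a single qubit without any code: coherent over-rotations add in amplitude, so the flip probability grows roughly quadratically in the number of gates, while a Pauli-twirled model adds errors in probability. This toy comparison is not the paper's repetition-code analysis:

```python
import numpy as np

eps = 0.02        # per-gate coherent over-rotation angle (radians)
n_gates = 50

# Coherent accumulation: rotation angles add, so the flip probability
# after n gates is sin^2(n*eps/2), growing ~quadratically in n.
p_coherent = np.sin(n_gates * eps / 2) ** 2

# Pauli-twirled approximation: each gate flips independently with
# probability sin^2(eps/2), so errors add ~linearly in n.
p_gate = np.sin(eps / 2) ** 2
p_pauli = 0.5 * (1 - (1 - 2 * p_gate) ** n_gates)
```

    After 50 gates the coherent flip probability exceeds the Pauli-model prediction by more than an order of magnitude, mirroring the regime in which the Pauli approximation breaks down.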

  17. Means and method of sampling flow related variables from a waterway in an accurate manner using a programmable calculator

    Science.gov (United States)

    Rand E. Eads; Mark R. Boolootian; Steven C. Hankin [Inventors]

    1987-01-01

    Abstract - A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...

  18. Errors in determination of irregularity factor for distributed parameters in a reactor core

    International Nuclear Information System (INIS)

    Vlasov, V.A.; Zajtsev, M.P.; Il'ina, L.I.; Postnikov, V.V.

    1988-01-01

    Two types of errors (measurement error and error of regulation of reactor core distributed parameters), often met during high-power-density reactor operation, are analyzed. Consideration is given to errors in determination of the irregularity factor for the radial power distribution for a hot channel, both under conditions of its minimization and under conditions where regulation of the relative power distribution is absent. The first regime is investigated by the method of statistical experiment using a program for optimization of neutron-physical calculations, taking as an example a large channel-type water-cooled graphite-moderated reactor. It is concluded that it is necessary to take into account the complex interaction of the measurement error with the error of parameter profiling over the core, both under conditions of continuous manual or automatic parameter regulation (optimization) and under conditions without regulation, namely at an a priori equalized distribution. When evaluating the error of distributed parameter control

  19. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  20. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
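The per-patient rate and interval quoted above can be reproduced with a normal approximation to a binomial proportion. The abstract does not state which interval method the authors used, so the normal approximation below is an assumption; it does, however, match the reported figures:

```python
import math

# Sketch: error rate with a 95% confidence interval from the counts quoted in
# the abstract (555 errors in 28,136 patient treatments), using a normal
# approximation to the binomial proportion (assumed, not stated in the paper).

def rate_with_ci(errors: int, total: int, z: float = 1.96):
    p = errors / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of the proportion
    return p, p - z * se, p + z * se

p, lo, hi = rate_with_ci(555, 28136)
print(f"error rate = {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")
# reproduces the reported 1.97% (95% CI, 1.81-2.14%)
```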

  1. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosions. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions (e.g., outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  2. [The approaches to factors which cause medication error--from the analyses of many near-miss cases related to intravenous medication which nurses experienced].

    Science.gov (United States)

    Kawamura, H

    2001-03-01

    Given the complexity of the intravenous medication process, systematic thinking is essential to reduce medication errors. Two thousand eight hundred 'Hiyari-Hatto' (near-miss) cases were analyzed. As a result, eight important factors that cause intravenous medication errors were clarified. In the following I summarize the systematic approach for each factor. 1. Failed communication of information: illegible handwritten orders, inaccurate verbal orders, and copying errors cause medication error. Rules must be established to prevent miscommunication. 2. Error-prone design of hardware: look-alike packaging and labeling of drugs and the poor design of infusion pumps cause errors. The human-hardware interface should be improved through error-resistant design by manufacturers. 3. Similar patient names and simultaneous surgical procedures and interventions: these factors cause patient misidentification. Automated identification devices should be introduced into health care settings. 4. Interruption in the middle of tasks: medical and clerical work should be assigned efficiently. 5. Inaccurate mixing procedures and insufficient mixing space: mixing procedures must be standardized and the layout of the working space must be examined. 6. Time pressure: the mismatch between workload and manpower should be improved by reconsidering the work to be done. 7. Lack of information about high-alert medications: the pharmacist should play a greater role in the medication process overall. 8. Poor knowledge and skills of recent graduates: training methods and tools to prevent medication errors must be developed.

  3. Results of calculations of external gamma radiation exposure rates from local fallout and the related radionuclide compositions of two hypothetical 1-MT nuclear bursts. Final report

    International Nuclear Information System (INIS)

    Hicks, H.

    1984-12-01

    This report presents data on calculated gamma radiation exposure rates and local surface deposition of related radionuclides resulting from two hypothetical 1-Mt nuclear bursts. Calculations are made of the debris from two types of bombs: one containing 235U as a fissionable material (designated oralloy), the other containing 238U (designated tuballoy). 4 references

  4. Calculation of Relative Binding Free Energy in the Water-Filled Active Site of Oligopeptide-Binding Protein A.

    Science.gov (United States)

    Maurer, Manuela; de Beer, Stephanie B A; Oostenbrink, Chris

    2016-04-15

    The periplasmic oligopeptide binding protein A (OppA) represents a well-known example of water-mediated protein-ligand interactions. Here, we perform free-energy calculations for three different ligands binding to OppA, using a thermodynamic integration approach. The tripeptide ligands share a high structural similarity (all have the sequence KXK), but their experimentally-determined binding free energies differ remarkably. Thermodynamic cycles were constructed for the ligands, and simulations conducted in the bound and (freely solvated) unbound states. In the unbound state, it was observed that the difference in conformational freedom between alanine and glycine leads to a surprisingly slow convergence, despite their chemical similarity. This could be overcome by increasing the softness parameter during alchemical transformations. Discrepancies remained in the bound state however, when comparing independent simulations of the three ligands. These difficulties could be traced to a slow relaxation of the water network within the active site. Fluctuations in the number of water molecules residing in the binding cavity occur mostly on a timescale larger than the simulation time along the alchemical path. After extensive simulations, relative binding free energies that were converged to within thermal noise could be obtained, which agree well with available experimental data.
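The thermodynamic integration approach used here estimates a free-energy difference by integrating the ensemble average ⟨∂H/∂λ⟩ along the alchemical coupling parameter λ. A minimal numerical sketch of that final integration step, using a synthetic ⟨∂H/∂λ⟩ profile rather than data from the study:

```python
# Minimal thermodynamic-integration sketch: the free-energy difference is the
# integral of <dH/dlambda> over the alchemical coupling parameter lambda,
# here evaluated with the trapezoidal rule. The <dH/dlambda> profile below is
# a synthetic linear toy, not data from the OppA simulations.

def ti_free_energy(lambdas, dhdl_averages):
    """Trapezoidal integration of <dH/dlambda> over lambda."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dhdl_averages[i] + dhdl_averages[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg

lambdas = [i / 10 for i in range(11)]            # 11 lambda windows, 0.0 .. 1.0
dhdl = [2.0 - 4.0 * lam for lam in lambdas]      # toy linear profile
print(f"Delta G = {ti_free_energy(lambdas, dhdl):.3f}")  # analytic: 2 - 2 = 0
```

In practice each ⟨∂H/∂λ⟩ value comes from a separate equilibrium simulation at fixed λ, which is where the slow water-network relaxation discussed in the abstract enters: poorly converged averages at individual λ points propagate directly into the integral.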

  5. Application of genomic densitometry for calculating the relative population of Escherichia Coli in the intestine of broiler chicks

    Directory of Open Access Journals (Sweden)

    A.R Seidavi

    2009-05-01

    In this study, the densitometry technique for calculating the relative population of Escherichia coli in various segments of the intestine of broiler chicks was evaluated. Following preparation of the intestinal contents, DNA was extracted and purified from the contents of the duodenum, jejunum, ileum and cecum. A specific polymerase chain reaction (PCR) using two pairs of primers was employed to detect Escherichia coli and the total bacteria present in the gastrointestinal tract of the chicks. Specific bands of E. coli were quantified using densitometry and Gel Proc Analyzer software based on linear regression with extrapolation. E. coli populations at different ages were also determined in various segments of the gastrointestinal tract of the chicks. The results of this experiment indicated that 0.000004%, 0.07%, 0.64% and 2.51% of the total bacteria present in the duodenum, jejunum, ileum and cecum, respectively, consisted of E. coli. E. coli also constituted 1.76%, 0.01% and 0.80% of the total intestinal bacteria of chicks at 4, 14 and 30 days of age, respectively. Furthermore, it was shown that at 4 days of age, 0.30%, 2.05% and 3.97% of the total bacteria present in the jejunum, ileum and cecum, respectively, were E. coli, and this bacterium was absent from the duodenum. At 14 days of age these figures were 0.000009%, 0.00011% and 0.08%, respectively, while at 30 days of age 0.00011%, 0.009% and 2.40% of all bacteria in the duodenum, ileum and cecum were E. coli, and this bacterium was absent from the jejunum. In conclusion, the densitometry method based on PCR results can be regarded as a useful tool for estimating the relative population of E. coli in the gastrointestinal tract of poultry.

  6. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  7. Results of calculations of external gamma radiation exposure rates from fallout and the related radionuclide compositions. Operation Tumbler-Snapper, 1952

    International Nuclear Information System (INIS)

    Hicks, H.G.

    1981-07-01

    This report presents data on calculated gamma radiation exposure rates and ground deposition of related radionuclides resulting from Events that deposited detectable radioactivity outside the Nevada Test Site complex

  8. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    ALPO is an Average Leaf Pair Opening (the concept of ALPO was previously introduced by us in Med. Phys. 28, 2220-2226 (2001)). Therefore, dose errors associated with RLP errors are larger for fields requiring small leaf gaps. For an N-field IMRT plan, we demonstrate that the total fluence error (neglecting inhomogeneities and scatter) is proportional to 1/√(N), where N is the number of fields, which slightly reduces the impact of the RLP errors of individual fields on the total fluence error. We tested and applied the analytical apparatus in the context of the commercial inverse treatment planning systems used in our clinics (Helios TM and BrainScan TM ). We determined the actual distribution of leaf-positional errors by studying MLC controller (Varian Mark II and Brainlab Novalis MLCs) log files created by the controller after each field delivery. The analytically derived relationship between fluence error and RLP errors was confirmed by numerical simulations. The equivalence of relative fluence error to relative dose error was verified by a direct dose calculation. We also experimentally verified the fidelity of the fluences derived from the log file data by comparing them to film data
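The quoted 1/√(N) scaling of the total fluence error can be checked with a toy Monte Carlo in which each field carries an independent random fluence error, a simplified stand-in for the RLP error model in the paper:

```python
import math
import random

# Monte Carlo sketch of the 1/sqrt(N) scaling quoted in the abstract: if each
# field contributes an independent random fluence error, the relative error of
# the N-field total shrinks as 1/sqrt(N). The Gaussian per-field error model
# and its magnitude are toy assumptions, not the paper's RLP distribution.

random.seed(42)

def total_fluence_rel_error(n_fields, sigma=0.05, trials=20000):
    """Standard deviation of the relative error of an n-field plan."""
    errs = []
    for _ in range(trials):
        # each field nominally delivers 1.0, perturbed by a random error
        total = sum(1.0 + random.gauss(0.0, sigma) for _ in range(n_fields))
        errs.append(total / n_fields - 1.0)
    mean = sum(errs) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / trials)

e1, e9 = total_fluence_rel_error(1), total_fluence_rel_error(9)
print(f"relative error: 1 field {e1:.4f}, 9 fields {e9:.4f}, ratio {e1/e9:.2f}")
# the ratio should be close to sqrt(9) = 3
```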

  9. Analysis of Task Types and Error Types of the Human Actions Involved in the Human-related Unplanned Reactor Trip Events

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun; Jung, Won Dea

    2008-02-01

    This report provides the task types and error types involved in the unplanned reactor trip events that have occurred during 1986 - 2006. The events that were caused by the secondary system of the nuclear power plants amount to 67 %, and the remaining 33 % was by the primary system. The contribution of the activities of the plant personnel was identified as the following order: corrective maintenance (25.7 %), planned maintenance (22.8 %), planned operation (19.8 %), periodic preventive maintenance (14.9 %), response to a transient (9.9 %), and design/manufacturing/installation (9.9%). According to the analysis of error modes, the error modes such as control failure (22.2 %), wrong object (18.5 %), omission (14.8 %), wrong action (11.1 %), and inadequate (8.3 %) take up about 75 % of all the unplanned trip events. The analysis of the cognitive functions involved showed that the planning function makes the highest contribution to the human actions leading to unplanned reactor trips, and it is followed by the observation function (23.4%), the execution function (17.8 %), and the interpretation function (10.3 %). The results of this report are to be used as important bases for development of the error reduction measures or development of the error mode prediction system for the test and maintenance tasks in nuclear power plants

  10. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    Science.gov (United States)

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
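The inflation mechanism studied here can be illustrated with a toy simulation: when two ancestral populations differ in both allele frequency and phenotype mean, a naive pooled analysis finds a spurious genotype effect that a stratified (within-population) analysis does not. All parameters below are illustrative assumptions, not values from the study:

```python
import random

# Toy illustration of population stratification: genotype is unrelated to
# phenotype within each population, but allele frequency and phenotype mean
# both differ between populations, so a naive pooled regression finds a
# spurious association. Frequencies, means, and sample sizes are assumptions.

random.seed(0)

def simulate(freq, mean, n=2000):
    g = [sum(random.random() < freq for _ in range(2)) for _ in range(n)]  # 0/1/2
    y = [mean + random.gauss(0.0, 1.0) for _ in range(n)]  # independent of g
    return g, y

def beta(g, y):
    """Slope of the least-squares regression of y on g."""
    n = len(g)
    mg, my = sum(g) / n, sum(y) / n
    cov = sum((gi - mg) * (yi - my) for gi, yi in zip(g, y)) / n
    var = sum((gi - mg) ** 2 for gi in g) / n
    return cov / var

gA, yA = simulate(freq=0.1, mean=0.0)   # ancestral population A
gB, yB = simulate(freq=0.9, mean=1.0)   # ancestral population B
naive = beta(gA + gB, yA + yB)                 # pooled: confounded by ancestry
within = 0.5 * (beta(gA, yA) + beta(gB, yB))   # stratified: ancestry controlled
print(f"naive beta = {naive:.3f}, within-population beta = {within:.3f}")
```

The pooled slope is driven entirely by the between-population differences; the stratified estimate stays near the true value of zero, which is the effect that the MDS components in the abstract are meant to absorb.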

  11. Analysis of Task Types and Error Types of the Human Actions Involved in the Human-related Unplanned Reactor Trip Events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun; Jung, Won Dea

    2008-02-15

    This report provides the task types and error types involved in the unplanned reactor trip events that have occurred during 1986 - 2006. The events that were caused by the secondary system of the nuclear power plants amount to 67 %, and the remaining 33 % was by the primary system. The contribution of the activities of the plant personnel was identified as the following order: corrective maintenance (25.7 %), planned maintenance (22.8 %), planned operation (19.8 %), periodic preventive maintenance (14.9 %), response to a transient (9.9 %), and design/manufacturing/installation (9.9%). According to the analysis of error modes, the error modes such as control failure (22.2 %), wrong object (18.5 %), omission (14.8 %), wrong action (11.1 %), and inadequate (8.3 %) take up about 75 % of all the unplanned trip events. The analysis of the cognitive functions involved showed that the planning function makes the highest contribution to the human actions leading to unplanned reactor trips, and it is followed by the observation function (23.4%), the execution function (17.8 %), and the interpretation function (10.3 %). The results of this report are to be used as important bases for development of the error reduction measures or development of the error mode prediction system for the test and maintenance tasks in nuclear power plants.

  12. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
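The calculation described here, probability of undetectable error from a code's weight distribution on a binary symmetric channel, can be sketched at toy scale. The example below uses the small Hamming(7,4) code, whose 16 codewords can be enumerated directly, rather than the length-32 shortened codes of IEEE 802.3 that are the paper's actual subject:

```python
from itertools import product

# Sketch: undetected-error probability of a binary linear code on a binary
# symmetric channel, from its weight distribution A_w:
#   P_ud(p) = sum_{w >= 1} A_w * p**w * (1-p)**(n-w)
# Illustrated with Hamming(7,4) (16 codewords, brute-force enumerable); the
# IEEE 802.3 codes in the paper are handled the same way at length 32.

N = 7
G = [0b1000110,  # systematic generator rows of Hamming(7,4) (one convention)
     0b0100101,
     0b0010011,
     0b0001111]

def weight_distribution(gen, n=N):
    """Enumerate all codewords and count them by Hamming weight."""
    A = [0] * (n + 1)
    for bits in product((0, 1), repeat=len(gen)):
        cw = 0
        for b, row in zip(bits, gen):
            if b:
                cw ^= row
        A[bin(cw).count("1")] += 1
    return A

def p_undetected(A, p, n=N):
    """An error goes undetected iff the error pattern is a nonzero codeword."""
    return sum(A[w] * p**w * (1 - p) ** (n - w) for w in range(1, n + 1))

A = weight_distribution(G)
print("weight distribution:", A)          # expect [1, 0, 0, 7, 7, 0, 0, 1]
print(f"P_ud(p=0.01) = {p_undetected(A, 0.01):.3e}")
```

At p = 1/2, the endpoint of the bit-error-rate range studied in the paper, P_ud collapses to (2^k − 1)/2^n regardless of the code's structure, which for Hamming(7,4) is 15/128.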

  13. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  14. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  15. Electronic structure and related properties of ferrocyanide ion calculated by the SCF Xα-scattered wave method

    International Nuclear Information System (INIS)

    Guenzburger, D.; Maffeo, B.; Siqueira, M.L. de

    1975-08-01

    The SCF-XαSW method is used to calculate the electronic structure of the ferrocyanide ion. Optical transitions and X-Ray photoelectron emission are obtained from the energy level scheme and compared with experimental results. The charge density in the Fe nucleus is also computed and the result is correlated with isomer shift measurements made on this and other Fe complexes for which theoretical calculations have been performed

  16. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors; ERESYE - un sistema esperto per la valutazione di incertezze correlate ad errori sperimentali sistematici

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy); Amoroso, A [Guest Researcher (Italy)

    1989-11-15

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  17. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  18. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.), as is typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed as they influence the effective truncation error
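The diffraction-theory shortcut can be sketched numerically: for a large sphere of size parameter x, the fraction of diffracted power inside an angle θ follows the Airy encircled-energy formula 1 − J0²(v) − J1²(v) with v = x·sin(θ), so the share of diffracted light a nephelometer misses below its truncation angle can be estimated without full Mie theory. The size parameters below are assumed values for illustration:

```python
import math

# Sketch: forward-truncation loss from diffraction theory alone. For a large
# sphere of size parameter x, the diffracted power within angle theta follows
# the Airy encircled-energy formula 1 - J0(v)**2 - J1(v)**2, v = x*sin(theta).
# Size parameters and the 7-degree truncation angle are illustrative values.

def bessel_j(n, v, steps=2000):
    """J_n(v) via the integral representation (midpoint rule):
    J_n(v) = (1/pi) * integral_0^pi cos(n*t - v*sin(t)) dt."""
    h = math.pi / steps
    s = sum(math.cos(n * (i + 0.5) * h - v * math.sin((i + 0.5) * h))
            for i in range(steps))
    return s * h / math.pi

def missed_fraction(x, trunc_deg=7.0):
    """Fraction of diffracted power below the truncation angle."""
    v = x * math.sin(math.radians(trunc_deg))
    return 1.0 - bessel_j(0, v) ** 2 - bessel_j(1, v) ** 2

for x in (5, 20, 50):
    print(f"size parameter {x}: missed diffracted fraction = {missed_fraction(x):.2f}")
```

The missed fraction grows with particle size, consistent with the abstract's point that forward truncation matters most for large particles.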

  19. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working-place factors, communication and training practices are the primary root causes, while omission, transposition, and quantitative mistakes are the most frequent error modes. Recommendations about domestic research on human performance problems in NPPs are suggested

  20. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  1. Calculation and Analysis of Differential Corrections for BeiDou

    Science.gov (United States)

    Yang, Sainan; Chen, Junping; Zhang, Yize

    2015-04-01

    The BeiDou Satellite Navigation System has been providing service for the Asia-Pacific area. BeiDou uses observations from a regional monitoring network to determine satellite orbits, which limits satellite orbit accuracy. The satellite clock error is produced by the time synchronization system. The time synchronization delay of the antenna device is generally obtained through prior calibration, and the residual calibration error is included in the satellite clock, which affects the prediction accuracy of the satellite clock error. In this paper, we study algorithms for BeiDou differential corrections that improve the accuracy of satellite signals and thus user positioning accuracy. In this algorithm, both pseudo-range and phase observations are used to calculate the differential corrections. Pseudo-range observations are processed to obtain an equivalent satellite clock error, which includes the satellite clock error and the orbit radial error, as well as the average projection of the orbit tangential and normal errors in combination. The epoch differences of the phase observations are processed to eliminate the ambiguities, which simplifies the algorithm and ensures relative accuracy (the variation of the corrections between epochs). Observations from more than 10 stations in China are processed, and the equivalent clock error results are analyzed, showing that the satellite UDRE is significantly reduced and user positioning accuracy improves when the equivalent clock error corrections are applied. After deducting the equivalent satellite clock error, the residuals contain the station-dependent projection differences of the satellite orbit error (mainly the tangential and normal errors). We utilize these residuals to solve for the tangential and normal orbit errors that cause the projection differences, processing the same observation data. The results show that after calculating three-dimensional corrections, the satellite UDRE does not improve significantly compared to the equivalent satellite clock error corrections and user
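The epoch-differencing step described in the abstract can be sketched in a few lines: a carrier-phase observation carries an unknown constant integer ambiguity, which cancels when consecutive epochs are differenced, leaving only the change in the range/clock term, i.e., the relative variation the corrections need. All numbers below are toy assumptions, not BeiDou data:

```python
import random

# Sketch of epoch-differencing of carrier-phase observations: the observation
# contains an unknown constant integer ambiguity N, so differencing
# consecutive epochs cancels N and leaves only the change in the geometric/
# clock term. Wavelength, ranges, noise level, and N are toy assumptions.

random.seed(1)
WAVELENGTH = 0.19  # metres, roughly a GNSS L-band carrier; illustrative only

def phase_obs(true_range, ambiguity, noise=0.002):
    """Carrier-phase observation in metres: range + N*lambda + noise."""
    return true_range + ambiguity * WAVELENGTH + random.gauss(0.0, noise)

ambiguity = 123456                                     # unknown to the processor
ranges = [20_000_000.0 + 50.0 * t for t in range(5)]   # receding 50 m per epoch
obs = [phase_obs(r, ambiguity) for r in ranges]

# Epoch differences: the ambiguity term drops out entirely.
deltas = [obs[i + 1] - obs[i] for i in range(4)]
print("epoch-differenced observations (m):", [f"{d:.3f}" for d in deltas])
# each delta is ~50 m regardless of the (large) unknown ambiguity
```

This is why the paper can use phase data for the between-epoch variation of the corrections without resolving the ambiguities, while the absolute level of the corrections still comes from the pseudo-range observations.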

  2. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection

    Science.gov (United States)

    Buschang, Rebecca Ellen

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  3. The Impact of Short-Term Science Teacher Professional Development on the Evaluation of Student Understanding and Errors Related to Natural Selection. CRESST Report 822

    Science.gov (United States)

    Buschang, Rebecca E.

    2012-01-01

    This study evaluated the effects of a short-term professional development session. Forty volunteer high school biology teachers were randomly assigned to one of two professional development conditions: (a) developing deep content knowledge (i.e., control condition) or (b) evaluating student errors and understanding in writing samples (i.e.,…

  4. The spatial distribution of errors made by rats in Hebb-Williams type mazes in relation to the spatial properties of the blind alleys

    NARCIS (Netherlands)

    Boer, S. de; Bohus, B.

    The various maze configurations in series of Hebb-Williams type mazes, which are used to measure problem-solving behaviour in rats, differ markedly in structure. The relationship between error behaviour and spatial maze structure in control rats tested in a number of pharmacological experiments is

  5. Developmental Aspects of Error and High-Conflict-Related Brain Activity in Pediatric Obsessive-Compulsive Disorder: A FMRI Study with a Flanker Task before and after CBT

    Science.gov (United States)

    Huyser, Chaim; Veltman, Dick J.; Wolters, Lidewij H.; de Haan, Else; Boer, Frits

    2011-01-01

    Background: Heightened error and conflict monitoring are considered central mechanisms in obsessive-compulsive disorder (OCD) and are associated with anterior cingulate cortex (ACC) function. Pediatric obsessive-compulsive patients provide an opportunity to investigate the development of this area and its associations with psychopathology.…

  6. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations

    NARCIS (Netherlands)

    Derks, E. M.; Zwinderman, A. H.; Gamazon, E. R.

    2017-01-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates;

  7. DETERMINATION OF ANALYTICAL CALCULATION ERROR OF MAGNETIC FIELD OF HIGH-VOLTAGE CABLE LINES WITH TWO-POINT BONDED CABLE SHIELDS CAUSED BY NON-UNIFORM CURRENT DISTRIBUTION IN THE SHIELDS

    Directory of Open Access Journals (Sweden)

    M. I. Baranov

    2017-06-01

    Full Text Available Purpose. To obtain new calculation relations for estimating the energy dissipation and electric erosion of massive main metallic electrodes in a high-voltage high-current air switch (HVCAS) at atmospheric pressure, used in the discharge circuit of a high-voltage electrophysics setup (HVES) with a powerful capacitive store of energy (CSE). Methodology. Electrophysical foundations of the technique of high voltages and large impulsive currents (LIC), scientific and technical foundations of the development and design of high-voltage heavy-current impulsive electrical devices, including HVES and powerful CSE, and methods of measuring LIC of the microsecond range in their discharge circuits. Results. On the basis of a new engineering approach, results of a calculated estimation of the energy dissipation and electric erosion of massive main metallic electrodes in the investigated HVCAS are presented. New relations are obtained for the approximate calculation of the thermal energy released in the impulsive air spark and on the working surfaces of the anode and cathode of the HVCAS. A new electrophysical concept concerning the equivalent active resistance of the impulsive air spark is introduced and mathematically defined. New formulas are obtained for the approximate calculation of the maximum depth of a single round erosion crater on the working surfaces of the main metallic electrodes of the HVCAS, and of the mass of metal ejected by magnetic pressure from this crater for one electric discharge of the powerful CSE of the HVES through the electrodes of the switch. It is shown that the radius of the indicated single crater is approximately equal to the maximum radius of the plasma channel of the spark discharge between the cathode and anode of the HVCAS. High-current experiments performed in the discharge circuit of the HVES with the powerful CSE validated a number of the obtained calculation relations used for the estimation of energy dissipation and electric erosion of metallic electrodes in

  8. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
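One simple way to test the Gaussian assumption discussed above is via squared Mahalanobis distances: if a position error covariance correctly describes Gaussian errors, those distances should average to the dimension (3), and the ratio of the observed mean to 3 yields a scale correction. This is a hedged toy sketch with synthetic numbers, not the presenter's actual validation procedure.

```python
import random

# Synthetic 3-D position errors with a deliberately mis-scaled claimed covariance.
random.seed(42)
sigmas_true = (120.0, 80.0, 40.0)                  # true 1-sigma errors per axis (m)
P_claimed = [s * s * 0.25 for s in sigmas_true]    # claimed variances: sigma 2x too small

errors = [[random.gauss(0.0, s) for s in sigmas_true] for _ in range(2000)]

# Squared Mahalanobis distance for a diagonal covariance matrix.
d2 = [sum(e * e / p for e, p in zip(err, P_claimed)) for err in errors]
mean_d2 = sum(d2) / len(d2)

# For a correct covariance the expectation is 3; here it is near 12,
# revealing the 2x sigma understatement. The ratio repairs the scale.
scale_correction = mean_d2 / 3.0
```

A mean far from 3 flags a covariance that would make any collision probability computed from it unreliable.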

  9. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    Science.gov (United States)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  10. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessment of change detection accuracy: (1) single date error matrices and (2) binary change/no change error matrices. The third, less common form of reporting, is the transition error matrix. This paper discusses the relation between these matrices.

  11. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  12. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical applications of the theory of errors in measurement, and two-dimensional errors; a bibliography is included. Appendices address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, general specifications of geodetic control surveys, the geoid, the frequency distribution curve, and computer and calculator solutions of problems.
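The "propagation of errors in computing" topic listed above has a standard first-order form: for independent errors, the relative error of a product combines the factors' relative errors in quadrature. A minimal worked example (values invented for illustration):

```python
import math

# First-order error propagation for a product f = x * y with independent errors:
# sigma_f / f = sqrt((sigma_x / x)**2 + (sigma_y / y)**2)
x, sigma_x = 100.0, 2.0    # e.g. a measured length with 2% error
y, sigma_y = 50.0, 1.5     # e.g. a measured width with 3% error

f = x * y
rel_f = math.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)
sigma_f = f * rel_f        # absolute propagated error in f
```

Here the combined relative error is about 3.6%, slightly less than the 5% a naive sum of 2% and 3% would suggest, which is the point of quadrature combination.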

  13. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Noninvasive method using multidetector CT for calculating the relative blood supply ratio of duplicated renal arteries in renal donors

    International Nuclear Information System (INIS)

    Kuwabara, Masatomo; Kim, Tonsok; Nakamura, Hironobu; Narumi, Yoshifumi; Takahashi, Satoru; Sato, Yoshinobu; Murakami, Takamichi

    2006-01-01

    The aim of this study was to evaluate the correlation between the renal artery cross-sectional area measured by multidetector computed tomography (MDCT) and the nephrogram area calculated by renal arteriography in potential living renal donors with duplicated renal arteries. Medical records of 18 patients with duplicated renal arteries who underwent both MDCT angiography and renal arteriography between 2001 and 2003 were retrospectively reviewed. All 20 kidneys were evaluated. Renal artery cross-sectional areas were measured using the workstation to which the CT data were transferred; the nephrogram areas on the digitized angiographic images were calculated using public domain software. Bland-Altman analysis was performed to compare the cross-sectional area ratio of the accessory arteries to the main renal arteries, with the ratios obtained from the nephrogram areas calculated from the arteriograms. The mean cross-sectional areas of the accessory and main renal arteries were 6.78 and 20.9 mm², respectively. The ratio of the nephrogram areas calculated from the arteriograms ranged from 0.094 to 0.809. Bland-Altman analysis showed no significant difference. It is possible to predict the supply volume of accessory renal arteries by measuring the cross-sectional area of the accessory and main renal arteries in potential living renal donors. (author)
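The Bland-Altman comparison used in this study has a simple computational core: take the paired differences between the two methods' ratio estimates, and report the mean difference (bias) with agreement limits at ±1.96 standard deviations. The paired ratios below are invented for illustration, not the study's data.

```python
import math

# Hypothetical paired blood-supply ratios from the two measurement methods.
mdct_ratio = [0.10, 0.25, 0.40, 0.55, 0.70, 0.80]   # cross-sectional-area ratios
angio_ratio = [0.12, 0.22, 0.43, 0.52, 0.73, 0.78]  # nephrogram-area ratios

diffs = [a - b for a, b in zip(mdct_ratio, angio_ratio)]
bias = sum(diffs) / len(diffs)                       # mean method difference
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
limits = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement
```

"No significant difference" in Bland-Altman terms means the bias is near zero and the individual differences fall within narrow agreement limits.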

  15. Toward calculations of the 129Xe chemical shift in Xe@C60 at experimental conditions: Relativity, correlation, and dynamics

    Czech Academy of Sciences Publication Activity Database

    Straka, Michal; Lantto, P.; Vaara, J.

    2008-01-01

    Roč. 112, č. 12 (2008), s. 2658-2668 ISSN 1089-5639 Institutional research plan: CEZ:AV0Z40550506 Keywords : NMR * theoretical calculations * role of dynamics Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.871, year: 2008

  16. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
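The error-compounding point made above can be illustrated with a common xenograft volume formula, V = L·W²/2 (an assumption here; the column's exact formula may differ). To first order, the relative error in V is the length error plus twice the width error, so a modest per-dimension caliper error triples in the volume.

```python
# Hedged illustration: first-order error bound for V = L * W**2 / 2.
L, W = 12.0, 8.0          # hypothetical caliper measurements, mm
rel_err = 0.05            # 5% relative measurement error in each dimension

V = L * W ** 2 / 2.0
# dV/V ~= dL/L + 2 * dW/W  (worst-case linear combination)
rel_err_V = rel_err + 2 * rel_err
```

A 5% measurement error per dimension thus becomes roughly a 15% error bound on the recorded tumor volume, matching the column's theme that small input errors yield large output errors.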

  17. On the Spatial and Temporal Sampling Errors of Remotely Sensed Precipitation Products

    Directory of Open Access Journals (Sweden)

    Ali Behrangi

    2017-11-01

    Full Text Available Observation with coarse spatial and temporal sampling can cause large errors in quantifying the amount, intensity, and duration of precipitation events. In this study, the errors resulting from temporal and spatial sampling of precipitation events were quantified and examined using the latest version (V4) of the Global Precipitation Measurement (GPM) mission Integrated Multi-satellite Retrievals for GPM (IMERG), which has been available since the spring of 2014. Relative mean square error was calculated at 0.1° × 0.1° every 0.5 h between the degraded (temporally and spatially) and original IMERG products. The temporal and spatial degradation was performed by producing three-hour (T3), six-hour (T6), 0.5° × 0.5° (S5), and 1.0° × 1.0° (S10) maps. The results show generally larger errors over land than ocean, especially over mountainous regions. The relative error of T6 is almost 20% larger than that of T3 over tropical land, but is smaller at higher latitudes. Over land, the relative error of T6 is larger than that of S5 across all latitudes, while T6 has a larger relative error than S10 poleward of 20°S and 20°N. Similarly, the relative error of T3 exceeds that of S5 poleward of 20°S and 20°N, but does not exceed that of S10, except at very high latitudes. Similar results are seen over ocean, but the error ratios are generally less sensitive to seasonal changes. The results also show that the spatial and temporal relative errors are not highly correlated; overall, lower correlations between the spatial and temporal relative errors are observed over ocean than over land. Quantification of such spatiotemporal effects provides additional insight for evaluation studies, especially when different products are cross-compared at a range of spatiotemporal scales.
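The temporal-degradation experiment described above reduces to a simple operation per grid cell: average the half-hourly series into coarser blocks, spread each block mean back over its half-hours, and compute the mean square error relative to the squared mean rain rate. A toy version with a synthetic series (not IMERG data):

```python
# Toy temporal degradation (T3: 3-hour blocks of a half-hourly series)
# and relative mean square error against the original, per the abstract.
rain = [0, 0, 2.0, 6.0, 3.0, 1.0, 0, 0, 0, 0, 0.5, 0.5]  # 12 half-hours = 6 h

block = 6  # 6 half-hour steps = 3 hours
degraded = []
for i in range(0, len(rain), block):
    m = sum(rain[i:i + block]) / block
    degraded.extend([m] * block)           # block mean spread over its half-hours

mse = sum((a - b) ** 2 for a, b in zip(rain, degraded)) / len(rain)
mean_rain = sum(rain) / len(rain)
relative_mse = mse / (mean_rain ** 2)      # normalized so cells are comparable
```

Short, intense events (like the 6.0 spike here) are smeared across the block, which is why coarse temporal sampling penalizes convective land regions most.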

  18. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    Science.gov (United States)

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  19. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
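The exponential decline noted above is easy to see in the simplest case, a one-sided z-test: the Type II error is β(n) = Φ(z_α − d·√n) for standardized effect size d, which falls off roughly exponentially in n. The effect size and α below are illustrative choices, not the paper's.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z_alpha = 1.645          # one-sided alpha = 0.05
d = 0.5                  # assumed standardized effect size

# Type II error for a one-sided z-test at several sample sizes.
beta = [norm_cdf(z_alpha - d * math.sqrt(n)) for n in (10, 20, 40, 80)]
```

Doubling n repeatedly drives β from about 0.53 down below 0.01, which is the practical force behind the paper's reminder to perform power calculations.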

  20. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
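The word error rate (WER) mentioned in this snippet is conventionally defined as (substitutions + deletions + insertions) divided by the number of reference words, computed via word-level edit distance. A minimal sketch of that standard calculation (not the paper's specific formula, which is truncated here):

```python
def word_error_rate(reference, hypothesis):
    """WER = word-level Levenshtein distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(r)][len(h)] / len(r)
```

For example, a single substituted word in a three-word reference gives a WER of 1/3.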

  1. Relation between calculated Lennard-Jones potential and thermal stability of Cu-based bulk metallic glasses

    International Nuclear Information System (INIS)

    Lin, T.; Bian, X.F.; Jiang, J.

    2006-01-01

    Two bulk metallic glasses, Cu60Zr30Ti10 and Cu47Ti33Zr11Ni8Si1, with a diameter of 3 mm were prepared by the copper mold casting method. Dilatometric measurements were carried out on the two glassy alloys to obtain information about the average nearest-neighbour distance r0 and the effective depth of the pair potential V0. By assuming a Lennard-Jones potential, r0 and V0 were calculated to be 0.28 nm and 0.16 eV for Cu60Zr30Ti10, and 0.27 nm and 0.13 eV for Cu47Ti33Zr11Ni8Si1, respectively. Both experiment and calculation showed that the glassy alloy Cu60Zr30Ti10 was more stable against heating than Cu47Ti33Zr11Ni8Si1.
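The Lennard-Jones form assumed in this study can be written directly in terms of the reported quantities, with the minimum at r0 and well depth V0: V(r) = V0·((r0/r)¹² − 2(r0/r)⁶), so that V(r0) = −V0. A hedged numeric check using the Cu60Zr30Ti10 values from the abstract:

```python
def lj(r, r0, V0):
    """Lennard-Jones potential parameterized by minimum position r0 and depth V0."""
    x = (r0 / r) ** 6
    return V0 * (x * x - 2.0 * x)

# Cu60Zr30Ti10 values reported above: r0 = 0.28 nm, V0 = 0.16 eV.
V_min = lj(0.28, 0.28, 0.16)   # potential at the minimum, eV
V_far = lj(0.56, 0.28, 0.16)   # weak residual attraction at 2*r0, eV
```

The deeper well of Cu60Zr30Ti10 (0.16 eV versus 0.13 eV) is what the authors connect to its higher thermal stability.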

  2. Some applications of the particle-in-a-box eigenfunctions: fast-convergent variational and related calculations

    International Nuclear Information System (INIS)

    Pathak, R.K.; Chandra, A.K.; Bhattacharyya, K.

    1994-01-01

    Eigenfunctions of the quantum mechanical particle-in-a-box problem are shown to lead to a new trigonometric expansion scheme with good convergence properties. This hitherto unexplored expansion strategy is found to be quite efficient in variational calculations and as an alternative to the Fourier series. Demonstrative computations involve a few one-dimensional models of confining potentials for bound states and pulses of various shapes in signal analysis. ((orig.))
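The expansion scheme described above rests on the fact that the particle-in-a-box eigenfunctions √(2/L)·sin(nπx/L) form an orthonormal basis on (0, L), so any well-behaved pulse can be expanded with coefficients obtained by quadrature. A hedged sketch with an example pulse f(x) = x(L − x) (the paper's test functions differ):

```python
import math

L = 1.0
N_GRID = 2000  # midpoint-rule quadrature resolution

def f(x):
    return x * (L - x)   # example pulse, zero at both walls

def basis(n, x):
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def coeff(n):
    # c_n = integral of f(x) * phi_n(x) over (0, L), midpoint rule.
    h = L / N_GRID
    return sum(f((k + 0.5) * h) * basis(n, (k + 0.5) * h)
               for k in range(N_GRID)) * h

def reconstruct(x, terms):
    return sum(coeff(n) * basis(n, x) for n in range(1, terms + 1))
```

For this smooth pulse the coefficients decay like 1/n³, so even nine terms reproduce f(0.5) = 0.25 to better than one part in a thousand, illustrating the fast convergence the abstract claims.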

  3. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  4. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  5. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  6. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
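The random-setup-error recalculation described above amounts, in its simplest 1-D form, to convolving a dose profile with a Gaussian kernel whose width is the setup error: the penumbra blurs and the minimum dose inside the target falls. This is a hedged toy version with an invented profile and sigma, not the FastPlan calculation.

```python
import math

def gaussian_kernel(sigma_mm, step_mm, half_width):
    """Normalized discrete Gaussian kernel on a uniform grid."""
    ks = [math.exp(-0.5 * (i * step_mm / sigma_mm) ** 2)
          for i in range(-half_width, half_width + 1)]
    s = sum(ks)
    return [k / s for k in ks]

# 1-mm grid: a 20-mm flat field (dose 1.0) with zero dose outside.
dose = [0.0] * 10 + [1.0] * 20 + [0.0] * 10

kernel = gaussian_kernel(sigma_mm=3.0, step_mm=1.0, half_width=9)
blurred = []
for i in range(len(dose)):
    acc = 0.0
    for j, k in enumerate(kernel):
        idx = i + j - 9
        if 0 <= idx < len(dose):
            acc += k * dose[idx]
    blurred.append(acc)

target = slice(10, 30)                  # the planned target region
min_target_dose = min(blurred[target])  # drops below 1.0 at the target edges
```

The center of a sufficiently large target keeps full dose while the edges lose coverage, which is why the abstract finds small target volumes most sensitive to setup error.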

  7. Yaw Angle Error Compensation for Airborne 3-D SAR Based on Wavenumber-domain Subblock

    Directory of Open Access Journals (Sweden)

    Ding Zhen-yu

    2015-08-01

    Full Text Available Airborne array-antenna SAR is used to obtain three-dimensional images; however, it is impaired by motion errors. In particular, rotation error changes the relative positions of the different antenna units and strongly affects image quality. Unfortunately, presently available algorithms cannot compensate for the rotation error. In this study, an airborne array-antenna SAR three-dimensional imaging model is discussed along with the effect of rotation errors, more specifically the yaw angle error. The analysis reveals that along- and cross-track wavenumbers can be obtained from the echo phase; when used to calculate the range error, these wavenumbers lead to a result that is independent of the target position, eliminating the error's spatial variance. Therefore, a wavenumber-domain subblock compensation method is proposed that computes the range error in subblocks of the along- and cross-track 2-D wavenumber domain and precisely compensates for the error in the space domain. Simulations show that the algorithm can compensate for the effect of the yaw angle error.

  8. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  9. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; when it comes to a game's core design, however, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, a challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that applying Human Error to game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  10. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the extra added qubits to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.
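A companion to the overestimation point above: the textbook (approximate) bound for quantum phase estimation says that to obtain n accurate bits of phase with failure probability at most ε, one should use t = n + ⌈log₂(2 + 1/(2ε))⌉ counting qubits. The paper's exact formula refines this; the sketch below implements only the standard bound.

```python
import math

def qpe_qubits(n_bits, eps):
    """Standard upper bound on counting qubits for n accurate bits
    with failure probability at most eps (Nielsen & Chuang style)."""
    return n_bits + math.ceil(math.log2(2.0 + 1.0 / (2.0 * eps)))

t = qpe_qubits(8, 0.01)   # 8 accurate bits, <= 1% failure probability
```

For 8 bits at 1% failure this bound asks for 14 qubits; an exact error formula, as the abstract argues, can show when fewer suffice.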

  11. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
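The product-spread procedure described in this abstract can be sketched in a few lines: keep the products falling within ±50% of the base (GPCP-like) estimate, take their standard deviation s as the estimated bias error, and report s/m as the relative error. The numbers below are invented, not GPCP data.

```python
import math

base = 3.0                               # base mean precipitation estimate, mm/day
products = [2.8, 3.1, 3.4, 2.6, 6.0]     # 6.0 lies outside +/-50% and is excluded

kept = [p for p in products if 0.5 * base <= p <= 1.5 * base]
m = sum(kept) / len(kept)                # mean of included products
s = math.sqrt(sum((p - m) ** 2 for p in kept) / len(kept))  # spread = bias error
relative_bias_error = s / m              # the s/m quantity mapped in the paper
```

Here the outlier is dropped and the remaining spread gives a relative bias error near 10%, comparable to the western-Pacific ITCZ values quoted above.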

  12. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, so extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and closely related to 12 error factors. A relational model between the error-related items and the error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and could promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology that can automatically identify medical error factors.

  13. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  14. Effect of Relative Marker Movement on the Calculation of the Foot Torsion Axis Using a Combined Cardan Angle and Helical Axis Approach

    Directory of Open Access Journals (Sweden)

    Eveline S. Graf

    2012-01-01

    Full Text Available The two main movements occurring between the forefoot and rearfoot segment of a human foot are flexion at the metatarsophalangeal joints and torsion in the midfoot. The location of the torsion axis within the foot is currently unknown. The purpose of this study was to develop a method based on Cardan angles and the finite helical axis approach to calculate the torsion axis without the effect of flexion. As the finite helical axis method is susceptible to error due to noise with small helical rotations, a minimal amount of rotation was defined in order to accurately determine the torsion axis location. Using simulation, the location of the axis based on data containing noise was compared to the axis location of data without noise with a one-sample t-test and Fisher's combined probability score. When using only data with helical rotation of seven degrees or more, the location of the torsion axis based on the data with noise was within 0.2 mm of the reference location. Therefore, the proposed method allowed an accurate calculation of the foot torsion axis location.
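The seven-degree guard used in this study can be made concrete: the helical rotation magnitude is recoverable from a rotation matrix via θ = arccos((trace(R) − 1)/2), and frames whose rotation falls below the threshold are excluded from the axis calculation. A hedged sketch (the rotations below are synthetic, not marker data):

```python
import math

def rotation_angle_deg(R):
    """Rotation magnitude of a 3x3 rotation matrix: acos((trace - 1) / 2)."""
    trace = R[0][0] + R[1][1] + R[2][2]
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp against round-off
    return math.degrees(math.acos(c))

def rot_z(deg):
    """Rotation about the z-axis by the given angle, as a 3x3 matrix."""
    a = math.radians(deg)
    return [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a),  math.cos(a), 0.0],
            [0.0,          0.0,         1.0]]

THRESHOLD_DEG = 7.0  # minimum helical rotation accepted, per the abstract
usable = [d for d in (2.0, 5.0, 8.0, 12.0)
          if rotation_angle_deg(rot_z(d)) >= THRESHOLD_DEG]
```

Small helical rotations make the finite helical axis direction and location numerically unstable under marker noise, which is why only the larger rotations are retained.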

  15. Effect of Relative Marker Movement on the Calculation of the Foot Torsion Axis Using a Combined Cardan Angle and Helical Axis Approach

    Science.gov (United States)

    Graf, Eveline S.; Wright, Ian C.; Stefanyshyn, Darren J.

    2012-01-01

    The two main movements occurring between the forefoot and rearfoot segment of a human foot are flexion at the metatarsophalangeal joints and torsion in the midfoot. The location of the torsion axis within the foot is currently unknown. The purpose of this study was to develop a method based on Cardan angles and the finite helical axis approach to calculate the torsion axis without the effect of flexion. As the finite helical axis method is susceptible to error due to noise with small helical rotations, a minimal amount of rotation was defined in order to accurately determine the torsion axis location. Using simulation, the location of the axis based on data containing noise was compared to the axis location of data without noise with a one-sample t-test and Fisher's combined probability score. When using only data with helical rotation of seven degrees or more, the location of the torsion axis based on the data with noise was within 0.2 mm of the reference location. Therefore, the proposed method allowed an accurate calculation of the foot torsion axis location. PMID:22666303

  16. Resistive wall modes and error field amplification

    International Nuclear Information System (INIS)

    Boozer, Allen H.

    2003-01-01

    Resistive wall modes and the rapid damping of plasma rotation by the amplification of magnetic field errors are related physical phenomena that affect the performance of the advanced tokamak and spherical torus plasma confinement devices. Elements of our understanding of these phenomena and the code that is used to design the major experimental facilities are based on the electrical circuit representation of the response of the plasma to perturbations. Although the circuit representation of the plasma may seem heuristic, this representation can be rigorously obtained using Maxwell's equations and linearity for plasmas that evolve on a disparate time scale from that of external currents. These and related results are derived. In addition methods are given for finding the plasma information that the circuit representation requires using post-processors for codes that calculate perturbed plasma equilibria

  17. PSYCRODATA: software that calculates air humidity characteristics and relates them to variations in the environmental gamma background

    International Nuclear Information System (INIS)

    Alonso A, D.; Dominguez L, O.; Ramos V, O.; Caveda R, C.A.; Capote F, E.; Dominguez G, A.; Valdes S, E.; Rodriguez V, E.

    2006-01-01

    The computer tool 'Psycrodata' was developed to calculate the humidity characteristics of air from the humidity and temperature measurements carried out at the western post of the National Network of Environmental Radiological Surveillance. Among its facilities, 'Psycrodata' stores the obtained information in a database, which simplifies report generation. In addition, the user can select different calculation approaches and specify the psychrometric coefficient to be used, so that each station can have a psychrometric chart suited to its instrumentation and to the characteristics of its site. The tool can also import text files for later plotting, which made it possible to correlate the absorbed dose rate in air due to environmental gamma radiation not only with temperature and humidity but also with water vapor pressure, dew point temperature, and saturation deficit. (Author)
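
    The humidity characteristics named in this record (vapor pressure, dew point, saturation deficit) can be computed from temperature and relative humidity with a standard psychrometric approximation. The sketch below uses the Magnus formula; this is an assumption, since the record does not state which selectable calculation approach Psycrodata implements.

    ```python
    import math

    def saturation_vapor_pressure_hpa(t_c):
        """Magnus approximation over water (hPa), a standard formula."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    def humidity_characteristics(t_c, rh_percent):
        """Return (vapor pressure, dew point, saturation deficit)
        for air at t_c degrees C and rh_percent relative humidity."""
        es = saturation_vapor_pressure_hpa(t_c)
        e = es * rh_percent / 100.0                 # actual vapor pressure (hPa)
        gamma = math.log(e / 6.112)
        t_dew = 243.12 * gamma / (17.62 - gamma)    # dew point (deg C)
        deficit = es - e                            # saturation deficit (hPa)
        return e, t_dew, deficit

    e, td, d = humidity_characteristics(25.0, 60.0)
    print(f"e={e:.2f} hPa, Td={td:.2f} C, deficit={d:.2f} hPa")
    ```

    By construction, the vapor pressure and the saturation deficit always sum to the saturation vapor pressure at the air temperature.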

  18. Development of CHARMM-Compatible Force-Field Parameters for Cobalamin and Related Cofactors from Quantum Mechanical Calculations.

    Science.gov (United States)

    Pavlova, Anna; Parks, Jerry M; Gumbart, James C

    2018-02-13

    Corrinoid cofactors such as cobalamin are used by many enzymes and are essential for most living organisms. Therefore, there is broad interest in investigating cobalamin-protein interactions with molecular dynamics simulations. Previously developed parameters for cobalamins are based mainly on crystal structure data. Here, we report CHARMM-compatible force field parameters for several corrinoids developed from quantum mechanical calculations. We provide parameters for corrinoids in three oxidation states, Co3+, Co2+, and Co1+, and with various axial ligands. Lennard-Jones parameters for the cobalt center in the Co(II) and Co(I) states were optimized using a helium atom probe, and partial atomic charges were obtained with a combination of natural population analysis (NPA) and restrained electrostatic potential (RESP) fitting approaches. The Force Field Toolkit was used to optimize all bonded terms. The resulting parameters, determined solely from calculations of cobalamin alone or in water, were then validated by assessing their agreement with density functional theory geometries and by analyzing molecular dynamics simulation trajectories of several corrinoid proteins for which X-ray crystal structures are available. In each case, we obtained excellent agreement with the reference data. In comparison to previous CHARMM-compatible parameters for cobalamin, we observe a better agreement for the fold angle and lower RMSD in the cobalamin binding site. The approach described here is readily adaptable for developing CHARMM-compatible force-field parameters for other corrinoids or large biomolecules.

  19. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and in improving education related to the quality of service delivery, to minimize clinical errors. This will increase fixed costs, especially in the short term. This paper draws the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.

  20. Human errors and work performance in a nuclear power plant control room: associations with work-related factors and behavioral coping

    International Nuclear Information System (INIS)

    Kecklund, L.J.; Svenson, O.

    1997-01-01

    The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance, as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, decreased aspiration level for work performance quality, and increased use of delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. (Author)

  1. [Evaluation of administration errors of injectable drugs in neonatology].

    Science.gov (United States)

    Cherif, A; Sayadi, M; Ben Hmida, H; Ben Ameur, K; Mestiri, K

    2015-11-01

    Use of injectable drugs in newborns represents more than 90% of prescriptions and requires special precautions in order to ensure safety and efficiency. The aim of this study is to gather errors relating to the administration of injectable drugs and to suggest corrective actions. This descriptive, cross-sectional study evaluated 300 injectable drug administrations in a neonatology unit; 261 of them contained an error. Data were collected by direct observation of the administration act. The errors observed were: an inappropriate mixture (2.6% of cases); an incorrect delivery rate (33.7% of cases); incorrect dilutions (26.7% of cases); errors in calculating the dose to be injected (16.7% of cases); errors while sampling small volumes (6.3% of cases); and errors or omissions in the administration schedule (1% of cases). These data enabled us to evaluate the administration of injectable drugs in neonatology. The different types of errors observed could be a source of therapeutic inefficiency, extended lengths of stay, or iatrogenic drug events. Following these observations, corrective actions were undertaken by pharmacists, consisting of organizing training sessions for the nursing staff and developing an explanatory guide for the dilution and administration of injectable medicines, which was made available to the clinical service. Collaborative doctor-nurse-pharmacist strategies can help reduce errors in the medication process, especially during administration. They improve the use of injectable drugs, offering more safety and better efficiency, and contribute to guaranteeing optimal therapy for patients. Copyright © 2015. Published by Elsevier Masson SAS.

  2. SU-C-BRC-05: Monte Carlo Calculations to Establish a Simple Relation of Backscatter Dose Enhancement Around High-Z Dental Alloy to Its Atomic Number

    Energy Technology Data Exchange (ETDEWEB)

    Utsunomiya, S; Kushima, N; Katsura, K; Tanabe, S; Hayakawa, T; Sakai, H; Yamada, T; Takahashi, H; Abe, E; Wada, S; Aoyama, H [Niigata University, Niigata (Japan)

    2016-06-15

    Purpose: To establish a simple relation between the backscatter dose enhancement around a high-Z dental alloy in head and neck radiation therapy and its average atomic number, based on Monte Carlo calculations. Methods: The PHITS Monte Carlo code was used to calculate dose enhancement, quantified by the backscatter dose factor (BSDF). The accuracy of the beam modeling with PHITS was verified by comparison with basic measured data, namely percentage depth doses (PDDs) and dose profiles. In the simulation, a 1 cm cube of high-Z alloy was embedded in a tough water phantom irradiated by a 6-MV (nominal) X-ray beam with a 10 cm × 10 cm field size on a Novalis TX (Brainlab). Ten different high-Z materials (Al, Ti, Cu, Ag, Au-Pd-Ag, I, Ba, W, Au, Pb) were considered. The accuracy of the calculated BSDF was verified by comparison with data measured by Gafchromic EBT3 films placed 0 to 10 mm away from a high-Z alloy (Au-Pd-Ag). We derived an approximate equation relating the BSDF and the range of backscatter to the average atomic number of the high-Z alloy. Results: The calculated BSDF showed excellent agreement with that measured by Gafchromic EBT3 films 0 to 10 mm from the high-Z alloy. We found simple linear relations of both the BSDF and the range of backscatter to the average atomic number of the dental alloys. The latter relation is explained by the fact that the energy spectrum of backscattered electrons depends strongly on the average atomic number. Conclusion: We found a simple relation between the backscatter dose enhancement around high-Z alloys and their average atomic number based on Monte Carlo calculations. This work provides a simple and useful method to estimate backscatter dose enhancement from dental alloys and the corresponding optimal thickness of a dental spacer to prevent mucositis effectively.

  3. Lattice energy calculation - A quick tool for screening of cocrystals and estimation of relative solubility. Case of flavonoids

    Science.gov (United States)

    Kuleshova, L. N.; Hofmann, D. W. M.; Boese, R.

    2013-03-01

    Cocrystals (or multicomponent crystals) have physico-chemical properties that are different from crystals of pure components. This is significant in drug development, since the desired properties, e.g. solubility, stability and bioavailability, can be tailored by binding two substances into a single crystal without chemical modification of an active component. Here, the FLEXCRYST program suite, implemented with a data mining force field, was used to estimate the relative stability and, consequently, the relative solubility of cocrystals of flavonoids vs their pure crystals, stored in the Cambridge Structural Database. The considerable potency of this approach for in silico screening of cocrystals, as well as their relative solubility, was demonstrated.

  4. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow in which tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers the execution control and data exchange rules that should be imposed by the integration environment when an error is encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate result data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The work also notes the cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user.

  5. Error compensation of single-antenna attitude determination using GNSS for Low-dynamic applications

    Science.gov (United States)

    Chen, Wen; Yu, Chao; Cai, Miaomiao

    2017-04-01

    The GNSS-based single-antenna pseudo-attitude determination method has attracted more and more attention in the field of high-dynamic navigation due to its low cost, low system complexity, and absence of temporally accumulated errors. Related research indicates that this method can be an important complement, or even an alternative, to traditional sensors for general accuracy requirements (such as small UAV navigation). Its application to low-dynamic carriers has only just started. Different from the traditional multi-antenna attitude measurement technique, the pseudo-attitude determination method calculates the rotation angle of the carrier trajectory relative to the Earth. It therefore inevitably contains some deviations from the real attitude angle. In low-dynamic applications these deviations are particularly noticeable and may not be ignored. The causes of the deviations can be roughly classified into three categories: measurement error, offset error, and lateral error. Empirical correction strategies for the former two errors have been proposed in previous studies, but they lack theoretical support. In this paper, we provide a quantitative description of the three types of error and discuss the related error compensation methods. Vehicle and shipborne experiments were carried out to verify the feasibility of the proposed correction methods. Keywords: Error compensation; Single-antenna; GNSS; Attitude determination; Low-dynamic
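
    The core idea of pseudo-attitude determination, deriving trajectory angles from the GNSS velocity vector rather than from multiple antennas, can be sketched as follows. This is a minimal illustration assuming an ENU velocity input; the compensation of the three error types discussed in the record is not modeled.

    ```python
    import math

    def pseudo_attitude(v_east, v_north, v_up):
        """Heading and flight-path (pitch-like) angles, in degrees, of
        the carrier trajectory relative to the Earth, from an
        east-north-up velocity vector."""
        heading = math.degrees(math.atan2(v_east, v_north)) % 360.0
        ground_speed = math.hypot(v_east, v_north)
        flight_path = math.degrees(math.atan2(v_up, ground_speed))
        return heading, flight_path

    # Moving north-east and slightly climbing:
    h, fp = pseudo_attitude(5.0, 5.0, 1.0)
    print(round(h, 1), round(fp, 1))  # 45.0 8.0
    ```

    At low speeds the velocity direction is dominated by noise and by the offset between trajectory and body axes, which is exactly why the compensation methods of the paper matter in the low-dynamic case.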

  6. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 and Department of Physics, Oklahoma State University, Stillwater, Oklahoma 74078-3072 (United States); Johnson, Randall; Larson, Gary [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 (United States)

    2016-06-15

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. 
The application of FMEA framework and the implementation of an ongoing error tracking system at their
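
    The risk priority number used in the FMEA above is simply the product of the three estimated scores. A minimal sketch, with hypothetical failure modes and ratings not taken from the study:

    ```python
    def rpn(occurrence, severity, detectability):
        """Risk priority number: product of the occurrence, severity,
        and detectability scores, each conventionally rated 1-10."""
        for score in (occurrence, severity, detectability):
            if not 1 <= score <= 10:
                raise ValueError("FMEA scores are rated on a 1-10 scale")
        return occurrence * severity * detectability

    # Hypothetical treatment-planning failure modes (illustrative only):
    failure_modes = {
        "wrong image fusion":    rpn(3, 8, 4),
        "beam arrangement typo": rpn(2, 7, 2),
        "plan export mismatch":  rpn(4, 6, 5),
    }
    # Rank failure modes by RPN, highest risk first.
    for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
        print(f"{mode}: RPN={score}")
    ```

    As the record describes, the occurrence scores can later be re-estimated from an internal error-tracking database and the RPNs recomputed.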

  7. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    International Nuclear Information System (INIS)

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-01-01

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. 
The application of FMEA framework and the implementation of an ongoing error tracking system at their

  8. Application of nomograms to calculate radiography parameters

    International Nuclear Information System (INIS)

    Voronin, S.A.; Orlov, K.P.; Petukhov, V.I.; Khomchenkov, Yu.F.; Meshalkin, I.A.; Grachev, A.V.; Akopov, V.S.; Majorov, A.N.

    1979-01-01

    A method for calculating radiography parameters with the help of nomograms, usable for practical application under laboratory and industrial conditions, is proposed. Nomograms are developed for determining the following parameters: relative sensitivity, overall image unsharpness, permissible difference in blackening density between the centre and edge of the picture (ΔD), picture contrast, focal distance, item thickness, radiation-physical parameter, dose build-up factor, groove dimension, and error. An experimental test has been carried out to evaluate the results obtained with the nomograms. Steel items from 25 to 79 mm thick were tested, with 191Ir used as the source. Comparison of calculated and experimental results has shown a discrepancy in sensitivity values, caused by the a priori ΔDsub(min) index and the error inherent in graphical plotting on a nomogram

  9. Evaluation of three algorithms to calculate the relative renal function with {sup 99}Tc-DTPA; Evaluation de trois algorithmes pour calculer la fonction renale relative au {sup 99}Tc-DTPA

    Energy Technology Data Exchange (ETDEWEB)

    Charfeddine, S.; Maaloul, M.; Kallel, F.; Chtourou, K.; Guermazi, F. [EPS Habib Bourguiba, Service de Medecine Nucleaire, Sfax (Tunisia)

    2006-06-15

    The aim of our study is to assess the reproducibility and accuracy of three algorithms for determining the relative function of each kidney with {sup 99m}Tc-DTPA. Methods: a prospective study was carried out in volunteer patients. Reproducibility was studied in 11 patients who underwent two examinations with {sup 99m}Tc-DTPA. Accuracy was evaluated in 35 patients who had an additional scintigraphy with {sup 99m}Tc-DMSA taken as the reference. To determine the relative renal function with {sup 99m}Tc-DTPA, three algorithms using various background subtraction methods and time intervals were applied. Results and conclusion: the integral method was the most reproducible and accurate. It was little influenced by the choice of the time interval. The reproducibility and accuracy of the Patlak method were worse, especially in cases of renal insufficiency or hydronephrosis. A high background and poor counting statistics explain why the Patlak method was less powerful with {sup 99m}Tc-DTPA than with {sup 99m}Tc-MAG3. The slopes method should no longer be recommended. (author)
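
    The integral method compared in this record can be sketched roughly as follows. This is a hypothetical illustration: the background handling and the synthetic curves are assumptions, not the authors' protocol, and the renogram counts are assumed to be already restricted to the chosen uptake interval.

    ```python
    import numpy as np

    def relative_renal_function(left_counts, right_counts, left_bg, right_bg):
        """Integral-method sketch: sum the background-subtracted renogram
        counts of each kidney over the uptake window and express each
        side as a percentage of the total."""
        left = np.clip(left_counts - left_bg, 0, None).sum()
        right = np.clip(right_counts - right_bg, 0, None).sum()
        total = left + right
        return 100.0 * left / total, 100.0 * right / total

    # Synthetic illustration: the left kidney accumulates twice the
    # background-corrected activity of the right.
    t = np.linspace(0, 120, 121)
    bg = np.full_like(t, 10.0)
    left_c = bg + 2.0 * t
    right_c = bg + 1.0 * t
    lf, rf = relative_renal_function(left_c, right_c, bg, bg)
    print(round(lf, 1), round(rf, 1))  # 66.7 33.3
    ```

    The choice of background regions and window is exactly what differed between the compared algorithms and what drove their reproducibility.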

  10. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
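
    The contrast between the TER and LER assumptions can be made concrete with a minimal simulation of two cues trained in compound (illustrative parameters; these are not the specific models compared in the paper):

    ```python
    def ter_update(weights, cues, outcome, lr=0.1):
        """Total error reduction (Rescorla-Wagner style): every present
        cue learns from the SAME discrepancy, outcome minus the summed
        prediction of all cues on the trial."""
        error = outcome - sum(weights[c] for c in cues)
        for c in cues:
            weights[c] += lr * error
        return weights

    def ler_update(weights, cues, outcome, lr=0.1):
        """Local error reduction: each cue learns from its OWN
        discrepancy, outcome minus that cue's individual prediction."""
        for c in cues:
            weights[c] += lr * (outcome - weights[c])
        return weights

    w_ter = {"A": 0.0, "B": 0.0}
    w_ler = {"A": 0.0, "B": 0.0}
    for _ in range(200):
        ter_update(w_ter, ("A", "B"), 1.0)
        ler_update(w_ler, ("A", "B"), 1.0)

    # Under TER the cues share the outcome, each asymptoting near 0.5;
    # under LER each cue individually asymptotes near 1.0.
    print(round(w_ter["A"], 2), round(w_ler["A"], 2))  # 0.5 1.0
    ```

    Cue-competition phenomena such as blocking follow from the shared error term in TER, which is why the models make different predictions for compound training.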

  11. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    The aim was to validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder system. The accuracy of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose.

  12. Influence of the turbulence typing scheme upon the cumulative frequency distribution of the calculated relative concentrations for different averaging times

    Energy Technology Data Exchange (ETDEWEB)

    Kretzschmar, J.G.; Mertens, I.

    1984-01-01

    Over the period 1977-1979, hourly meteorological measurements at the Nuclear Energy Research Centre, Mol, Belgium, and simultaneous synoptic observations at the nearby military airport of Kleine Brogel were compiled as input data for a bi-Gaussian dispersion model. The available information was first of all used to determine hourly stability classes in ten widely used turbulent diffusion typing schemes. Systematic correlations between different systems were rare. Twelve different combinations of diffusion typing scheme and dispersion parameters were then used for calculating cumulative frequency distributions of 1 h, 8 h, 16 h, 3 d, and 26 d average ground-level concentrations at receptors located respectively at 500 m, 1 km, 2 km, 4 km and 8 km from a continuous ground-level release and an elevated release at 100 m height. Major differences were noted in the extreme values and the higher percentiles as well as in the annual mean concentrations. These differences are almost entirely due to differences in the numerical values (as a function of distance) of the various sets of dispersion parameters actually in use for impact assessment studies. Dispersion parameter sets giving the lowest normalized ground-level concentration values for ground-level releases give the highest results for elevated releases, and vice versa. While it was illustrated once again that the applicability of a given set of dispersion parameters is restricted by the specific conditions under which the set was derived, it was also concluded that systematic experimental work to validate certain assumptions is urgently needed.

  13. Accelerated line-by-line calculations for the radiative transfer of trace gases related to climate studies

    International Nuclear Information System (INIS)

    Clough, S.A.

    1993-01-01

    In the present study we examine the effects of including carbon dioxide, ozone, methane, and the halocarbons, in addition to water vapor, in the radiating atmosphere. The study has focused on two principal issues: the effect on the spectral fluxes and cooling rates of carbon dioxide, ozone and the halocarbons at 1990 concentration levels, and the change in fluxes and cooling rates as a consequence of the anticipated ten-year change in the profiles of these species. For the latter study the water vapor profiles have been taken as invariant in time. The line-by-line radiative calculations using LBLRTM (Line-By-Line Radiative Transfer Model) have been performed for tropical (TRP), mid-latitude winter (MLW) and mid-latitude summer (MLS) model atmospheres. The halocarbons considered in the present study are CCl4, CFC-11, CFC-12 and CFC-22. In addition to considering the radiative effects of carbon dioxide at 355 ppM, the assumed current level, we have also obtained results for doubled carbon dioxide at 710 ppM. An important focus of the current research effort is the effect of the ozone depletion profile on atmospheric radiative effects

  14. The use of phenological data to calculate chilling units in Olea europaea L. in relation to the onset of reproduction

    Science.gov (United States)

    Orlandi, F.; Fornaciari, M.; Romano, B.

    2002-02-01

    The aim of this study was to develop a practical method to evaluate the effective relationship between the amount of winter chilling and the response expressed as the spring reproductive re-starting dates in the olive ( Olea europaea L.). Two olive cultivars growing in a special olive orchard in Umbria (central Italy) were studied over a 3-year period (1998-2000): the cultivar Ascolana, typical of central Italy, and the cultivar Giarraffa, typical of southern Italy. The spring reproductive re-starts were assessed using data from detailed phenological observations made on 60 trees of each cultivar in an effort to establish the exact date of reproductive bud swelling. The chilling phenomenon was evaluated by using 341 functions derived from a formula developed by researchers at Utah State University to calculate chilling units. The mathematical functions are defined, and show the very close relationship between the amount of winter chilling and the spring reproductive response in the two cultivars in the orchard studied. The results can be used to define the relationship between local climate and plant development, and the mathematical approach can be used to draw maps that can show the suitability of different cultivars on the basis of local climatic conditions.
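
    A chilling-unit computation of the kind this study builds on can be sketched with the classic Utah-model weights. The Richardson-style weights below are an assumption for illustration; the study derived 341 variant functions from the Utah State University formula.

    ```python
    def utah_chill_unit(t_c):
        """Chill-unit weight for one hour at temperature t_c (deg C),
        using classic Utah-model-style thresholds: moderate cold counts
        fully, mild cold counts partially, warmth subtracts units."""
        if t_c <= 1.4:
            return 0.0
        if t_c <= 2.4:
            return 0.5
        if t_c <= 9.1:
            return 1.0
        if t_c <= 12.4:
            return 0.5
        if t_c <= 15.9:
            return 0.0
        if t_c <= 18.0:
            return -0.5
        return -1.0

    def accumulated_chilling(hourly_temps_c):
        """Sum hourly chill units over the winter period."""
        return sum(utah_chill_unit(t) for t in hourly_temps_c)

    # A cool day accumulates units; a warm spell subtracts them.
    print(accumulated_chilling([5.0] * 24))   # 24.0
    print(accumulated_chilling([17.0] * 24))  # -12.0
    ```

    Varying the thresholds and weights of such a function, and regressing the accumulated total against observed bud-swelling dates, is the kind of screening the study performed across cultivars.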

  15. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  16. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  17. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors are a major contributor to the risk of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  18. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, under-dosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame-game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  19. Numerical shoves and countershoves in electron transport calculations

    International Nuclear Information System (INIS)

    Filippone, W.L.

    1986-01-01

    The justification for applying the relatively complex (compared to S_n) streaming ray (SR) algorithm to electron transport problems is its potential for doing rapid and accurate calculations. Because of the Lagrangian treatment of the cell-uncollided electrons, the only significant sources of error are the numerical treatment of the scattering kernel and the spatial differencing scheme used for the cell-collided electrons. Considerable progress has been made in reducing the former source of error. If one is willing to pay the price, the latter source of error can be reduced to any desired level by refining the mesh size or by using high-order differencing schemes. Here the method of numerical shoves and countershoves is introduced, which reduces spatial differencing errors using relatively little additional computational effort

  20. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  1. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332 or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. 
There was some relationship to

  2. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...
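    The two issues above, negative sampling results from a Gaussian and error amplification through non-linear relations, can be made concrete with a small Monte Carlo sketch. The 50% relative error and the cubic derived variable are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # An inherently positive physical parameter with a large (50%) relative error.
    mean, rel_err = 1.0, 0.5

    # Gaussian sampling produces unphysical negative values...
    gauss = rng.normal(mean, rel_err * mean, n)
    frac_negative = (gauss < 0).mean()

    # ...while a lognormal with the same mean and variance cannot.
    sigma2 = np.log(1 + rel_err**2)
    logn = rng.lognormal(np.log(mean) - sigma2 / 2, np.sqrt(sigma2), n)

    # A non-linear derived variable y = x**3 amplifies the relative error.
    y = logn**3
    print(f"negative Gaussian samples: {frac_negative:.1%}")
    print(f"relative error of x:       {logn.std() / logn.mean():.2f}")
    print(f"relative error of y = x^3: {y.std() / y.mean():.2f}")
    ```

    Working with the full probability distributions, rather than a mean and a symmetric "error", is what keeps both effects under control.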

  3. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  4. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  5. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  6. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 x 10 x 8 cm3 target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
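    The per-voxel rms analysis described above can be sketched as follows. The grid size, number of repeated deliveries and the 2% random delivery noise are illustrative assumptions standing in for the paper's pencil-beam simulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Repeat the same planned delivery many times with random delivery errors,
    # then compute the rms dose deviation in every voxel (as in the paper's
    # procedure, but with a toy noise model rather than pencil-beam physics).
    prescribed = 2.0          # Gy
    n_deliveries = 50
    shape = (32, 40, 32)      # voxel grid standing in for the 8 x 10 x 8 cm target

    doses = np.empty((n_deliveries,) + shape)
    for i in range(n_deliveries):
        # each delivery: prescribed dose plus small random delivery error
        noise = rng.normal(0.0, 0.02 * prescribed, shape)
        doses[i] = prescribed + noise

    # rms variation of delivered dose in each voxel, as % of the prescription
    rms_error = doses.std(axis=0)
    pct = 100 * rms_error / prescribed
    print(f"mean per-voxel rms dose error: {pct.mean():.2f}% of 2 Gy")
    print(f"max  per-voxel rms dose error: {pct.max():.2f}% of 2 Gy")
    ```

    With 2% random noise the per-voxel rms errors land near the 2-3% clinical bound quoted in the abstract, which is the kind of check this analysis provides.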

  7. Iatrogenic medication errors in a paediatric intensive care unit in ...

    African Journals Online (AJOL)

    Errors most frequently encountered included failure to calculate rates of infusion and the conversion of mL to mEq or mL to mg for potassium, phenobarbitone and digoxin. Of the 117 children admitted, 111 (94.9%) were exposed to at least one medication error. Two or more medication errors occurred in 34.1% of cases.
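    The mL-to-mEq conversions cited above as frequent error sources are simple but unforgiving arithmetic. A purely illustrative sketch for potassium: the 2 mEq/mL concentration below is a common strength for stock KCl injection but is an assumption here, and any real calculation must use the concentration on the actual vial.

    ```python
    # Illustrative mEq -> mL conversion for a potassium chloride stock solution.
    # The concentration is an assumed example value, not clinical guidance.
    KCL_MEQ_PER_ML = 2.0  # assumed strength of the stock solution

    def potassium_ml_from_meq(dose_meq: float) -> float:
        """Volume (mL) of stock KCl needed to deliver dose_meq of potassium."""
        if dose_meq < 0:
            raise ValueError("dose must be non-negative")
        return dose_meq / KCL_MEQ_PER_ML

    print(potassium_ml_from_meq(10))  # 10 mEq -> 5.0 mL at 2 mEq/mL
    ```

    Encoding the conversion once, instead of recomputing it at the bedside, is one systemic way to remove the error class the study observed.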

  8. The costs of functional gastrointestinal disorders and related signs and symptoms in infants: a systematic literature review and cost calculation for England.

    Science.gov (United States)

    Mahon, James; Lifschitz, Carlos; Ludwig, Thomas; Thapar, Nikhil; Glanville, Julie; Miqdady, Mohamad; Saps, Miguel; Quak, Seng Hock; Lenoir Wijnkoop, Irene; Edwards, Mary; Wood, Hannah; Szajewska, Hania

    2017-11-14

    To estimate the cost of functional gastrointestinal disorders (FGIDs) and related signs and symptoms in infants to the third party payer and to parents. To estimate the cost of illness (COI) of infant FGIDs, a two-stage process was applied: a systematic literature review and a COI calculation. As no pertinent papers were found in the systematic literature review, a 'de novo' analysis was performed. For the latter, the potential costs for the third party payer (the National Health Service (NHS) in England) and for parents/carers for the treatment of FGIDs in infants were calculated, by using publicly available data. In constructing the calculation, estimates and assumptions (where necessary) were chosen to provide a lower bound (minimum) of the potential overall cost. In doing so, the interpretation of the calculation is that the true COI can be no lower than that estimated. Our calculation estimated that the total costs of treating FGIDs in infants in England were at least £72.3 million per year in 2014/2015 of which £49.1 million was NHS expenditure on prescriptions, community care and hospital treatment. Parents incurred £23.2 million in costs through purchase of over the counter remedies. The total cost presented here is likely to be a significant underestimate as only lower bound estimates were used where applicable, and for example, costs of alternative therapies, inpatient treatments or diagnostic tests, and time off work by parents could not be adequately estimated and were omitted from the calculation. The number and kind of prescribed products and products sold over the counter to treat FGIDs suggest that there are gaps between treatment guidelines, which emphasise parental reassurance and nutritional advice, and their implementation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of 'noise' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
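    The final modeling step above, propagating a noise signature through a sensor component and then taking its power spectral density, can be sketched as follows. The first-order low-pass sensor model and the 50 Hz spectral line are illustrative assumptions, not the report's actual signatures.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # A toy "reactor signature": a 50 Hz line buried in broadband noise.
    fs = 1000.0                       # sampling frequency, Hz
    t = np.arange(0, 4.0, 1 / fs)
    signature = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

    # Propagate through a simple first-order low-pass "sensor" (RC filter).
    tau = 0.01                        # assumed sensor time constant, s
    alpha = (1 / fs) / (tau + 1 / fs)
    out = np.empty_like(signature)
    out[0] = signature[0]
    for i in range(1, signature.size):
        out[i] = out[i - 1] + alpha * (signature[i] - out[i - 1])

    # Power spectral density (periodogram) of the product signature.
    freqs = np.fft.rfftfreq(out.size, 1 / fs)
    psd = np.abs(np.fft.rfft(out)) ** 2 / (fs * out.size)
    peak_hz = freqs[np.argmax(psd[1:]) + 1]
    print(f"PSD peak at {peak_hz:.1f} Hz")
    ```

    The sensor attenuates but does not move the spectral line, which is exactly the kind of revealing-versus-obscuring effect the simulation is meant to expose.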

  10. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange-correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy-corrected OCO backbone.

  11. Comparing different error-conditions in film dosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2007-01-01

    In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error-conditions are recognised, such as errors that have an influence on the ability to make a correct evaluation of the dose value. In this project a comparison has been carried out to examine how two individual monitoring services, IMS [National Inst. of Radiation Hygiene (Denmark) (NIRH) and National Research Centre for Environment and Health (Germany) (GSF)], from two different EU countries mark their dosemeters. The IMS are different in size, type of customers and issuing period, but both use films as their primary dosemeters. The error-conditions examined are dosemeters exposed to moisture or light, contaminated dosemeters, films exposed outside the badge, missing filters in the badge, films inserted incorrectly in the badge and dosemeters not returned or returned too late to the IMS. The data are collected for the year 2003, where NIRH evaluated ∼50,000 and GSF ∼1.4 million film dosemeters. The percentage of film dosemeters is calculated for each error-condition as well as the distribution among eight different employee categories, i.e. medicine, nuclear medicine, nuclear industry, industry, radiography, laboratories, veterinary and others. It turned out that incorrect insertion of the film in the badge was the most common error-condition observed at both IMS and that veterinarians, as an employee category, generally have the highest number of errors. NIRH has a significantly higher relative number of dosemeters in most error-conditions than GSF, which perhaps reflects systemic and methodological differences between the IMS and the countries, e.g. regulations and monitoring programmes, that make a direct comparison difficult. The lack of a common categorisation method for employee categories also makes such a comparison difficult. (authors)

  12. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.

  13. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  14. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessors' offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
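    The displacement simulation described above can be sketched as follows. Real census tract polygons are replaced by a 500 m square grid, and the sample of observed location errors is invented for illustration; both are assumptions.

    ```python
    import math
    import random

    random.seed(0)

    # Displace each geocode by a random angle and a random distance drawn
    # from a sample of observed location errors, then check whether it
    # lands in a different "tract" (here, a 500 m grid cell stand-in).
    TRACT_SIZE_M = 500.0

    def tract_of(x: float, y: float) -> tuple:
        """Census tract stand-in: index of the grid cell containing (x, y)."""
        return (math.floor(x / TRACT_SIZE_M), math.floor(y / TRACT_SIZE_M))

    observed_errors_m = [20, 35, 50, 80, 120, 200]  # illustrative error sample

    n, misclassified = 5000, 0
    for _ in range(n):
        x, y = random.uniform(0, 5000), random.uniform(0, 5000)
        angle = random.uniform(0, 2 * math.pi)
        d = random.choice(observed_errors_m)
        x2, y2 = x + d * math.cos(angle), y + d * math.sin(angle)
        if tract_of(x2, y2) != tract_of(x, y):
            misclassified += 1

    print(f"{100 * misclassified / n:.1f}% of geocodes displaced into another tract")
    ```

    The fraction displaced into an incorrect tract is the quantity the study reports, and it grows with both the error distances and the fineness of the tract geography.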

  15. Calculation of relative tube/tube support plate displacements in steam generators under accident condition loads using non-linear dynamic analysis methodologies

    International Nuclear Information System (INIS)

    Smith, R.E.; Waisman, R.; Hu, M.H.; Frick, T.M.

    1995-01-01

    A non-linear analysis has been performed to determine relative motions between tubes and tube support plates (TSP) during a steam line break (SLB) event for steam generators. The SLB event results in blowdown of steam and water out of the steam generator. The fluid blowdown generates pressure drops across the TSPs, resulting in out-of-plane motion. The SLB induced pressure loads are calculated with a computer program that uses a drift-flux modeling of the two-phase flow. In order to determine the relative tube/TSP motions, a non-linear dynamic time-history analysis is performed using a structural model that considers all of the significant component members relative to the tube support system. The dynamic response of the structure to the pressure loads is calculated using a special purpose computer program. This program links the various substructures at common degrees of freedom into a combined mass and stiffness matrix. The program accounts for structural non-linearities, including potential tube and TSP interaction at any given tube position. The program also accounts for structural damping as part of the dynamic response. Incorporating all of the above effects, the equations of motion are solved to give TSP displacements at the reduced set of DOF. Using the displacement results from the dynamic analysis, plate stresses are then calculated using the detailed component models. Displacements from the dynamic analysis are imposed as boundary conditions at the DOF locations, and the finite element program then solves for the overall distorted geometry. Calculations are also performed to assure that assumptions regarding elastic response of the various structural members and support points are valid

  16. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...
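    The error-versus-rejection behaviour described above can be checked with a Monte Carlo sketch: a student perceptron J at a fixed angle to a teacher T classifies random patterns, and patterns within a margin c of the student's boundary J.S = 0 are rejected. The dimension, angle and thresholds below are illustrative choices, not the paper's analytical setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Build a teacher T and a student J with overlap J.T = cos(theta).
    N, theta = 200, 0.3
    T = rng.standard_normal(N); T /= np.linalg.norm(T)
    u = rng.standard_normal(N); u -= (u @ T) * T; u /= np.linalg.norm(u)
    J = np.cos(theta) * T + np.sin(theta) * u

    # Random input patterns; the student field J.S sets the rejection rule.
    S = rng.standard_normal((100_000, N)) / np.sqrt(N)
    margin = S @ J
    agree = np.sign(S @ T) == np.sign(margin)

    curve = {}
    for c in (0.0, 0.02, 0.05):
        accepted = np.abs(margin) >= c        # reject near-boundary patterns
        rho = 1.0 - accepted.mean()           # fraction of rejected patterns
        curve[c] = 1.0 - agree[accepted].mean()  # generalization error epsilon
        print(f"c={c:.2f}  rho={rho:.2f}  epsilon={curve[c]:.4f}")
    ```

    Without rejection the error is close to theta/pi, and it drops sharply as near-boundary patterns are rejected, the qualitative effect the abstract describes.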

  17. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  18. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  19. Un aspect du calcul d'erreur sur les réserves en place d'un gisement : L'influence du nombre et de la disposition spatiale des puits / One Aspect of Error Computing for Reserves in a Reservoir: Influence of Well Number and Spacing

    Directory of Open Access Journals (Sweden)

    Haas A.

    2006-11-01

    Full Text Available The error in evaluating the reserves in place of a hydrocarbon reservoir depends on three factors: (1) error at the wells in determining porosities and saturations; (2) geostatistical error in extending the well measurements to the whole reservoir; (3) geometric error in evaluating the area or volume of the reservoir. In this text we studied the influence of the number and spatial distribution of wells on the geostatistical error in the case of an imaginary elliptically shaped reservoir. Different levels of exploration were considered, from a single well in variable position up to complete coverage of the reservoir by a regular grid of 48 wells. The method used is the "kriging" developed by G. MATHERON of the École des Mines de Paris. The computations were carried out with the KRIGEPACK program developed by a CFP-SNPA association. The estimation error depends on the position of the wells in the reservoir, on the greater or lesser spatial continuity of the variable, and on the errors at the wells. The error that can be computed by classical statistics depends only on the number of wells and, depending on the case, may be too large (if the wells are optimally located) or, on the contrary, too small (if the wells are poorly located).
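    The central point of this record, that estimation error depends on well layout and not only on well count, can be illustrated with a simple-kriging variance computation. The exponential covariance with unit sill and a 300 m range, and the two four-well layouts, are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np

    # Simple-kriging estimation variance at a target point for two layouts
    # with the same number of "wells". Covariance model is an assumption.
    def cov(h, a=300.0):
        """Exponential covariance, unit sill, range parameter a (meters)."""
        return np.exp(-np.abs(h) / a)

    def kriging_variance(wells, target):
        """Simple-kriging variance at `target` given 2D well coordinates."""
        d = lambda p, q: np.linalg.norm(np.asarray(p) - np.asarray(q))
        K = np.array([[cov(d(p, q)) for q in wells] for p in wells])
        k = np.array([cov(d(p, target)) for p in wells])
        lam = np.linalg.solve(K, k)   # kriging weights
        return 1.0 - lam @ k          # sill minus explained variance

    target = (500.0, 500.0)
    spread = [(250, 250), (750, 250), (250, 750), (750, 750)]
    clustered = [(100, 100), (120, 100), (100, 120), (120, 120)]
    print(f"4 well-spread wells: var = {kriging_variance(spread, target):.3f}")
    print(f"4 clustered wells:   var = {kriging_variance(clustered, target):.3f}")
    ```

    The spread layout yields a markedly lower estimation variance than the clustered one, whereas a classical statistical error based on well count alone would rate the two layouts identically.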

  20. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    OpenAIRE

    Hoda Divsar; Robab Heydari

    2017-01-01

    The present study analyzed different types of errors in the EFL learners’ IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees’ writings were collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learne...