WorldWideScience

Sample records for performance errors due

  1. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    Science.gov (United States)

    Barshi, Immanuel; Dempsey, Donna L.

    2016-01-01

    Substantial evidence supports the claim that inadequate training leads to performance errors. Barshi and Loukopoulos (2012) demonstrate that even a task as carefully developed and refined over many years as operating an aircraft can be significantly improved by a systematic analysis, followed by improved procedures and improved training (see also Loukopoulos, Dismukes, & Barshi, 2009a). Unfortunately, such a systematic analysis of training needs rarely occurs during the preliminary design phase, when modifications are most feasible. Training is often seen as a way to compensate for deficiencies in task and system design, which in turn increases the training load. As a result, task performance often suffers, and with it, the operators suffer and so does the mission. On the other hand, effective training can indeed compensate for such design deficiencies, and can even go beyond to compensate for failures of our imagination to anticipate all that might be needed when we send our crew members to go where no one else has gone before. Much of the research literature on training is motivated by current training practices aimed at current training needs. Although there is some experience with operations in extreme environments on Earth, there is no experience with long-duration space missions where crews must practice semi-autonomous operations, where ground support must accommodate significant communication delays, and where so little is known about the environment. Thus, we must develop robust methodologies and tools to prepare our crews for the unknown. The research necessary to support such an endeavor does not currently exist, but existing research does reveal general challenges that are relevant to long-duration, high-autonomy missions. The evidence presented here describes issues related to the risk of performance errors due to training deficiencies. Contributing factors regarding training deficiencies may pertain to organizational process and training programs for

  2. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    …speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system… with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical… as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors…

  3. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)

  4. RAMs: the problem of transient errors due to alpha radiation

    International Nuclear Information System (INIS)

    Goujon, Pierre.

    1980-01-01

    Errors that remained unexplained for a long time have occurred with dynamic random access memories. It has been known since 1978 that they are due to stray alpha radiation. A good understanding of this phenomenon enables its effects to be neutralized and the reliability of the products to be guaranteed. [fr]

  5. A method for analysing incidents due to human errors on nuclear installations

    International Nuclear Information System (INIS)

    Griffon, M.

    1980-01-01

    This paper deals with the development of a methodology adapted to a detailed analysis of incidents considered to be due to human errors. An identification of human errors and a search for their possible multiple causes is then needed. They are categorized into eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant and performance of the operator. The method is illustrated by the analysis of a handling incident generated by multiple human errors. (author)

  6. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    Science.gov (United States)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
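
    The quoted overestimates at successive truncations imply roughly first-order convergence of regional ozone production with grid spacing. Below is a back-of-the-envelope sketch in Python; the grid spacings and error percentages are taken from the abstract, while the order-estimation formula is a standard Richardson-style assumption, not from the paper.

```python
import math

# Approximate grid spacings (degrees) and regional ozone production
# overestimates (percent) quoted in the abstract.
errors = {"T21": (5.6, 27.0), "T42": (2.8, 13.0), "T106": (1.1, 5.0)}

def convergence_order(h1, e1, h2, e2):
    """Estimate p in the model e ~ C * h**p from two (spacing, error) pairs."""
    return math.log(e1 / e2) / math.log(h1 / h2)

print(convergence_order(*errors["T21"], *errors["T42"]))    # ~1.05
print(convergence_order(*errors["T42"], *errors["T106"]))   # ~1.02
```

    Both pairs give an order near one, matching the abstract's finding that errors shrink monotonically with resolution yet remain substantial at ~120 km scales.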

  7. Error review: Can this improve reporting performance?

    International Nuclear Information System (INIS)

    Tudor, Gareth R.; Finlay, David B.

    2001-01-01

    AIM: This study aimed to assess whether error review can improve radiologists' reporting performance. MATERIALS AND METHODS: Ten consultant radiologists reported 50 plain radiographs in which the diagnoses were established. Eighteen of the radiographs were normal, 32 showed an abnormality. The radiologists were shown their errors and then re-reported the series of radiographs after an interval of 4-5 months. The accuracy of the reports against the established diagnoses was assessed. A chi-square test was used to calculate the difference between the viewings. RESULTS: On re-reporting the radiographs, seven radiologists improved their accuracy score, two had a lower score and one radiologist showed no score difference. Mean accuracy pre-education was 82.2% (range 78-92%) and post-education was 88% (range 76-96%). Individually, two of the radiologists showed a statistically significant improvement post-education (P < 0.01, P < 0.05). Assessing the group as a whole, there was a trend for improvement post-education but this did not reach statistical significance. Assessing only the radiographs where errors were made on the initial viewing, for the group as a whole there was a 63% improvement post-education. CONCLUSION: We suggest that radiologists benefit from error review, although there was not a statistically significant improvement for the series of radiographs in total. This is partly explained by the fact that some radiologists gave incorrect responses post-education that had initially been correct, thus masking the effect of the educational intervention.

  8. Errors due to the cylindrical cell approximation in lattice calculations

    Energy Technology Data Exchange (ETDEWEB)

    Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1960-06-15

    It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)

  9. Maintenance strategies to reduce downtime due to machine positional errors

    OpenAIRE

    Shagluf, Abubaker; Longstaff, A.P.; Fletcher, S.

    2014-01-01

    Proceedings of Maintenance Performance Measurement and Management (MPMM) Conference 2014. Manufacturing strives to reduce waste and increase Overall Equipment Effectiveness (OEE). When managing machine tool maintenance, a manufacturer must apply an appropriate decision technique in order to reveal hidden costs associated with production losses, reduce equipment downtime competently and, similarly, identify the machine's performance. Total productive maintenance (TPM) is a maintenance progr...

  10. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of a PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature… and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy…
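
    For readers unfamiliar with the method, here is a minimal sketch of how the parameter and measurement errors listed above enter a back-EMF position estimate for a surface-mounted PMSM. This is a textbook formulation in the stationary αβ frame; the paper's exact observer may differ.

```latex
% Back-EMF estimate and rotor angle in the stationary alpha-beta frame:
\[
\hat{\mathbf{e}}_{\alpha\beta} = \mathbf{v}_{\alpha\beta}
   - \hat{R}\,\mathbf{i}_{\alpha\beta}
   - \hat{L}\,\frac{\mathrm{d}\mathbf{i}_{\alpha\beta}}{\mathrm{d}t},
\qquad
\hat{\theta}_e = \operatorname{atan2}\!\bigl(-\hat{e}_\alpha,\ \hat{e}_\beta\bigr)
\]
% Deviations of \hat{R}, \hat{L} from the true R, L (temperature, flux
% saturation) and measurement errors in v and i propagate directly into
% \hat{\theta}_e and hence into the speed estimate.
```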

  11. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described…

  12. Theory of errors in Coriolis flowmeter readings due to compressibility of the fluid being metered

    OpenAIRE

    Kutin, Jože; Hemp, John

    2015-01-01

    The compressibility of fluids in a Coriolis mass flowmeter can cause errors in the meter's measurements of density and mass flow rate. These errors may be better described as errors due to the finite speed of sound in the fluid being metered, or due to the finite wavelength of sound at the operating frequency of the meter. In this paper, they are investigated theoretically and calculated to a first approximation (small degree of compressibility). The investigation is limited to straight beam-...

  13. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the free electron laser small-signal gain are analyzed and discussed. The gain reduction factor due to the phase error is given analytically for low-gain regimes. It shows that the degradation of the gain is similar to that of the spontaneous radiation, with a simple exponential dependence on the square of the rms phase error, and that the linear variation part of the phase error shifts the position of maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to the phase error for high-gain regimes can also be given in a simple way.
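
    Written out, the stated exponential relation takes the same form as the well-known spontaneous-radiation result. This is a hedged reading of the abstract, with the normalization assumed:

```latex
% Gain reduction for rms undulator phase error \sigma_\phi (radians):
\[
G \;\approx\; G_0 \, e^{-\sigma_\phi^{2}}
\]
% A linear drift component of the phase error mainly shifts the detuning
% at which the maximum gain occurs, leaving Madey's theorem intact.
```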

  14. Bias Errors due to Leakage Effects When Estimating Frequency Response Functions

    Directory of Open Access Journals (Sweden)

    Andreas Josefsson

    2012-01-01

    Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data is contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data is segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias error in the H1 and H2 estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and a very good agreement is found between the results from the proposed bias expressions and the empirical results.
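
    As an illustration of the estimators discussed, the Python sketch below computes H1 and H2 from Welch-averaged spectra. The system, excitation, and all parameter values are invented for the example; scipy's csd/welch follow the segmentation-and-windowing scheme the abstract describes.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 1024.0, 2**16
t = np.arange(n) / fs
x = rng.standard_normal(n)                       # broadband stochastic excitation

# Lightly damped single-DOF system standing in for the structure under test
wn, zeta = 2 * np.pi * 100.0, 0.02               # 100 Hz resonance, 2% damping
sys = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
_, y, _ = signal.lsim(sys, x, t)

# Welch-averaged spectra; nperseg sets the block length and hence the leakage
f, Pxx = signal.welch(x, fs=fs, window="hann", nperseg=1024)
_, Pxy = signal.csd(x, y, fs=fs, window="hann", nperseg=1024)
_, Pyy = signal.welch(y, fs=fs, window="hann", nperseg=1024)

H1 = Pxy / Pxx             # tends to be biased low at the resonance peak
H2 = Pyy / np.conj(Pxy)    # tends to be biased high at the resonance peak
```

    Halving nperseg widens the effective spectral window, which increases the leakage bias at the lightly damped resonance; this is the regime the paper's approximate bias expressions address.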

  15. Performance Errors in Weight Training and Their Correction.

    Science.gov (United States)

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  16. Error framing effects on performance: cognitive, motivational, and affective pathways.

    Science.gov (United States)

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  17. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
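
    The wavelength determination rests on the Bragg condition, which also shows how the geometric errors enter. These are standard relations, not specific to this paper:

```latex
% Bragg condition and first-order sensitivity to an angular error:
\[
\lambda = 2\, d_{hkl} \sin\theta_B,
\qquad
\frac{\delta\lambda}{\lambda} = \cot\theta_B \,\delta\theta,
\]
% so misknowledge of the rocking-axis direction or of the normal to the
% scattering planes perturbs the effective \theta_B and hence the inferred
% mean wavelength; the Monte Carlo study quantifies exactly this.
```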

  18. Consistent errors in first strand cDNA due to random hexamer mispriming.

    Directory of Open Access Journals (Sweden)

    Thomas P van Gurp

    Priming of random hexamers in cDNA synthesis is known to show sequence bias, but in addition it has been suggested recently that mismatches in random hexamer priming could be a cause of mismatches between the original RNA fragment and observed sequence reads. To explore random hexamer mispriming as a potential source of these errors, we analyzed two independently generated RNA-seq datasets of synthetic ERCC spikes for which the reference is known. First-strand cDNA synthesized by random hexamer priming on RNA showed consistent position- and nucleotide-specific mismatch errors in the first seven nucleotides. The mismatch errors found in both datasets are consistent in distribution, and thermodynamically stable mismatches are more common. This strongly indicates that RNA-DNA mispriming of specific random hexamers causes these errors. Due to their consistency and specificity, mispriming errors can have profound implications for downstream applications if not dealt with properly.

  19. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates for short durations and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumptions, the performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic is an interesting and important problem. This study will present some analytical methods, leading to efficient detecting algorithms of burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models. Among the three burst error models, the mathematical model is used in this study. The probability density function f(b) of a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through the use of a computer simulation model of burst errors within CRC blocks. The simulation results show that the mean block burst error tends to approach the pattern of burst errors generated by random bit errors.
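
    A standard property underlying such analyses is that a CRC whose generator has degree n detects every single burst of length at most n. The following self-contained Python check illustrates this with generic mod-2 arithmetic; the message, generator, and burst model are illustrative, not taken from the study.

```python
import random

def mod2_rem(value: int, poly: int, deg: int) -> int:
    """Remainder of the GF(2) polynomial `value` modulo `poly` of degree `deg`."""
    while value.bit_length() > deg:
        value ^= poly << (value.bit_length() - 1 - deg)
    return value

POLY, DEG = 0x18005, 16      # CRC-16 generator x^16 + x^15 + x^2 + 1
msg, nbits = 0xB2E1, 16
codeword = (msg << DEG) ^ mod2_rem(msg << DEG, POLY, DEG)
assert mod2_rem(codeword, POLY, DEG) == 0        # valid codeword: zero syndrome

random.seed(1)
for _ in range(1000):
    blen = random.randint(1, DEG)                # burst no longer than the degree
    # A length-blen burst has both end bits set; interior bits are arbitrary.
    inner = (random.getrandbits(blen - 2) << 1) if blen > 2 else 0
    burst = ((1 << (blen - 1)) | inner | 1) if blen > 1 else 1
    pos = random.randint(0, nbits + DEG - blen)
    # The syndrome of codeword XOR burst equals the remainder of the burst
    # polynomial, which is nonzero for every burst of length <= DEG.
    assert mod2_rem(codeword ^ (burst << pos), POLY, DEG) != 0
print("all bursts of length <= 16 detected")
```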

  20. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    Science.gov (United States)

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Safe medication practices in hospitals constitute a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm to patients. SHAM is the largest French provider of medical liability insurance and a relevant source of data on healthcare complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management.

  1. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for a tracking collector. • The match between the concentrator optics and the tracking accuracy was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when using optical concentration devices, it is important to use a solar tracker with adequate precision with regard to the specific optical concentration factor. Otherwise, the concentrator would sustain high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite having good quality optics. This study is focused on the estimation of the long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of its concentrator even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle upon the optical efficiency has been determined. Finally, the calculation of the long-term optical error due to the tracking errors, using the design angular tracking error declared by the manufacturer, is carried out. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water

  2. Basic considerations in predicting error probabilities in human task performance

    International Nuclear Information System (INIS)

    Fleishman, E.A.; Buffardi, L.C.; Allen, J.A.; Gaskins, R.C. III

    1990-04-01

    It is well established that human error plays a major role in the malfunctioning of complex systems. This report takes a broad look at the study of human error and addresses the conceptual, methodological, and measurement issues involved in defining and describing errors in complex systems. In addition, a review of existing sources of human reliability data and approaches to human performance data base development is presented. Alternative task taxonomies, which are promising for establishing the comparability of nuclear and non-nuclear tasks, are also identified. Based on such taxonomic schemes, various data base prototypes for generalizing human error rates across settings are proposed. 60 refs., 3 figs., 7 tabs

  3. Rail-guided robotic end-effector position error due to rail compliance and ship motion

    NARCIS (Netherlands)

    Borgerink, Dian; Stegenga, J.; Brouwer, Dannis Michel; Woertche, H.J.; Stramigioli, Stefano

    2014-01-01

    A rail-guided robotic system is currently being designed for the inspection of ballast water tanks in ships. This robotic system will manipulate sensors toward the interior walls of the tank. In this paper, the influence of rail compliance on the end-effector position error due to ship movement is

  4. Decreased attention to object size information in scale errors performers.

    Science.gov (United States)

    Grzyb, Beata J; Cangelosi, Angelo; Cattani, Allegra; Floccia, Caroline

    2017-05-01

    Young children sometimes make serious attempts to perform impossible actions on miniature objects as if they were full-size objects. The existing explanations of these curious action errors assume (but have never explicitly tested) children's decreased attention to object size information. This study investigated the attention to object size information in scale errors performers. Two groups of children aged 18-25 months (N=52) and 48-60 months (N=23) were tested in two consecutive tasks: an action task that replicated the original scale errors elicitation situation, and a looking task that involved watching, on a computer screen, actions performed with objects of adequate or inadequate size. Our key finding, that children performing scale errors in the action task subsequently pay less attention to size changes than non-scale errors performers in the looking task, suggests that the origins of scale errors in childhood operate already at the perceptual level, and not at the action level.

  5. Active Mycobacterium Infection Due to Intramuscular BCG Administration Following Multi-Steps Medication Errors

    Directory of Open Access Journals (Sweden)

    MohammadReza Rafati

    2015-10-01

    Bacillus Calmette-Guérin (BCG) is indicated for the treatment of primary or relapsing flat urothelial cell carcinoma in situ (CIS) of the urinary bladder. Disseminated infectious complications occasionally occur with BCG used as a vaccine or as intravesical therapy. Intramuscular (IM) or intravenous (IV) administration of BCG is a rare medication error that is more likely to produce systemic infections. This report presents the case of a 13-year-old boy in whom a multi-step medication error arose successively from the physician's handwriting, pharmacy dispensing, nursing administration and the patient's family. The physician wrote βHCG instead of HCG in the prescription. βHCG was read as BCG by the pharmacy staff, and 6 vials of intravesical BCG were administered IM twice a week for 3 consecutive weeks. The patient experienced fever and chills after each injection, and he was admitted 2 months after the first IM administration of BCG with fever and pancytopenia. Four months after the drug was given, during a second admission due to cellulitis at the sites of BCG injection, the physicians identified the medication error. Use of a handwritten prescription and inappropriate abbreviations, inadequate time spent taking a medical history at the pharmacy, failure to verify the name, dose and route before medication administration, and failure to consider medication error as an important differential diagnosis all contributed to this multi-step medication error.

  6. Error rate performance of narrowband multilevel CPFSK signals

    Science.gov (United States)

    Ekanayake, N.; Fonseka, K. J. P.

    1987-04-01

    The paper presents a relatively simple method for analyzing the effect of IF filtering on the performance of multilevel FM signals. Using this method, the error rate performance of narrowband FM signals is analyzed for three different detection techniques, namely limiter-discriminator detection, differential detection and coherent detection followed by differential decoding. The symbol error probabilities are computed for a Gaussian IF filter and a second-order Butterworth IF filter. It is shown that coherent detection followed by differential decoding yields better performance than limiter-discriminator detection and differential detection, whereas the two noncoherent detectors yield approximately identical performance.

  7. The nature of articulation errors in Egyptian Arabic-speaking children with velopharyngeal insufficiency due to cleft palate.

    Science.gov (United States)

    Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Mansy, Alzahraa

    2015-09-01

    Even with early surgical repair, the majority of children with cleft palate demonstrate articulation errors and have typical cleft palate speech. The aim was to determine the nature of articulation errors of Arabic consonants in Egyptian Arabic-speaking children with velopharyngeal insufficiency (VPI). Thirty Egyptian Arabic-speaking children with VPI due to cleft palate (whether primarily repaired or secondarily repaired) were studied. Auditory perceptual assessment (APA) of the children's speech was conducted. Nasopharyngoscopy was performed to assess velopharyngeal port (VPP) movements while the child repeated speech tasks. The Mansoura Arabic Articulation Test (MAAT) was performed to analyze the consonant articulation of these children. The most frequent type of articulatory error observed was substitution, more specifically backing. Pharyngealization of anterior fricatives was the most frequent substitution, especially for the /s/ sound. The most frequent substituting sounds were /ʔ/, followed by /k/ and /n/. Significant correlations were found between the articulation errors and the degrees of open nasality and VPP closure. On the other hand, the sounds (/ʔ/, /ħ/, /ʕ/, /n/, /w/, /j/) were normally articulated in the whole studied group. Determining the articulation errors in children with VPI could guide therapists in designing appropriate speech therapy programs for these cases.

  8. On the evaluation of the sensitivity of SRAM-Based FPGA to errors due to natural radiation environment

    International Nuclear Information System (INIS)

    Bocquillon, Alexandre

    2009-01-01

    This work aims at designing a test methodology to analyze the effects of natural radiation on SRAM-based FPGA chipsets. The study of likely errors due to single or multiple events occurring in the configuration memory is based on fault-injection experiments performed with laser devices. It relies on a description of the scientific background, of the complex architecture of SRAM-based FPGAs, and of the usual testing apparatus. Fault-injection experiments with a laser were conducted on several classes of components in order to perform static tests of the configuration memory and identify the links with the application. They show the organization and sensitivity of SRAM configuration cells. Criticality criteria for configuration bits were specified following dynamic tests in a proton accelerator, with regard to their impact on the application. From this classification, a prediction tool for critical error rate estimation was developed. (author) [fr]

  9. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only
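
    A minimal linear-Gaussian sketch of the Schmidt-Kalman analysis step described above, in which the unresolved component b is accounted for in the gain but never updated. All matrices here are generic placeholders, not the paper's advection model.

```python
import numpy as np

def schmidt_kalman_update(x, Pxx, Pxb, Pbb, H, B, R, y):
    """One Schmidt-Kalman analysis step for y = H x + B b + v, v ~ N(0, R).

    The unresolved-scale component b (mean zero, covariance Pbb) inflates the
    innovation covariance and correlates with x through Pxb, but receives no
    update itself -- the defining feature of the Schmidt-Kalman filter.
    """
    S = H @ Pxx @ H.T + H @ Pxb @ B.T + B @ Pxb.T @ H.T + B @ Pbb @ B.T + R
    K = (Pxx @ H.T + Pxb @ B.T) @ np.linalg.inv(S)   # gain for x only
    x_a = x + K @ (y - H @ x)                        # E[b] = 0 in the innovation
    Pxx_a = Pxx - K @ (H @ Pxx + B @ Pxb.T)
    Pxb_a = Pxb - K @ (H @ Pxb + B @ Pbb)
    return x_a, Pxx_a, Pxb_a                         # Pbb stays unchanged
```

    Dropping the terms involving B recovers the usual discrete Kalman update, i.e. the "traditional filter" of the abstract.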

  10. Estimation of errors due to inhomogeneous distribution of radionuclides in lungs

    International Nuclear Information System (INIS)

    Pelled, O.; German, U.; Pollak, G.; Alfassi, Z.B.

    2006-01-01

    The uncertainty in the activity determination of uranium contamination due to a real inhomogeneous distribution, when a homogeneous distribution is assumed, can reach more than one order of magnitude when using one detector of a set of 4 detectors covering most of the whole lungs. Using the information from several detectors may improve the accuracy, as obtained by summing the responses from the 3 or 4 detectors. However, even with this improvement, the errors are still very large: up to almost a factor of 10 when the analysis is based on the 92 keV energy peak, and up to 7 for the 185 keV peak.

  11. Study of Periodic Fabrication Error of Optical Splitter Device Performance

    OpenAIRE

    Ab-Rahman, Mohammad Syuhaimi; Ater, Foze Saleh; Jumari, Kasmiran; Mohammad, Rahmah

    2012-01-01

    In this paper, the effect of fabrication errors (FEs) on the performance of a 1×4 optical power splitter is investigated in detail. The FE, which is assumed to take a regular shape, is considered in each section of the device. Simulation results show that the FE has a significant effect on the output power, especially when it occurs in the coupling regions.

  12. Refractive errors and school performance in Brazzaville, Congo ...

    African Journals Online (AJOL)

    Background: Wearing glasses before the age of ten is becoming more common in developed countries. In black Africa, for cultural or irrational reasons, this attitude remains exceptional. This situation is a source of amblyopia and learning difficulties. Objective: To determine the role of refractive errors in school performance in ...

  14. Evaluation of alignment error due to a speed artifact in stereotactic ultrasound image guidance

    International Nuclear Information System (INIS)

    Salter, Bill J; Wang, Brian; Szegedi, Martin W; Rassiah-Szegedi, Prema; Shrieve, Dennis C; Cheng, Roger; Fuss, Martin

    2008-01-01

    Ultrasound (US) image guidance systems used in radiotherapy are typically calibrated for soft tissue applications, thus introducing errors in depth-from-transducer representation when used in media with a different speed of sound propagation (e.g. fat). This error is commonly referred to as the speed artifact. In this study we utilized a standard US phantom to demonstrate the existence of the speed artifact when using a commercial US image guidance system to image through layers of simulated body fat, and we compared the results with calculated/predicted values. A general purpose US phantom (speed of sound (SOS) = 1540 m s⁻¹) was imaged on a multi-slice CT scanner at a 0.625 mm slice thickness and 0.5 mm × 0.5 mm axial pixel size. Target-simulating wires inside the phantom were contoured and later transferred to the US guidance system. Layers of various thickness (1-8 cm) of commercially manufactured fat-simulating material (SOS = 1435 m s⁻¹) were placed on top of the phantom to study the depth-related alignment error. In order to demonstrate that the speed artifact is not caused by adding additional layers on top of the phantom, we repeated these measurements in an identical setup using commercially manufactured tissue-simulating material (SOS = 1540 m s⁻¹) for the top layers. For the fat-simulating material used in this study, we observed the magnitude of the depth-related alignment errors resulting from the speed artifact to be 0.7 mm cm⁻¹ of fat imaged through. The measured alignment errors caused by the speed artifact agreed with the calculated values within one standard deviation for all of the different thicknesses of fat-simulating material studied here. We demonstrated the depth-related alignment error due to the speed artifact when using US image guidance for radiation treatment alignment and note that the presence of fat causes the target to be aliased to a depth greater than it actually is. For typical US guidance systems in use today, this will
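
    The 0.7 mm cm⁻¹ figure follows from simple echo timing if the scanner converts time to depth using the soft-tissue speed. This is a consistency check using the SOS values quoted in the abstract:

```latex
% Apparent extra depth per unit thickness z of fat traversed:
\[
\Delta z \approx z\left(\frac{c_{\mathrm{tissue}}}{c_{\mathrm{fat}}}-1\right)
       = 10\,\mathrm{mm}\times\left(\frac{1540}{1435}-1\right)
       \approx 0.73\ \mathrm{mm\ per\ cm\ of\ fat},
\]
% in line with the measured 0.7 mm cm^{-1}.
```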

  16. Determination of corrosion rate of reinforcement with a modulated guard ring electrode; analysis of errors due to lateral current distribution

    International Nuclear Information System (INIS)

    Wojtas, H.

    2004-01-01

    The main source of errors in measuring the corrosion rate of rebars on site is a non-uniform current distribution between the small counter electrode (CE) on the concrete surface and the large rebar network. Guard ring electrodes (GEs) are used in an attempt to confine the excitation current within a defined area. In order to better understand the functioning of the modulated guard ring electrode and to assess its effectiveness in eliminating errors due to lateral spread of the current signal from the small CE, measurements of the polarisation resistance performed on a concrete beam have been numerically simulated. The effect of parameters such as rebar corrosion activity, concrete resistivity, concrete cover depth and size of the corroding area on errors in the estimation of the polarisation resistance of a single rebar has been examined. The results indicate that the modulated GE arrangement fails to confine the lateral spread of the CE current within a constant area. Using a constant diameter of confinement for the calculation of the corrosion rate may lead to serious errors when test conditions change. When high corrosion activity of the rebar and/or local corrosion occurs, the use of the modulated GE confinement may lead to significant underestimation of the corrosion rate.
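
    For context, the polarisation resistance is converted to a corrosion rate through the Stern-Geary relation, which is what makes the confinement-area error matter. This is the standard relation, not taken from the paper; B ≈ 26 mV is a commonly assumed value for actively corroding steel.

```latex
% Corrosion current density from polarisation resistance R_p measured over
% the polarised rebar area A:
\[
i_{\mathrm{corr}} = \frac{B}{R_p\,A},
\]
% so if the guard ring fails to confine the counter-electrode current to a
% constant area A, the inferred i_corr is over- or underestimated in direct
% proportion.
```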

  17. Evaluation and mitigation of potential errors in radiochromic film dosimetry due to film curvature at scanning.

    Science.gov (United States)

    Palmer, Antony L; Bradley, David A; Nisbet, Andrew

    2015-03-08

    This work considers a previously overlooked uncertainty present in film dosimetry which results from moderate curvature of films during the scanning process. Small film samples are particularly susceptible to film curling which may be undetected or deemed insignificant. In this study, we consider test cases with controlled induced curvature of film and with film raised horizontally above the scanner plate. We also evaluate the difference in scans of a film irradiated with a typical brachytherapy dose distribution with the film naturally curved and with the film held flat on the scanner. Typical naturally occurring curvature of film at scanning, giving rise to a maximum height 1 to 2 mm above the scan plane, may introduce dose errors of 1% to 4%, and considerably reduce gamma evaluation passing rates when comparing film-measured doses with treatment planning system-calculated dose distributions, a common application of film dosimetry in radiotherapy. The use of a triple-channel dosimetry algorithm appeared to mitigate the error due to film curvature compared to conventional single-channel film dosimetry. The change in pixel value and calibrated reported dose with film curling or height above the scanner plate may be due to variations in illumination characteristics, optical disturbances, or a Callier-type effect. There is a clear requirement for physically flat films at scanning to avoid the introduction of a substantial error source in film dosimetry. Particularly for small film samples, a compression glass plate above the film is recommended to ensure flat-film scanning. This effect has been overlooked to date in the literature.

  18. The analytical evolution of NLS solitons due to the numerical discretization error

    Science.gov (United States)

    Hoseini, S. M.; Marchant, T. R.

    2011-12-01

    Soliton perturbation theory is used to obtain analytical solutions describing solitary wave tails or shelves, due to numerical discretization error, for soliton solutions of the nonlinear Schrödinger equation. Two important implicit numerical schemes for the nonlinear Schrödinger equation, with second-order temporal and spatial discretization errors, are considered. These are the Crank-Nicolson scheme and a scheme, due to Taha [1], based on the inverse scattering transform. The first-order correction for the solitary wave tail, or shelf, is in integral form and an explicit expression is found for large time. The shelf decays slowly, at a rate of t^{-1/2}, which is characteristic of the nonlinear Schrödinger equation. Singularity theory, usually used for combustion problems, is applied to the explicit large-time expression for the solitary wave tail. Analytical results are then obtained, such as the parameter regions in which qualitatively different types of solitary wave tails occur, the location of zeros and the location and amplitude of peaks. It is found that three different types of tail occur for the Crank-Nicolson and Taha schemes and that the Taha scheme exhibits some unusual symmetry properties, as the tails for left and right moving solitary waves are different. Optimal choices of the discretization parameters for the numerical schemes are also found, which minimize the amplitude of the solitary wave tail. The analytical solutions are compared with numerical simulations, and an excellent comparison is found.
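
    For reference, the equation and its one-soliton solution under a common normalization (assumed here; the paper's scaling may differ):

```latex
% Focusing nonlinear Schrodinger equation and its one-soliton solution:
\[
i u_t + \tfrac{1}{2} u_{xx} + |u|^2 u = 0,
\qquad
u_s = a\,\operatorname{sech}\!\bigl(a(x - vt)\bigr)\,
      e^{\,i\left(vx + \frac{1}{2}(a^2 - v^2)t\right)}.
\]
% The discretization error of the numerical scheme forces a small dispersive
% shelf behind the soliton whose amplitude decays like t^{-1/2}.
```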

  20. Self-assessment of human performance errors in nuclear operations

    International Nuclear Information System (INIS)

    Chambliss, K.V.

    1996-01-01

    One of the most important approaches to improving nuclear safety is to have an effective self-assessment process in place, whose cornerstone is the identification and improvement of human performance errors. Experience has shown that significant events usually have had precursors of human performance errors. If these precursors are left uncorrected or not understood, the symptoms recur and result in unanticipated events of greater safety significance. The Institute of Nuclear Power Operations (INPO) has been championing the cause of promoting excellence in human performance in the nuclear industry. INPO's report, "Excellence in Human Performance," emphasizes the importance of several factors that play a role in human performance. They include individual, supervisory, and organizational behaviors; real-time feedback that results in specific behavior to produce safe and reliable performance; and proactive measures that remove obstacles from excellent human performance. Zack Pate, chief executive officer and president of INPO, in his report, "The Control Room," provides an excellent discussion of serious events in the nuclear industry since 1994 and compares them with the results from a recent study by the National Transportation Safety Board of airline accidents in the 12-yr period from 1978 to 1990 to draw some common themes that relate to human performance issues in the control room.

  1. Evaluation of linear registration algorithms for brain SPECT and the errors due to hypoperfusion lesions

    International Nuclear Information System (INIS)

    Radau, Perry E.; Slomka, Piotr J.; Julin, Per; Svensson, Leif; Wahlund, Lars-Olof

    2001-01-01

    The semiquantitative analysis of perfusion single-photon emission computed tomography (SPECT) images requires a reproducible, objective method. Automated spatial standardization (registration) of images is a prerequisite to this goal. A source of registration error is the presence of hypoperfusion defects, which was evaluated in this study with simulated lesions. The brain perfusion images measured by 99mTc-HMPAO SPECT from 21 patients with probable Alzheimer's disease and 35 control subjects were retrospectively analyzed. An automatic segmentation method was developed to remove external activity. Three registration methods, robust least squares, normalized mutual information (NMI), and count difference, were implemented and the effects of simulated defects were compared. The tested registration methods required segmentation of the cerebrum from external activity, and the automatic and manual methods differed by a three-dimensional displacement of 1.4±1.1 mm. NMI registration proved to be least adversely affected by simulated defects, with a 3 mm average displacement caused by severe defects. The error in quantifying the patient-template parietal ratio due to misregistration was 2.0% for large defects (70% hypoperfusion) and 0.5% for smaller defects (85% hypoperfusion).

  2. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
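
    The "complete residential history" exposure is a duration-weighted mean over addresses. A small illustrative helper follows; the function, dates, and concentrations are hypothetical, not from the study.

```python
from datetime import date

def weighted_pregnancy_exposure(residences, conception, delivery):
    """Time-weighted mean exposure over pregnancy (illustrative sketch).

    residences: list of (move_in_date, annual_mean_pm25) sorted by date;
    exposure at each address is weighted by days spent there during
    pregnancy, mirroring the weighted-average estimate in the abstract.
    """
    total_days, weighted = 0, 0.0
    for i, (start, pm25) in enumerate(residences):
        end = residences[i + 1][0] if i + 1 < len(residences) else delivery
        lo, hi = max(start, conception), min(end, delivery)
        days = max((hi - lo).days, 0)       # overlap of residence and pregnancy
        total_days += days
        weighted += days * pm25
    return weighted / total_days

# Hypothetical mother who moves once mid-pregnancy:
hist = [(date(2016, 1, 1), 2.4), (date(2016, 7, 15), 1.1)]
print(weighted_pregnancy_exposure(hist, date(2016, 3, 1), date(2016, 11, 24)))
```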

  3. Bit Error Rate Due to Misalignment of Earth Station Antenna Pointing to Satellite

    Directory of Open Access Journals (Sweden)

    Wahyu Pamungkas

    2010-04-01

    One problem causing a reduction of energy in satellite communication systems is the misalignment of the earth station antenna pointing to the satellite. Error in pointing affects the quality of the information signal energy per bit received at the earth station. In this research, pointing-angle error occurred only at the receive (Rx) antenna, while the transmit (Tx) antennas pointed precisely to the satellite. The research was conducted on two satellites, namely TELKOM-1 and TELKOM-2. First, a measurement was made by directing the Tx antenna precisely to the satellite, resulting in an antenna pattern shown by a spectrum analyzer. The output from the spectrum analyzer is drawn to the right scale to describe the shift of the azimuth and elevation pointing angles towards the satellite. Drifting from the precise pointing influenced the received link budget, as indicated by the antenna pattern. This antenna pattern shows the reduction of the received power level as a result of pointing misalignment. In conclusion, increasing misalignment of pointing to the satellite reduces the received signal parameters in the link budget of the down-link traffic.
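
    A common way to quantify this is the standard parabolic-antenna pointing-loss approximation (not taken from the paper), relating the off-axis angle θ to the half-power beamwidth:

```latex
\[
L_{\mathrm{point}} \approx 12\left(\frac{\theta}{\theta_{3\,\mathrm{dB}}}\right)^{2}\ \mathrm{dB},
\]
% a loss that subtracts directly from the received carrier power and hence
% degrades E_b/N_0 and the bit error rate.
```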

  4. Climbing fibers predict movement kinematics and performance errors.

    Science.gov (United States)

    Streng, Martha L; Popa, Laurentiu S; Ebner, Timothy J

    2017-09-01

    Requisite for understanding cerebellar function is a complete characterization of the signals provided by complex spike (CS) discharge of Purkinje cells, the output neurons of the cerebellar cortex. Numerous studies have provided insights into CS function, with the most predominant view being that they are evoked by error events. However, several reports suggest that CSs encode other aspects of movements and do not always respond to errors or unexpected perturbations. Here, we evaluated CS firing during a pseudo-random manual tracking task in the monkey (Macaca mulatta). This task provides extensive coverage of the work space and relative independence of movement parameters, delivering a robust data set to assess the signals that activate climbing fibers. Using reverse correlation, we determined feedforward and feedback CSs firing probability maps with position, velocity, and acceleration, as well as position error, a measure of tracking performance. The direction and magnitude of the CS modulation were quantified using linear regression analysis. The major findings are that CSs significantly encode all three kinematic parameters and position error, with acceleration modulation particularly common. The modulation is not related to "events," either for position error or kinematics. Instead, CSs are spatially tuned and provide a linear representation of each parameter evaluated. The CS modulation is largely predictive. Similar analyses show that the simple spike firing is modulated by the same parameters as the CSs. Therefore, CSs carry a broader array of signals than previously described and argue for climbing fiber input having a prominent role in online motor control. NEW & NOTEWORTHY This article demonstrates that complex spike (CS) discharge of cerebellar Purkinje cells encodes multiple parameters of movement, including motor errors and kinematics. The CS firing is not driven by error or kinematic events; instead it provides a linear representation of each

  5. Sporadic error probability due to alpha particles in dynamic memories of various technologies

    International Nuclear Information System (INIS)

    Edwards, D.G.

    1980-01-01

    The sensitivity of MOS memory components to errors induced by alpha particles is expected to increase with integration level. The soft error rate of a 65-kbit VMOS memory has been compared experimentally with that of three field-proven 16-kbit designs. The technological and design advantages of the VMOS RAM ensure an error rate which is lower than those of the 16-kbit memories. Calculation of the error probability for the 65-kbit RAM and comparison with the measurements show that for large duty cycles single particle hits lead to sensing errors and for small duty cycles cell errors caused by multiple hits predominate. (Auth.)
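
    The duty-cycle dependence described here (single hits dominating sensing errors, multiple hits dominating cell errors) can be illustrated with a simple Poisson hit model. The hit rate below is a placeholder, not a value measured in the paper.

        import math

        def p_at_least_k_hits(rate_per_hour, hours, k):
            """Poisson probability of at least k alpha hits within an interval."""
            lam = rate_per_hour * hours
            p_fewer = sum(math.exp(-lam) * lam**n / math.factorial(n)
                          for n in range(k))
            return 1.0 - p_fewer

        hit_rate = 1e-6  # placeholder: alpha hits per cell per hour
        print(p_at_least_k_hits(hit_rate, 1000.0, 1))  # single-hit (sensing) errors
        print(p_at_least_k_hits(hit_rate, 1000.0, 2))  # multiple-hit (cell) errors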

  6. Responsibility for reporting patient death due to hospital error in Japan when an error occurred at a referring institution.

    Science.gov (United States)

    Maeda, Shoichi; Starkey, Jay; Kamishiraki, Etsuko; Ikeda, Noriaki

    2013-12-01

    In Japan, physicians are required to report unexpected health care-associated patient deaths to the police. Patients needing to be transferred to another institution often have complex medical problems. If a medical error occurs, it may be either at the final or the referring institution. Some fear that liability will fall on the final institution regardless of where the error occurred or that the referring facility may oppose such reporting, leading to a failure to report to police or to recommend an autopsy. Little is known about the actual opinions of physicians and risk managers in this regard. The authors sent standardised, self-administered questionnaires to all hospitals in Japan that participate in the national general residency program. Most physicians and risk managers in Japan indicated that they would report a patient's death to the police where the patient has been transferred. Of those who indicated they would not report to the police, the majority still indicated they would recommend an autopsy

  7. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across the sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), with which to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options, to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision.
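
    Repeated analyses of the same blind sample within each laboratory are what make the separation possible: the scatter of replicates within a laboratory estimates the method variance, while the scatter of laboratory means estimates the between-laboratory component. A one-way variance-components sketch, assuming a balanced design and using invented numbers rather than CARD data:

        import statistics

        # results[lab] = repeated measurements of one blind sample in that lab
        results = {
            "lab_A": [10.1, 9.8, 10.3],
            "lab_B": [11.0, 11.2, 10.9],
            "lab_C": [9.5, 9.7, 9.4],
        }

        within = statistics.mean(statistics.variance(v) for v in results.values())
        lab_means = [statistics.mean(v) for v in results.values()]
        n_rep = len(next(iter(results.values())))  # replicates per lab
        between = max(0.0, statistics.variance(lab_means) - within / n_rep)

        print(f"method (within-lab) variance:      {within:.3f}")
        print(f"laboratory (between-lab) variance: {between:.3f}")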

  8. A basic framework for the analysis of the human error potential due to the computerization in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Y. H.

    1999-01-01

    Computerization and its vivid benefits expected in nuclear power plant design cannot be realized without verifying the inherent safety problems, and human error is among the verification issues. The verification spans from the perception of changes in operating functions, such as automation, to the unfamiliar experience of operators due to interface changes. Therefore, a new framework for human error analysis should capture both the positive and the negative effects of computerization. This paper suggests a basic framework for error identification, based on a review of existing human error studies and the experience of computerization in nuclear power plants.

  9. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean errors. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first strategy, the pre-treatment strategy, used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second strategy, the adaptive strategy, used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
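
    The systematic and random errors quoted here follow the usual population definitions: the systematic error is the spread (1 SD) of the per-patient mean deviations, and the random error is the root mean square of the per-patient SDs. A sketch with invented LR-axis rotations, not the study's data:

        import math
        import statistics

        # rotations[patient] = LR-axis rotation (deg) in each repeat CT
        rotations = {
            "pat1": [3.2, 5.1, 4.0, 6.3],
            "pat2": [-1.0, 0.5, -2.2, 0.1],
            "pat3": [7.8, 9.0, 6.5, 8.1],
        }

        patient_means = [statistics.mean(v) for v in rotations.values()]
        group_mean = statistics.mean(patient_means)
        systematic = statistics.stdev(patient_means)               # Sigma, 1 SD
        random_err = math.sqrt(statistics.mean(
            statistics.variance(v) for v in rotations.values()))   # sigma, 1 SD

        print(group_mean, systematic, random_err)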

  10. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
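
    The flavor of such an average can be sketched for uncoded BPSK: condition the bit error rate on the carrier phase error and integrate over a phase-error density. A Tikhonov density is a common model for a noisy phase reference; the loop SNR below is an arbitrary illustrative value, and this is not the Telemetry Analysis Program subroutine itself.

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import erfc, i0

        def ber_given_phase(phi, ebn0):
            """BPSK bit error rate conditioned on a carrier phase error phi."""
            return 0.5 * erfc(np.sqrt(ebn0) * np.cos(phi))

        def avg_ber(ebn0_db, loop_snr):
            """Average the conditional BER over a Tikhonov phase-error density."""
            ebn0 = 10.0 ** (ebn0_db / 10.0)
            pdf = lambda phi: np.exp(loop_snr * np.cos(phi)) / (2 * np.pi * i0(loop_snr))
            val, _ = quad(lambda p: ber_given_phase(p, ebn0) * pdf(p), -np.pi, np.pi)
            return val

        print(avg_ber(ebn0_db=5.0, loop_snr=15.0))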

  11. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for quantification of risk due to seismic related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and industry common practice. The actual reduction in the safety margins caused by the error is called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated as a specific instance, called an originating error. As originating errors may occur in actions to be applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies, requesting their disposition. However, the quality assurance program is not perfect, and some operating plant deficiencies may persist, causing different levels of impact on the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.

  12. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    Full Text Available The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  13. Global Vision Impairment and Blindness Due to Uncorrected Refractive Error, 1990-2010.

    Science.gov (United States)

    Naidoo, Kovin S; Leasher, Janet; Bourne, Rupert R; Flaxman, Seth R; Jonas, Jost B; Keeffe, Jill; Limburg, Hans; Pesudovs, Konrad; Price, Holly; White, Richard A; Wong, Tien Y; Taylor, Hugh R; Resnikoff, Serge

    2016-03-01

    The purpose of this systematic review was to estimate the worldwide number of people with moderate and severe visual impairment (MSVI; presenting visual acuity <6/18 but ≥3/60) and blindness (presenting visual acuity <3/60) due to uncorrected refractive error (URE). In 2010, an estimated 6.8 million people were blind due to URE (a 7.9% increase from 1990) and 101.2 million (95% CI: 87.88-125.5 million) were vision impaired due to URE (a 15% increase since 1990), while the global population increased by 30% (1990-2010). The all-age age-standardized prevalence of URE blindness decreased 33% from 0.2% (95% CI: 0.1-0.2%) in 1990 to 0.1% (95% CI: 0.1-0.1%) in 2010, whereas the prevalence of URE MSVI decreased 25% from 2.1% (95% CI: 1.6-2.4%) in 1990 to 1.5% (95% CI: 1.3-1.9%) in 2010. In 2010, URE contributed 20.9% (95% CI: 15.2-25.9%) of all blindness and 52.9% (95% CI: 47.2-57.3%) of all MSVI worldwide. The contribution of URE to all MSVI ranged from 44.2 to 48.1% in all regions except South Asia, where it was 65.4% (95% CI: 62-72%). We conclude that in 2010, uncorrected refractive error continued as the leading cause of vision impairment and the second leading cause of blindness worldwide, affecting a total of 108 million people, or 1 in 90 persons.

  14. Dynamic analysis of the Nova Target Chamber to assess alignment errors due to ambient noise

    International Nuclear Information System (INIS)

    McCallen, D.B.; Murray, R.C.

    1984-01-01

    We performed a study to determine the dynamic behavior of the Nova Target Chamber. We conducted a free vibration analysis to determine the natural frequencies of vibration and the corresponding modeshapes of the target chamber. Utilizing the free vibration results, we performed forced vibration analysis to predict the displacements of the chamber due to ambient vibration. The input support motion for the forced vibration analysis was defined by a white noise acceleration spectrum which was based on previous measurements of ground noise near the Nova site. A special purpose computer program was prepared to process the results of the forced vibration analysis. The program yields distances by which the lines of sight of the various laser beams miss the target as a result of ambient vibrations. We also performed additional estimates of miss distance to provide bounds on the results. A description of the finite element model of the chamber, the input spectrum, and the results of the analyses are included

  15. Initialization shock in decadal hindcasts due to errors in wind stress over the tropical Pacific

    Science.gov (United States)

    Pohlmann, Holger; Kröger, Jürgen; Greatbatch, Richard J.; Müller, Wolfgang A.

    2017-10-01

    Low prediction skill in the tropical Pacific is a common problem in decadal prediction systems, especially for lead years 2-5 which, in many systems, is lower than in uninitialized experiments. On the other hand, the tropical Pacific is of almost worldwide climate relevance through its teleconnections with other tropical and extratropical regions and also of importance for global mean temperature. Understanding the causes of the reduced prediction skill is thus of major interest for decadal climate predictions. We look into the problem of reduced prediction skill by analyzing the Max Planck Institute Earth System Model (MPI-ESM) decadal hindcasts for the fifth phase of the Climate Model Intercomparison Project and performing a sensitivity experiment in which hindcasts are initialized from a model run forced only by surface wind stress. In both systems, sea surface temperature variability in the tropical Pacific is successfully initialized, but most skill is lost at lead years 2-5. Utilizing the sensitivity experiment enables us to pin down the reason for the reduced prediction skill in MPI-ESM to errors in wind stress used for the initialization. A spurious trend in the wind stress forcing displaces the equatorial thermocline in MPI-ESM unrealistically. When the climate model is then switched into its forecast mode, the recovery process triggers artificial El Niño and La Niña events at the surface. Our results demonstrate the importance of realistic wind stress products for the initialization of decadal predictions.

  16. Review of a fluid resuscitation protocol: "fluid creep" is not due to nursing error.

    Science.gov (United States)

    Faraklas, Iris; Cochran, Amalia; Saffle, Jeffrey

    2012-01-01

    Recent reviews of burn resuscitation have included the suggestion that "fluid creep" may be influenced by practitioner error. Our center uses a nursing-driven resuscitation protocol that permits titration of fluid based on hourly urine output, including the addition of colloid when patients fail to respond appropriately. The purpose of this study was to examine protocol compliance. We reviewed 140 patients (26 children) with burns of ≥20% TBSA who received protocol-directed resuscitation from 2005 to 2010. We compared each patient's actual hourly fluid infusion with that predicted by the protocol. Sixty-seven patients (48%) completed resuscitation using crystalloid alone, whereas 73 patients required colloid supplementation. Groups did not differ in age, gender, weight, or time from injury to admission. Patients requiring colloid had larger median total burns (33.0 vs 23.5% TBSA) and full-thickness burns (15.5 vs 4.5% TBSA) and more inhalation injuries (60.3 vs 28.4%, a statistically significant difference). Colloid patients had median predicted requirements of 5.4 ml/kg/%TBSA. Crystalloid-only patients required fluid volumes close to Parkland predictions (4.7 ml/kg/%TBSA), whereas patients who received colloid required more fluid than the predicted volume (7.5 ml/kg/%TBSA). However, the hourly difference between the predicted and received fluids was a median of only 1.0% (interquartile range: -6.1 to 11.1%) and did not differ between groups. Pediatric patients had greater calculated differences than adults. Crystalloid patients exhibited higher urine outputs than colloid patients until colloid was started, suggesting that early over-resuscitation did not contribute to fluid creep. Adherence to our protocol for burn shock resuscitation was excellent overall. Fluid creep exhibited by more seriously injured patients was not due to nurses' failure to follow the protocol. This review has illuminated some opportunities for practice improvement, possibly using a computerized decision support system.

  17. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)

  18. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.

  19. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    Full Text Available According to Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100 % of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit process develops in different steps: some are not susceptible to sampling procedures, while others may be carried out using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample exceeding a given threshold (the natural error rate), which may be indicative of possible fraud, and the mean amount of monetary error found in incorrect records. The aim of this study is to monitor both types of error jointly through an appropriate system of hypotheses, with particular attention to the Type II error, which indicates the risk of failing to report errors that exceed the upper precision limits.

  20. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneous detection and retransmission of data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). The PDFs are then used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.
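
    The qualitative behavior of MRC versus SC detection is easy to check numerically. The Monte Carlo sketch below estimates BPSK bit error rates with two-branch MRC and SC over i.i.d. Rayleigh fading on a single link; it is a toy sanity check, not the paper's two-hop closed-form analysis.

        import numpy as np

        rng = np.random.default_rng(0)
        n, snr_db = 200_000, 10.0
        snr = 10.0 ** (snr_db / 10.0)

        bits = rng.integers(0, 2, n)
        s = 2 * bits - 1                                   # BPSK symbols (+/-1)
        h = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
        w = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2 * snr)
        r = h * s + w                                      # two independent branches

        mrc = np.real(np.sum(np.conj(h) * r, axis=0))      # maximum ratio combining
        best = np.argmax(np.abs(h), axis=0)                # selection combining
        sc = np.real(np.conj(h[best, np.arange(n)]) * r[best, np.arange(n)])

        print("MRC BER:", np.mean((mrc > 0) != (s > 0)))
        print("SC  BER:", np.mean((sc > 0) != (s > 0)))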

  1. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model based on the Equivalent-in-Distribution (EiD) approach succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters while providing a simple, tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in the downlink of a cellular network. The accuracy of the proposed analysis is verified via Monte Carlo simulations.

  2. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL; Matalgah, Mustafa M [ORNL; Bobrek, Miljko [ORNL

    2015-01-01

    Traditional encryption techniques require packet overhead, introduce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional Advanced Encryption Standard (AES).

  3. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    Science.gov (United States)

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
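
    The attenuation such simulations reveal is easy to reproduce: skewing the marginals (for example, by exponentiating bivariate normal variates into lognormal ones) tends to pull the sample Pearson r below the correlation of the underlying normal data. A minimal sketch, not the authors' simulation design:

        import numpy as np

        rng = np.random.default_rng(1)
        rho, n, reps = 0.5, 20, 20_000

        rs_normal, rs_skewed = [], []
        for _ in range(reps):
            z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
            rs_normal.append(np.corrcoef(z[:, 0], z[:, 1])[0, 1])
            x, y = np.exp(z[:, 0]), np.exp(z[:, 1])   # lognormal marginals
            rs_skewed.append(np.corrcoef(x, y)[0, 1])

        print("mean r, normal data:", np.mean(rs_normal))  # close to 0.5
        print("mean r, skewed data:", np.mean(rs_skewed))  # attenuated below 0.5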

  4. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation

    OpenAIRE

    Alexandre Bryan Heinemann; Pepijn A.J. van Oort; Diogo Simões Fernandes; Aline de Holanda Nunes Maia

    2012-01-01

    Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs based on air temperature solar radiation models and to quantify the propagation of errors in simulated radiation on several APSIM/ORYZA crop model seasonal outputs, yield, ...

  5. An analytical examination of distortions in power spectra due to sampling errors

    International Nuclear Information System (INIS)

    Njau, E.C.

    1982-06-01

    The distortions introduced into the spectral energy densities of sinusoidal signals, as well as those of more complex signals, by different forms of signal sampling errors are derived and demonstrated analytically. Our approach involves, first, developing for each type of signal and corresponding form of sampling error an analytical expression that describes the faulty digitization process in terms of the features of the particular signal. Second, we take advantage of a method described elsewhere [IC/82/44] to relate, as far as possible, the true spectral energy density of the signal to the spectral energy density of the faulty digitization process. Third, we develop expressions that reveal the distortions formed in the directly computed spectral energy density of the digitized signal. It is evident from the formulations developed herein that the types of sampling errors considered may create false peaks and other distortions of non-negligible concern in computed power spectra. (author)
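
    The false peaks referred to here can be demonstrated numerically: a periodic error in the sampling instants phase-modulates the digitized signal, so the computed spectrum of a pure sinusoid grows spurious sidebands. A toy demonstration (parameters chosen so every component falls exactly on an FFT bin), not the author's analytical formulation:

        import numpy as np

        fs, f0, fm, n = 1024.0, 64.0, 8.0, 4096
        t_ideal = np.arange(n) / fs
        # periodic sampling-time error: instants wobble at 8 Hz, 100 us amplitude
        t_actual = t_ideal + 1e-4 * np.sin(2 * np.pi * fm * t_ideal)

        x = np.sin(2 * np.pi * f0 * t_actual)       # pure tone, wrongly sampled
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(n, 1 / fs)

        # the true 64 Hz line is flanked by false peaks at 56 and 72 Hz
        for f in (56.0, 64.0, 72.0):
            i = int(np.argmin(np.abs(freqs - f)))
            print(f"{f:5.1f} Hz : {10 * np.log10(spec[i] / spec.max()):7.1f} dB")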

  6. Uncertainty of the 20th century sea-level rise due to vertical land motion errors

    Science.gov (United States)

    Santamaría-Gómez, Alvaro; Gravelle, Médéric; Dangendorf, Sönke; Marcos, Marta; Spada, Giorgio; Wöppelmann, Guy

    2017-09-01

    Assessing the vertical land motion (VLM) at tide gauges (TG) is crucial to understanding global and regional mean sea-level changes (SLC) over the last century. However, estimating VLM with accuracy better than a few tenths of a millimeter per year is not a trivial undertaking, and many factors, including the reference frame uncertainty, must be considered. Using a novel reconstruction approach and updated geodetic VLM corrections, we found the terrestrial reference frame and the estimated VLM uncertainty may contribute to the global SLC rate error by ±0.2 mm yr-1. In addition, a spurious global SLC acceleration may be introduced, up to ±4.8 × 10-3 mm yr-2. Regional SLC rate and acceleration errors may be inflated by a factor of 3 compared to the global ones. The difference of VLM from two independent Glacio-Isostatic Adjustment models introduces global SLC rate and acceleration biases at the level of ±0.1 mm yr-1 and 2.8 × 10-3 mm yr-2, increasing up to 0.5 mm yr-1 and 9 × 10-3 mm yr-2 for the regional SLC. Errors in VLM corrections need to be budgeted when considering past and future SLC scenarios.

  7. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) is responsible for detecting channel availability by performing a binary hypothesis test for the busy and idle states. If the channel is busy, the secondary user (SU) cannot access it and refrains from data transmission; the SU is allowed to access the channel when the primary user (PU) does not use it (idle state). However, the channel is sensed over an imperfect communication link: fading, noise, and obstacles can cause sensing errors in PU signal detection. A false alarm detects an idle state as a busy channel, while a miss-identification detects a busy state as an idle channel. False alarms make the SU refrain from transmission and reduce the number of bits transmitted; miss-identifications cause the SU to collide with the PU transmission. This paper studies the performance of OSA based on the greedy approach with sensing errors, under a restriction on the maximum collision probability allowed (the collision threshold) by the PU network. The throughput of the SU and a spectrum capacity metric are used to evaluate OSA performance and to make comparisons with the error-free case as a function of slot number under the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as false detection for different SNRs, are presented. The obtained results show that CR users can gain the reward from the previous slot both with and without sensing errors, as indicated by the throughput improvement as the slot number increases. However, sensing over an imperfect channel degrades the throughput performance. The throughput of the SU and the spectrum capacity also improve as the maximum collision probability allowed by the PU network increases, but due to frequent collisions with the PU they decrease beyond a certain value of the collision threshold. Computer simulation is used to evaluate and validate this work.
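
    The trade-off the abstract describes can be captured in a per-slot back-of-the-envelope model: a false alarm wastes an idle slot, while a miss-detection triggers a transmission that collides with the PU. The functions below are our own simplification, not the paper's greedy-approach simulation.

        def su_throughput(p_idle, p_fa, rate=1.0):
            """Expected SU throughput per slot: the SU transmits useful bits
            only when the channel is idle and no false alarm occurs."""
            return rate * p_idle * (1.0 - p_fa)

        def collision_prob(p_busy, p_md):
            """Probability that a slot carries an SU transmission colliding
            with the PU (busy channel sensed as idle)."""
            return p_busy * p_md

        print(su_throughput(p_idle=0.7, p_fa=0.1))     # 0.63 vs 0.7 error-free
        print(collision_prob(p_busy=0.3, p_md=0.05))   # compare to threshold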

  8. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    Science.gov (United States)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.

  9. Errors in short circuit measurements due to spectral mismatch between sunlight and solar simulators

    Science.gov (United States)

    Curtis, H. B.

    1976-01-01

    Errors in short circuit current measurement were calculated for a variety of spectral mismatch conditions. The differences in spectral irradiance between terrestrial sunlight and three types of solar simulator were studied, as well as the differences in spectral response between three types of reference solar cells and various test cells. The simulators considered were a short arc xenon lamp AMO sunlight simulator, an ordinary quartz halogen lamp, and an ELH-type quartz halogen lamp. Three types of solar cells studied were a silicon cell, a cadmium sulfide cell and a gallium arsenide cell.
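
    Such errors are conventionally summarized by the spectral mismatch factor built from four overlap integrals of the two source spectra and the two cell responses; M = 1 means the reference cell calibration transfers without error. A sketch with toy spectra standing in for the measured irradiance and spectral-response curves:

        import numpy as np

        wl = np.linspace(300, 1100, 161)            # wavelength grid, nm

        def mismatch_factor(e_sun, e_sim, sr_test, sr_ref):
            """M = (int E_sun*SR_test * int E_sim*SR_ref) /
                   (int E_sim*SR_test * int E_sun*SR_ref)."""
            integral = lambda e, sr: np.trapz(e * sr, wl)
            return (integral(e_sun, sr_test) * integral(e_sim, sr_ref)) / \
                   (integral(e_sim, sr_test) * integral(e_sun, sr_ref))

        # toy curves: simulator red-shifted relative to sunlight,
        # test and reference cells with different response edges
        e_sun = np.exp(-((wl - 550.0) / 250.0) ** 2)
        e_sim = np.exp(-((wl - 650.0) / 250.0) ** 2)
        sr_test = np.clip((wl - 350.0) / 600.0, 0.0, 1.0)
        sr_ref = np.clip((wl - 400.0) / 500.0, 0.0, 1.0)
        print(mismatch_factor(e_sun, e_sim, sr_test, sr_ref))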

  10. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  11. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  12. Dosage uniformity problems which occur due to technological errors in extemporaneously prepared suppositories in hospitals and pharmacies

    Science.gov (United States)

    Kalmár, Éva; Lasher, Jason Richard; Tarry, Thomas Dean; Myers, Andrea; Szakonyi, Gerda; Dombi, György; Baki, Gabriella; Alexander, Kenneth S.

    2013-01-01

    The availability of suppositories in Hungary, especially in clinical pharmacy practice, is usually provided by extemporaneous preparations. Due to the known advantages of rectal drug administration, its benefits are frequently utilized in pediatrics. However, errors during the extemporaneous manufacturing process can lead to non-homogenous drug distribution within the dosage units. To determine the root cause of these errors and provide corrective actions, we studied suppository samples prepared with exactly known errors using both cerimetric titration and HPLC technique. Our results show that the most frequent technological error occurs when the pharmacist fails to use the correct displacement factor in the calculations which could lead to a 4.6% increase/decrease in the assay in individual dosage units. The second most important source of error can occur when the molding excess is calculated solely for the suppository base. This can further dilute the final suppository drug concentration causing the assay to be as low as 80%. As a conclusion we emphasize that the application of predetermined displacement factors in calculations for the formulation of suppositories is highly important, which enables the pharmacist to produce a final product containing exactly the determined dose of an active substance despite the different densities of the components. PMID:25161378
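
    The displacement-factor arithmetic the authors emphasize is simple: if the displacement value f is the number of grams of drug that displace one gram of base, each dose displaces dose/f grams of base. A sketch with invented numbers (the function and values are ours, not the paper's formulation):

        def base_required(n_supp, blank_mass_g, dose_g, displacement_value):
            """Suppository base needed for n_supp units carrying dose_g drug each.

            displacement_value: grams of drug that displace 1 g of base, so each
            dose displaces dose_g / displacement_value grams of base.
            """
            total_drug = n_supp * dose_g
            return n_supp * blank_mass_g - total_drug / displacement_value

        # 12 suppositories in a 2.0 g mold, 0.3 g of drug each, f = 1.5
        print(base_required(12, 2.0, 0.3, 1.5))   # 21.6 g of base
        # ignoring displacement (drug assumed to displace its own weight):
        print(12 * 2.0 - 12 * 0.3)                # 20.4 g, giving mis-dosed units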

  13. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation

    Directory of Open Access Journals (Sweden)

    Alexandre Bryan Heinemann

    2012-01-01

    Full Text Available Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs based on air-temperature solar radiation models and to quantify the propagation of errors in simulated radiation into several APSIM/ORYZA crop model seasonal outputs: yield, biomass, leaf area index (LAI), and total accumulated solar radiation (SRA) during the crop cycle. The accuracy of the five models for estimating daily solar radiation was similar and did not differ substantially among sites. For water-limited environments (no irrigation), the crop model outputs yield, biomass, and LAI were not sensitive to the uncertainties in the radiation models studied here.
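
    One widely used air-temperature approach is the Hargreaves model, Rs = kRs * sqrt(Tmax - Tmin) * Ra, where Ra is the extraterrestrial radiation and kRs an empirical coefficient. Whether Hargreaves was among the five models evaluated is not stated in this record; the sketch below assumes Ra is already known for the site and day.

        import math

        def hargreaves_rs(tmax_c, tmin_c, ra_mj, krs=0.16):
            """Daily solar radiation (MJ m-2 d-1) from the diurnal temperature range.

            ra_mj: extraterrestrial radiation for the site and day;
            krs: empirical coefficient (~0.16 inland, ~0.19 coastal).
            """
            return krs * math.sqrt(max(0.0, tmax_c - tmin_c)) * ra_mj

        # e.g., Tmax 31 C, Tmin 19 C, Ra = 38 MJ m-2 d-1
        print(hargreaves_rs(31.0, 19.0, 38.0))   # about 21 MJ m-2 d-1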

  14. Impact of error management culture on knowledge performance in professional service firms

    Directory of Open Access Journals (Sweden)

    Tabea Scheel

    2014-01-01

    Full Text Available Knowledge is the most crucial resource of the 21st century. For professional service firms (PSFs), knowledge represents the input as well as the output, and thus the fundamental basis for performance. Like every organization, PSFs have to deal with errors, and how they do so indicates their error culture. Considering the positive potential of errors (e.g., innovation), error management culture is positively related to organizational performance. This longitudinal quantitative study investigates the impact of error management culture on knowledge performance in four waves. The study was conducted in 131 PSFs, i.e., tax accounting offices. As a standard quality management system (QMS) was assumed to moderate the relationship between error management culture and knowledge performance, the offices' ISO 9000 certification was assessed. Error management culture correlated positively and significantly with knowledge performance and predicted knowledge performance one year later. While ISO 9000 certification correlated positively with knowledge performance, its assumed moderation of the relationship between error management culture and knowledge performance was not consistent. The process-oriented QMS seems to function as a facilitator for the more behavior-oriented error management culture. However, the benefit of ISO 9000 certification for tax accounting remains to be proven. Given the impact of error management culture on knowledge performance, PSFs should focus on actively promoting positive attitudes toward errors.

  15. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    Science.gov (United States)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt, there will be complications like missing segments caused by occlusion, and a number of range discontinuities will be present. Within a framework of stochastic geometry, conditions are found for the cases in which range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is not larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.

  16. Uncertainty on PIV mean and fluctuating velocity due to bias and random errors

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Particle image velocimetry is a powerful and flexible fluid velocity measurement tool. In spite of its widespread use, the uncertainty of PIV measurements has not been sufficiently addressed to date. The calculation and propagation of local, instantaneous uncertainties on PIV results into the measured mean and Reynolds stresses are demonstrated for four PIV error sources that impact uncertainty through the vector computation: particle image density, diameter, displacement and velocity gradients. For the purpose of this demonstration, velocity data are acquired in a rectangular jet. Hot-wire measurements are compared to PIV measurements with velocity fields computed using two PIV algorithms. Local uncertainty on the velocity mean and Reynolds stress for these algorithms are automatically estimated using a previously published method. Previous work has shown that PIV measurements can become ‘noisy’ in regions of high shear as well as regions of small displacement. This paper also demonstrates the impact of these effects by comparing PIV data to data acquired using hot-wire anemometry, which does not suffer from the same issues. It is confirmed that flow gradients, large particle images and insufficient particle image displacements can result in elevated measurements of turbulence levels. The uncertainty surface method accurately estimates the difference between hot-wire and PIV measurements for most cases. The uncertainty based on each algorithm is found to be unique, motivating the use of algorithm-specific uncertainty estimates. (paper)
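
    Propagating instantaneous uncertainties into the mean typically combines two pieces: the random scatter of the samples (s / sqrt(N)) and the propagated per-sample uncertainties (sqrt(sum u_i^2) / N). The sketch below combines them in quadrature, a common simplification rather than the specific published method the authors apply.

        import math

        def mean_with_uncertainty(samples, u_inst):
            """Mean velocity and a combined uncertainty estimate."""
            n = len(samples)
            mean = sum(samples) / n
            s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
            u_random = s / math.sqrt(n)                         # random scatter
            u_prop = math.sqrt(sum(u * u for u in u_inst)) / n  # propagated
            return mean, math.hypot(u_random, u_prop)

        v = [10.2, 9.8, 10.5, 10.1, 9.9]   # instantaneous velocities, m/s
        u = [0.3, 0.3, 0.4, 0.3, 0.3]      # per-sample uncertainty estimates
        print(mean_with_uncertainty(v, u))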

  17. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    Science.gov (United States)

    Arba-Mosquera, Samuel; Aslanides, Ioannis M.

    2012-01-01

    Purpose: To analyze the effects of eye-tracker performance on the pulse positioning errors during refractive surgery. Methods: A comprehensive model was developed that directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature movements, as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results: Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate essentially duplicate the pulse positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse positioning errors. Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
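
    A first-order way to see why acquisition rate and latency dominate is to multiply the eye speed by the total staleness of the position sample. The numbers below are illustrative, not parameters of any specific laser platform.

        def pulse_positioning_error_mm(eye_speed_mm_s, acq_rate_hz,
                                       latency_ms, scanner_ms):
            """Eye displacement accumulated between the tracker measurement
            and pulse delivery (worst case: one full acquisition period
            plus tracker latency plus scanner positioning time)."""
            delay_s = 1.0 / acq_rate_hz + (latency_ms + scanner_ms) / 1000.0
            return eye_speed_mm_s * delay_s

        # 100 mm/s eye drift, 1 kHz tracker, 2 ms latency, 1 ms scanner settle
        print(pulse_positioning_error_mm(100.0, 1000.0, 2.0, 1.0))  # 0.4 mm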

  18. Maintenance Strategies to Reduce Downtime Due to Machine Positional Errors

    OpenAIRE

    Shagluf, Abubaker; Longstaff, Andrew P.; Fletcher, Simon

    2014-01-01

    Manufacturing strives to reduce waste and increase Overall Equipment Effectiveness (OEE). When managing machine tool maintenance a manufacturer must apply an appropriate decision technique in order to reveal hidden costs associated with production losses, reduce equipment downtime competently and similarly identify the machines' performance. Total productive maintenance (TPM) is a maintenance program that involves concepts for maintaining plant and equipment effectively. OEE is a pow...
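
    OEE, as referenced here, is conventionally the product of availability, performance, and quality rates. A minimal calculator with illustrative numbers:

        def oee(availability, performance, quality):
            """Overall Equipment Effectiveness = availability * performance * quality."""
            return availability * performance * quality

        # 90% uptime, running at 95% of ideal rate, 99% of parts in tolerance
        print(f"OEE = {oee(0.90, 0.95, 0.99):.1%}")   # about 84.6%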

  19. Evapotranspiration estimates and consequences due to errors in the determination of the net radiation and advective effects

    International Nuclear Information System (INIS)

    Oliveira, G.M. de; Leitao, M. de M.V.B.R.

    2000-01-01

    The objective of this study was to analyze the consequences for evapotranspiration (ET) estimates, during the growing cycle of a peanut crop, of errors committed in the determination of the radiation balance (Rn), as well as those caused by advective effects. This research was conducted at the Experimental Station of CODEVASF in an irrigated perimeter located in the city of Rodelas, BA, during the period of September to December of 1996. The results showed that errors of the order of 2.2 MJ m-2 d-1 in the calculation of Rn, and consequently in the estimate of ET, can occur depending on the time considered for the daily total of Rn. It was verified that the areas surrounding the experimental field, as well as the areas of exposed soil within the field, contributed significantly to the generation of local advection of sensible heat, which resulted in an increase of the evapotranspiration.

  20. Prediction of DVH parameter changes due to setup errors for breast cancer treatment based on 2D portal dosimetry

    International Nuclear Information System (INIS)

    Nijsten, S. M. J. J. G.; Elmpt, W. J. C. van; Mijnheer, B. J.; Minken, A. W. H.; Persoon, L. C. G. G.; Lambin, P.; Dekker, A. L. A. J.

    2009-01-01

    Electronic portal imaging devices (EPIDs) are increasingly used for portal dosimetry applications. In our department, EPIDs are clinically used for two-dimensional (2D) transit dosimetry. Predicted and measured portal dose images are compared to detect dose delivery errors caused for instance by setup errors or organ motion. The aim of this work is to develop a model to predict dose-volume histogram (DVH) changes due to setup errors during breast cancer treatment using 2D transit dosimetry. First, correlations between DVH parameter changes and 2D gamma parameters are investigated for different simulated setup errors, which are described by a binomial logistic regression model. The model calculates the probability that a DVH parameter changes more than a specific tolerance level and uses several gamma evaluation parameters for the planning target volume (PTV) projection in the EPID plane as input. Second, the predictive model is applied to clinically measured portal images. Predicted DVH parameter changes are compared to calculated DVH parameter changes using the measured setup error resulting from a dosimetric registration procedure. Statistical accuracy is investigated by using receiver operating characteristic (ROC) curves and values for the area under the curve (AUC), sensitivity, specificity, positive and negative predictive values. Changes in the mean PTV dose larger than 5%, and changes in V90 and V95 larger than 10% are accurately predicted based on a set of 2D gamma parameters. Most pronounced changes in the three DVH parameters are found for setup errors in the lateral-medial direction. AUC, sensitivity, specificity, and negative predictive values were between 85% and 100% while the positive predictive values were lower but still higher than 54%. Clinical predictive value is decreased due to the occurrence of patient rotations or breast deformations during treatment, but the overall reliability of the predictive model remains high. Based on our
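
    The predictive model is a binomial logistic regression from 2D gamma-evaluation summary statistics to the probability that a DVH parameter shifts beyond tolerance. A schematic fit on synthetic features (mean gamma and a failed-pixel fraction are plausible inputs; the data-generating rule below is invented, not the authors' trained model):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 200
        # per-fraction features for the PTV projection in the EPID plane
        mean_gamma = rng.uniform(0.2, 1.5, n)
        fail_frac = np.clip(mean_gamma / 2 + rng.normal(0, 0.1, n), 0, 1)
        X = np.column_stack([mean_gamma, fail_frac])

        # synthetic label: did the mean PTV dose change by more than 5%?
        y = (mean_gamma + rng.normal(0, 0.2, n) > 1.0).astype(int)

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba([[1.2, 0.65]])[0, 1])  # P(change > tolerance)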

  1. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    International Nuclear Information System (INIS)

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-01-01

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift

  2. Liver Hematoma Presented as Midgut Volvulus Due To Medical Error: A Case Report

    Directory of Open Access Journals (Sweden)

    Karimi

    2016-02-01

    Full Text Available Introduction: The use of umbilical catheterization is a usual practice in neonatal units, but insertion of the catheter has potential complications. Case Presentation: Here, we report on a seven-day-old female newborn admitted for abdominal distention and bile vomiting. The initial diagnosis was midgut volvulus, for which an operation was performed. During the surgery, no intestinal malrotation, mesenteric defect, or atresia was observed. The postoperative diagnosis was abdominal wall hematoma, round ligament hematoma, and ileus, as well as a sub-capsular liver hematoma. The patient had been hospitalized at birth in a neonatal intensive care unit (NICU). With the appearance of icterus on the first day of life, an attempt was made at the NICU to insert an umbilical catheter, which failed. Conclusions: The complication found in the patient was the result of an aggressive act (the umbilical catheter insertion attempt). This intervention should not be carried out unless there are clear indications, and if so, it should be done with much care.

  3. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    Science.gov (United States)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  4. Technical Note: Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry

    International Nuclear Information System (INIS)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frederic

    2010-01-01

    Purpose: The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Methods: Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Results: Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Conclusions: Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  5. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    Science.gov (United States)

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
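
    For readers unfamiliar with the fitting step, the sketch below shows an mtanh pedestal fit of the kind described, using scipy.optimize.curve_fit on a synthetic profile. The parameterization (height, offset, position, width, core slope) is one common convention, not necessarily the exact form used in the JET fitting tool:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mtanh(x, s):
        # Modified hyperbolic tangent: tanh with a linear "slope" term
        # added on the inner (core) side.
        return ((1.0 + s * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

    def pedestal(r, h, b, r0, w, s):
        # h: pedestal height, b: offset, r0: pedestal centre,
        # w: pedestal width, s: core slope.
        return b + 0.5 * h * (1.0 + mtanh((r0 - r) / (0.5 * w), s))

    # Synthetic ELM-synchronised profile: true width 0.04 m, plus noise.
    rng = np.random.default_rng(0)
    r = np.linspace(3.7, 3.9, 200)
    data = pedestal(r, 1.0, 0.05, 3.82, 0.04, 0.1) + rng.normal(0.0, 0.03, r.size)

    popt, _ = curve_fit(pedestal, r, data, p0=[1.0, 0.0, 3.8, 0.05, 0.0])
    print("fitted pedestal width:", popt[3])
    ```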

  6. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Unlike other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single-integral expressions for the ASEP for different modulation schemes in the presence of aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
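
    Closed-form ASEP expressions of this kind are typically validated against Monte Carlo runs like the toy below: a single Rayleigh-faded QPSK link in which the aggregate network interference is crudely folded into a Gaussian noise term. This is a drastically simplified stand-in for the paper's multi-tier setup, for illustration only:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    snr = 10 ** (10.0 / 10.0)  # 10 dB average SNR

    # Gray-mapped QPSK symbols with unit energy.
    bits = rng.integers(0, 2, (n, 2))
    sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

    # Rayleigh-faded link; interference-plus-noise approximated as Gaussian.
    h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(0.5 / snr)
    y = h * sym + noise

    # Coherent detection (perfect CSI), then hard decision per quadrant.
    z = y / h
    dec = (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)
    print("Monte Carlo ASEP estimate:", np.mean(dec != sym))
    ```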

  7. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    Science.gov (United States)

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. This was a retrospective review of participants' procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete a LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ2(5) = 24.96, p < 0.05): recovery attempts increased for some error types but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.05). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the procedure.

  8. PERFORMANCE OF THE ZERO FORCING PRECODING MIMO BROADCAST SYSTEMS WITH CHANNEL ESTIMATION ERRORS

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Liu Zhanli; Wang Yan; You Xiaohu

    2007-01-01

    In this paper, the effect of channel estimation errors on Zero Forcing (ZF) precoding in Multiple Input Multiple Output Broadcast (MIMO BC) systems is studied. Based on two kinds of Gaussian estimation error models, the performance analysis is conducted under different power allocation strategies. Analysis and simulation show that if the covariance of the channel estimation errors is independent of the received Signal to Noise Ratio (SNR), imperfect channel knowledge severely deteriorates the sum capacity and the Bit Error Rate (BER) performance. However, with orthogonal training and Minimum Mean Square Error (MMSE) channel estimation, the sum capacity and BER performance are consistent with those under perfect Channel State Information (CSI), with only a small performance degradation.

  9. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face recognition systems to errors in eye detection.

  10. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized Least Squares (GLS) estimators in ...

  11. Calculation of stochastic broadening due to noise and field errors in the simple map in action-angle coordinates

    Science.gov (United States)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2008-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [1]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [2]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [2]. The simple map in action-angle coordinates is applied to calculate the stochastic broadening due to magnetic noise and field errors. Mode numbers for noise + field errors from the DIII-D tokamak are used: (m,n) = (3,1), (4,1), (6,2), (7,2), (8,2), (9,3), (10,3), (11,3), (12,3) [3]. The common amplitude δ is varied from 0.8×10⁻⁵ to 2.0×10⁻⁵. For this noise and field errors, the width of the stochastic layer in the simple map is calculated. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793. 1. A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007). 2. O. Kerwin, A. Punjabi, and H. Ali, to appear in Physics of Plasmas. 3. A. Punjabi and H. Ali, P1.012, 35th EPS Conference on Plasma Physics, June 9-13, 2008, Hersonissos, Crete, Greece.

  12. Wavefront-Error Performance Characterization for the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) Science Instruments

    Science.gov (United States)

    Aronstein, David L.; Smith, J. Scott; Zielinski, Thomas P.; Telfer, Randal; Tournois, Severine C.; Moore, Dustin B.; Fienup, James R.

    2016-01-01

    The science instruments (SIs) comprising the James Webb Space Telescope (JWST) Integrated Science Instrument Module (ISIM) were tested in three cryogenic-vacuum test campaigns in the NASA Goddard Space Flight Center (GSFC)'s Space Environment Simulator (SES). In this paper, we describe the results of optical wavefront-error performance characterization of the SIs. The wavefront error is determined using image-based wavefront sensing (also known as phase retrieval), and the primary data used by this process are focus sweeps, a series of images recorded by the instrument under test in its as-used configuration, in which the focal plane is systematically changed from one image to the next. High-precision determination of the wavefront error also requires several sources of secondary data, including 1) spectrum, apodization, and wavefront-error characterization of the optical ground-support equipment (OGSE) illumination module, called the OTE Simulator (OSIM), 2) plate scale measurements made using a Pseudo-Nonredundant Mask (PNRM), and 3) pupil geometry predictions as a function of SI and field point, which are complicated because of a tricontagon-shaped outer perimeter and small holes that appear in the exit pupil due to the way that different light sources are injected into the optical path by the OGSE. One set of wavefront-error tests, for the coronagraphic channel of the Near-Infrared Camera (NIRCam) Longwave instruments, was performed using data from transverse translation diversity sweeps instead of focus sweeps, in which a sub-aperture is translated and/or rotated across the exit pupil of the system. Several optical-performance requirements that were verified during this ISIM-level testing are levied on the uncertainties of various wavefront-error-related quantities rather than on the wavefront errors themselves. This paper also describes the methodology, based on Monte Carlo simulations of the wavefront-sensing analysis of focus-sweep data, used to establish these uncertainties.

  13. Inflated applicants: attribution errors in performance evaluation by professionals.

    Directory of Open Access Journals (Sweden)

    Samuel A Swift

    Full Text Available When explaining others' behaviors, achievements, and failures, it is common for people to attribute too much influence to disposition and too little influence to structural and situational factors. We examine whether this tendency leads even experienced professionals to make systematic mistakes in their selection decisions, favoring alumni from academic institutions with high grade distributions and employees from forgiving business environments. We find that candidates benefiting from favorable situations are more likely to be admitted and promoted than their equivalently skilled peers. The results suggest that decision-makers take high nominal performance as evidence of high ability and do not discount it by the ease with which it was achieved. These results clarify our understanding of the correspondence bias using evidence from both archival studies and experiments with experienced professionals. We discuss implications for both admissions and personnel selection practices.

  14. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    Science.gov (United States)

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Performance degradation of space Stirling cryocoolers due to gas contamination

    Science.gov (United States)

    Liu, Xin-guang; Wu, Yi-nong; Yang, Shao-hua; Zhang, Xiao-ming; Lu, Guo-hua; Zhang, Li

    2011-08-01

    With the extensive application of infrared detection techniques, Stirling cryocoolers, used as an active cooling source, have been developed vigorously in China. Once a cooler's cooling performance satisfies the mission requirements, its reliability level becomes crucial for its application. Among all the possible failure mechanisms, failure analyses have identified gas contamination as the most notorious cause of cooler performance degradation. To characterize gas contamination, experiments were designed and carried out to quantitatively analyze the relationship between failure and performance. Combining the test results with the outgassing characteristics of the non-metallic materials in the cryocooler, a degradation model of cooling performance was derived, under some assumptions, as T(t) = T0 + A[1 - exp(-t/B)], where t is the running time, T is the Kelvin cooling temperature, and T0, A, and B are model parameters obtained by the least-squares method: T0 is the fitted initial cooling temperature, A is the maximum range of performance degradation, and B is the time constant of the degradation. The model parameters vary when a cryocooler runs at different cooling temperature ranges or is treated by different cleaning processes. To verify the applicability of the degradation model, fits to eight groups of cooler lifetime-test data were carried out. The final work indicated that this model fits the performance degradation of space Stirling cryocoolers due to gas contamination well and can be used to predict or evaluate a cooler's lifetime. Gaseous contamination does not cause severe performance degradation until the contaminants accumulate to a certain amount, but it can be fatal once they do, so it is more serious for coolers whose required lifetime exceeds 10,000 h. Measures to control or minimize the damage, including the treatment of internal materials in long-life cryocoolers, are discussed as well.
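
    The quoted degradation model lends itself to a direct least-squares fit. A minimal sketch with made-up life-test data (the numbers below are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cooling_temp(t, T0, A, B):
        # T(t) = T0 + A * [1 - exp(-t/B)]: T0 is the initial cooling
        # temperature, A the maximum degradation range, B the time constant.
        return T0 + A * (1.0 - np.exp(-t / B))

    # Illustrative life-test data (hours, K) -- not from the paper.
    t = np.array([0., 1000., 2000., 4000., 8000., 12000.])
    T = np.array([60.1, 61.0, 61.8, 62.9, 64.0, 64.4])

    (T0, A, B), _ = curve_fit(cooling_temp, t, T, p0=[60.0, 5.0, 4000.0])
    print(f"T0={T0:.1f} K, A={A:.1f} K, B={B:.0f} h")
    ```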

  16. Focusing performance of a multilayer Laue lens with layer placement error described by dynamical diffraction theory.

    Science.gov (United States)

    Hu, Lingfei; Chang, Guangcai; Liu, Peng; Zhou, Liang

    2015-07-01

    The multilayer Laue lens (MLL) is essentially a linear zone plate with a large aspect ratio, which can theoretically focus hard X-rays to well below 1 nm with high efficiency when ideal structures are used. However, the focusing performance of an MLL depends heavily on the quality of the layers, especially the layer placement error, which always exists in real MLLs. Here, a dynamical modeling approach based on coupled wave theory is proposed to study the focusing performance of an MLL with layer placement error. The simulation results show that this method can be applied to various forms of layer placement error.

  17. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    The main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants as information and digital technologies make great progress and mature. The work comprised a survey of human factors engineering issues in advanced MCRs, using both a model-based approach and a literature-survey-based approach, together with an analysis of human error types and performance shaping factors for three representative human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA.

  18. Kalman filter application to mitigate the errors in the trajectory simulations due to the lunar gravitational model uncertainty

    International Nuclear Information System (INIS)

    Gonçalves, L D; Rocco, E M; De Moraes, R V; Kuga, H K

    2015-01-01

    This paper simulates part of the orbital trajectory of the Lunar Prospector mission to analyze the relevance of using a Kalman filter to estimate the trajectory. The study considers the disturbance due to the lunar gravitational potential using one of the most recent models, the LP100K model, which is based on spherical harmonics and considers degree and order up to 100. To simplify the expression of the gravitational potential and, consequently, to reduce the computational effort required in the simulation, lower values of degree and order are used in some cases. An analysis is then made of the error introduced into the simulations when such values of degree and order are used to propagate the spacecraft trajectory and control. This analysis uses the standard deviation that characterizes the uncertainty for each of the values of degree and order used in the LP100K model for the satellite orbit. With the uncertainty of the adopted gravity model known, lunar orbital trajectory simulations may be carried out considering these uncertainty values. Furthermore, a Kalman filter is used, which takes into account the sensor uncertainty that defines the satellite position at each step of the simulation and the uncertainty of the model, by means of the characteristic variance of the truncated gravity model. This procedure thus represents an effort to bring the results obtained using lower values of the degree and order of the spherical harmonics closer to the results that would be attained if the maximum accuracy of the LP100K model were adopted. A comparison is also made between the error in the satellite position when the Kalman filter is used and when it is not. The data for the comparison were obtained from the standard deviation in the velocity increment of the space vehicle. (paper)
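
    As a one-dimensional caricature of the filtering idea (not the authors' implementation), the sketch below blends noisy position measurements of variance R with a model prediction whose variance Q stands in for the truncated gravity model's uncertainty:

    ```python
    import numpy as np

    def kalman_1d(z, x0, P0, Q, R):
        """Scalar Kalman filter: predict with model uncertainty Q (here the
        variance attributed to the truncated gravity model), update with
        position measurements z of variance R (the sensor uncertainty)."""
        x, P, out = x0, P0, []
        for zk in z:
            P = P + Q                 # predict: truncation error inflates P
            K = P / (P + R)           # Kalman gain
            x = x + K * (zk - x)      # update with the measurement
            P = (1.0 - K) * P
            out.append(x)
        return np.array(out)

    # Illustrative: noisy along-track position residuals (km), truth = 0.
    rng = np.random.default_rng(2)
    z = rng.normal(0.0, 0.1, 50)
    est = kalman_1d(z, x0=0.5, P0=1.0, Q=1e-4, R=0.01)
    print("final estimate:", est[-1])
    ```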

  19. [Medication reconciliation: an innovative experience in an internal medicine unit to decrease errors due to inacurrate medication histories].

    Science.gov (United States)

    Pérennes, Maud; Carde, Axel; Nicolas, Xavier; Dolz, Manuel; Bihannic, René; Grimont, Pauline; Chapot, Thierry; Granier, Hervé

    2012-03-01

    An inaccurate medication history may prevent the discovery of a pre-admission iatrogenic event or lead to interrupted drug therapy during hospitalization. Medication reconciliation is a process that ensures the transfer of medication information at admission to the hospital. The aims of this prospective study were to evaluate the clinical value of this concept and the resources needed for its implementation. We included patients aged 65 years or over admitted to the internal medicine unit between June and October 2010. We obtained an accurate list of each patient's home medications. This list was then compared with medication orders. All medication variances were classified as intended or unintended. An internist and a pharmacist classified the clinical importance of each unintended variance. Sixty-one patients (mean age: 78 ± 7.4 years) were included in our study. We identified 38 unintended discrepancies, an average of 0.62 per patient. Twenty-five patients (41%) had one or more unintended discrepancies at admission. Contact with the community pharmacist allowed us to identify 21 (55%) of the unintended discrepancies. The most common errors were the omission of a regularly used medication (76%) and an incorrect dosage (16%). Our intervention resulted in order changes by the physician for 30 (79%) unintended discrepancies. Fifty percent of the unintended variances were judged by the internist, and 76% by the pharmacist, to be clinically significant. Admission to the hospital is a critical transition point for the continuity of care in medication management. Medication reconciliation can identify and resolve errors due to inaccurate medication histories. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  20. Predicting the outcomes of performance error indicators on accreditation status in the nuclear power industry

    International Nuclear Information System (INIS)

    Wilson, P.A.

    1986-01-01

    The null hypothesis for this study was that there is no significant difference between accredited and non-accredited programs on the following performance error indicators: (1) number of significant event reports per unit, (2) number of forced outages per unit, (3) number of unplanned automatic scrams per unit, and (4) amount of equivalent availability per unit. A sample of 90 nuclear power plants was selected for this study. Data were summarized from two databases maintained by the Institute of Nuclear Power Operations. The results of this study did not support the research hypothesis: there was no significant difference between the accredited and non-accredited programs on any of the four performance error indicators. The primary conclusions of this study include the following: (1) The four selected performance error indicators cannot be used individually or collectively to predict accreditation status in the nuclear power industry. (2) Annual performance error indicator ratings cannot be used to determine the effects of performance-based training on plant performance. (3) The four selected performance error indicators cannot be used to measure the effect of operator job performance on plant effectiveness.

  1. Quantification of the flow measurement performance due to installation changes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Ana Luisa Auler da Silva [Petrobras Transporte S.A. (TRANSPETRO), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The objective is to present a criterion to identify and quantify improvements in the performance of flowmeters due to alterations in the design of a measurement system. The method was developed with improvements to operational systems in mind, but it is also useful in custody transfer systems. It takes the estimation of measurement-system uncertainty as its basis and seeks a more intuitive approach that aggregates the experience and knowledge available in the company. The measurand is identified based on the available information, the processing of the data, and the method of calculation. Improvements in the installation of measurement systems are prioritized based on the measurement uncertainty of the CTPL (temperature and pressure correction factor) and of the MF (meter factor). In the case of the CTPL, the influence of the pressure, temperature, and density instruments on the results of the measurement system is evaluated; in the case of the MF, the influence of the calibration system. The possibilities for calibrating the flowmeter in industry are presented, including the calibration of the operational meter against custody transfer systems. The analysis of networks includes calculating the uncertainty in the closing of the network balance, calculating the uncertainties and contributions of the measurement systems that form part of the network, comparing improvements and costs for each alteration in the design of the measurement system, and computing its cost per 0.01% improvement in the uncertainty of the closing. (author)

  2. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  3. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    Science.gov (United States)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
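
    The decomposition referred to expresses NSE in terms of the linear correlation r, the variability ratio alpha, and the normalised bias beta. A minimal sketch following Gupta et al. (2009), with made-up flow values:

    ```python
    import numpy as np

    def nse_decomposition(sim, obs):
        """NSE = 2*alpha*r - alpha**2 - beta_n**2 (Gupta et al., 2009), where
        r is the linear correlation, alpha = sigma_s/sigma_o the variability
        ratio, and beta_n = (mu_s - mu_o)/sigma_o the normalised bias."""
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        r = np.corrcoef(sim, obs)[0, 1]
        alpha = sim.std() / obs.std()
        beta_n = (sim.mean() - obs.mean()) / obs.std()
        nse = 2.0 * alpha * r - alpha**2 - beta_n**2
        return nse, r, alpha, beta_n

    obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])
    sim = np.array([1.2, 2.6, 2.4, 4.4, 4.1, 5.6])
    print(nse_decomposition(sim, obs))
    ```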

  4. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    Science.gov (United States)

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Errors in a flanker task were therefore followed by neutral feedback (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. The results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
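
    To make the mechanism concrete, the sketch below implements a minimal LCG (Numerical Recipes constants, not the specific generator studied in the paper) and shows the short period of its low-order bits, which is the kind of structure that can resonate with power-of-two lattice sizes:

    ```python
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Minimal linear congruential generator (Numerical Recipes constants)."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    # With a power-of-two modulus, the k lowest bits of an LCG cycle with
    # period 2**k, so bond updates striding a power-of-two lattice can fall
    # into lockstep with the generator.
    g = lcg(12345)
    low_bits = [next(g) & 0b111 for _ in range(16)]
    print(low_bits)  # the 3 low bits repeat with period 8
    ```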

  6. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
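
    In outline, the SPAR-H calculation is a nominal error rate scaled by PSF multipliers, with an adjustment factor applied when several PSFs act negatively so the result stays below 1.0. A sketch under those published conventions; the multiplier values in the example are illustrative, and PSFs not supplied are assumed nominal (multiplier 1.0):

    ```python
    def spar_h_hep(task_type, psf_multipliers):
        """Sketch of SPAR-H quantification: nominal HEP (0.01 diagnosis,
        0.001 action) times the product of PSF multipliers; an adjustment
        factor applies when 3 or more PSFs are negative (multiplier > 1)."""
        nhep = {"diagnosis": 0.01, "action": 0.001}[task_type]
        composite = 1.0
        negative = 0
        for m in psf_multipliers:
            composite *= m
            if m > 1.0:
                negative += 1
        if negative >= 3:
            return nhep * composite / (nhep * (composite - 1.0) + 1.0)
        return nhep * composite

    # Example: an action task under high stress (x2), poor ergonomics (x10),
    # and barely adequate time (x10).
    print(spar_h_hep("action", [2.0, 10.0, 10.0]))  # ~0.17 rather than 0.2
    ```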

  7. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    Science.gov (United States)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high-bandwidth, high-data-rate systems, which is fulfilled by wireless optical communication or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of cost-effectiveness and licence-free bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances. In this paper, we consider a polarization shift keying (POLSK) system with wavelength and time diversity over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for the system's outage probability and average bit error rate (BER). The results show that wavelength and time diversity schemes enhance the system's performance.

  8. SU-F-J-131: Reproducibility of Positioning Error Due to Temporarily Indwelled Urethral Catheter for Urethra-Sparing Prostate IMRT

    International Nuclear Information System (INIS)

    Hirose, K; Takai, Y; Sato, M; Hatayama, Y; Kawaguchi, H; Aoki, M; Akimoto, H; Komai, F; Souma, M; Obara, H; Suzuki, M

    2016-01-01

    Purpose: The purpose of this study was to prospectively assess the reproducibility of positioning errors due to a temporarily indwelled catheter in urethra-sparing image-guided (IG) IMRT. Methods: Ten patients received urethra-sparing prostate IG-IMRT with implanted fiducials. After the first CT scan was performed in the supine position, a 6-Fr catheter was indwelled into the urethra, and a second CT was taken for planning. While the PTV received 80 Gy, a 5% dose reduction was applied to the urethral PRV along the catheter. Additional CT scans were also performed at the 5th and 30th fractions. Points of interest (POIs) were set on the posterior edge of the prostate at the beam isocenter level (POI1) and at the cranial and caudal edges of the prostatic urethra on the post-indwelled CT images. POIs were copied onto the pre-indwelled, 5th, and 30th fraction CT images after fiducial matching on these CT images. The deviation of each POI between the pre- and post-indwelled CTs and the reproducibility of prostate displacement due to the catheter were evaluated. Results: The deviation of POI1 caused by the indwelled catheter in the RL/AP/SI directions (mm) was 0.20±0.27/−0.64±2.43/1.02±2.31, respectively, and the absolute distance (mm) was 3.15±1.41. The deviation tends to be larger closer to the caudal edge of the prostate. Compared with the pre-indwelled CT scan, the median displacement of all POIs (mm) was 0.3±0.2/2.2±1.1/2.0±2.6 in the post-indwelled, 0.4±0.4/3.4±2.1/2.3±2.6 in the 5th, and 0.5±0.5/1.7±2.2/1.9±3.1 in the 30th fraction CT scan, with a similar data distribution. Six patients showed displacements of over 5 mm in the AP and/or CC directions. Conclusion: Reproducibility of positioning errors due to the temporarily indwelled catheter was observed. Especially for patients with unusually large shifts caused by the indwelled catheter at the planning stage, treatment planning should be performed using the pre-indwelled CT images with a transferred contour of the urethra identified by the catheter.

  9. SU-F-J-131: Reproducibility of Positioning Error Due to Temporarily Indwelled Urethral Catheter for Urethra-Sparing Prostate IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Hirose, K; Takai, Y [Hirosaki University, Hirosaki (Japan); Southern Tohoku BNCT Research Center, Koriyama (Japan); Sato, M; Hatayama, Y; Kawaguchi, H; Aoki, M; Akimoto, H [Hirosaki University, Hirosaki (Japan); Komai, F; Souma, M; Obara, H; Suzuki, M [Hirosaki University Hospital, Hirosaki (Japan)

    2016-06-15

    Purpose: The purpose of this study was to prospectively assess the reproducibility of positioning errors due to a temporarily indwelled catheter in urethra-sparing image-guided (IG) IMRT. Methods: Ten patients received urethra-sparing prostate IG-IMRT with implanted fiducials. After the first CT scan was performed in the supine position, a 6-Fr catheter was indwelled into the urethra, and a second CT was taken for planning. While the PTV received 80 Gy, a 5% dose reduction was applied to the urethral PRV along the catheter. Additional CT scans were also performed at the 5th and 30th fractions. Points of interest (POIs) were set on the posterior edge of the prostate at the beam isocenter level (POI1) and at the cranial and caudal edges of the prostatic urethra on the post-indwelled CT images. POIs were copied onto the pre-indwelled, 5th, and 30th fraction CT images after fiducial matching on these CT images. The deviation of each POI between the pre- and post-indwelled CTs and the reproducibility of prostate displacement due to the catheter were evaluated. Results: The deviation of POI1 caused by the indwelled catheter in the RL/AP/SI directions (mm) was 0.20±0.27/−0.64±2.43/1.02±2.31, respectively, and the absolute distance (mm) was 3.15±1.41. The deviation tends to be larger closer to the caudal edge of the prostate. Compared with the pre-indwelled CT scan, the median displacement of all POIs (mm) was 0.3±0.2/2.2±1.1/2.0±2.6 in the post-indwelled, 0.4±0.4/3.4±2.1/2.3±2.6 in the 5th, and 0.5±0.5/1.7±2.2/1.9±3.1 in the 30th fraction CT scan, with a similar data distribution. Six patients showed displacements of over 5 mm in the AP and/or CC directions. Conclusion: Reproducibility of positioning errors due to the temporarily indwelled catheter was observed. Especially for patients with unusually large shifts caused by the indwelled catheter at the planning stage, treatment planning should be performed using the pre-indwelled CT images with a transferred contour of the urethra identified by the catheter.

  10. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects. PMID:25904890

  11. On the importance of Task 1 and error performance measures in PRP dual-task studies.

    Science.gov (United States)

    Strobach, Tilo; Schütz, Anja; Schubert, Torsten

    2015-01-01

    The psychological refractory period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase reaction times (RTs) and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  12. On the importance of Task 1 and error performance measures in PRP dual-task studies

    Directory of Open Access Journals (Sweden)

    Tilo eStrobach

    2015-04-01

    Full Text Available The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority to perform Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2-RT with decreasing SOAs. This impairment is typically explained with some task components being processed strictly sequentially in the context of the prominent central bottleneck theory. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e., decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data is underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that is not primarily consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in the context of PRP dual-task situations; this calls for a more careful report and analysis of Task 1 performance in PRP studies and for a more careful consideration of theories proposing additions to the bottleneck assumption, which are sufficiently general to explain Task 1 and Task 2 effects.

  13. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    Science.gov (United States)

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of the effects of pediatric tissue hydration on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors arising from previously reported pediatric hydration effects. The phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.

  14. The Influence of Training Phase on Error of Measurement in Jump Performance.

    Science.gov (United States)

    Taylor, Kristie-Lee; Hopkins, Will G; Chapman, Dale W; Cronin, John B

    2016-03-01

    The purpose of this study was to calculate the coefficients of variation in jump performance for individual participants in multiple trials over time to determine the extent to which there are real differences in the error of measurement between participants. The effect of training phase on measurement error was also investigated. Six subjects participated in a resistance-training intervention for 12 wk with mean power from a countermovement jump measured 6 d/wk. Using a mixed-model meta-analysis, differences between subjects, within-subject changes between training phases, and the mean error values during different phases of training were examined. Small, substantial factor differences of 1.11 were observed between subjects; however, the finding was unclear based on the width of the confidence limits. The mean error was clearly higher during overload training than baseline training, by a factor of ×/÷ 1.3 (confidence limits 1.0-1.6). The random factor representing the interaction between subjects and training phases revealed further substantial differences of ×/÷ 1.2 (1.1-1.3), indicating that on average, the error of measurement in some subjects changes more than in others when overload training is introduced. The results from this study provide the first indication that within-subject variability in performance is substantially different between training phases and, possibly, different between individuals. The implications of these findings for monitoring individuals and estimating sample size are discussed.
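
    The per-participant coefficients of variation underlying such an analysis are commonly computed via a log-transform, so that errors act as factors rather than absolute offsets. A minimal sketch with made-up mean-power trials (the convention, not the authors' exact pipeline):

    ```python
    import numpy as np

    def within_subject_cv(trials):
        """Within-subject coefficient of variation from repeated trials:
        100 * (exp(SD of log-values) - 1), the log-transform convention
        often used for factor-style errors in sport science."""
        log_sd = np.std(np.log(np.asarray(trials, float)), ddof=1)
        return 100.0 * (np.exp(log_sd) - 1.0)

    baseline = [2510., 2450., 2530., 2480., 2495., 2520.]  # mean power, W
    overload = [2400., 2550., 2300., 2480., 2380., 2560.]
    print(within_subject_cv(baseline), within_subject_cv(overload))
    ```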

  15. Effect of DM Actuator Errors on the WFIRST/AFTA Coronagraph Contrast Performance

    Science.gov (United States)

    Sidick, Erkin; Shi, Fang

    2015-01-01

    The WFIRST/AFTA 2.4 m space telescope currently under study includes a stellar coronagraph for the imaging and spectral characterization of extrasolar planets. The coronagraph employs two sequential deformable mirrors (DMs) to compensate for phase and amplitude errors in creating dark holes. DMs are critical elements in high-contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Working with a low-order wavefront sensor, the DM that is conjugate to a pupil can also be used to correct low-order wavefront drift during a scientific observation. However, not all actuators in a DM have the same gain. When such a DM is used in the low-order wavefront sensing and control subsystem, the actuator gain errors introduce high-spatial-frequency errors to the DM surface and thus worsen the contrast performance of the coronagraph. We have investigated the effects of actuator gain errors and actuator command digitization errors on the contrast performance of the coronagraph through modeling and simulations, and we present our results in this paper.

  16. Errors due to non-uniform distribution of fat in dual X-ray absorptiometry of the lumbar spine

    International Nuclear Information System (INIS)

    Tothill, P.; Pye, D.W.

    1992-01-01

    Errors in spinal dual X-ray absorptiometry (DXA) were studied by analysing X-ray CT scans taken for diagnostic purposes on 20 patients representing a wide range of fat content. The mean difference between the fat thickness over the vertebral bodies and that over a background area in antero-posterior (AP) scanning was 6.7 ± 8.1 mm for men and 13.4 ± 4.7 mm for women. For AP scanning, a non-uniform fat distribution leads to a mean overestimate of 0.029 g/cm² for men and 0.057 g/cm² for women. The error exceeded 0.1 g/cm² in 10% of slices. For lateral scanning the error exceeded 0.1 g/cm² (about 15% of normal) in a quarter of slices. (author)

  17. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Since the 2016 Run, muon reconstruction has used non-zero Alignment Position Errors to account for the residual uncertainties in the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  18. Performance degradation of integrated optical modulators due to electrical crosstalk

    NARCIS (Netherlands)

    Yao, W.; Gilardi, G.; Smit, M.K.; Wale, M.J.

    2016-01-01

    In this paper, we investigate electrical crosstalk in integrated Mach-Zehnder modulator arrays based on an n-doped InP substrate and show that it can be the cause of transmitter performance degradation. In particular, a common ground return path between adjacent modulators can cause high coupling.

  19. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    Full Text Available This paper reports the performance of turbo codes as an error control technique in PLC (Powerline Communications) data transmissions. For this system, computer simulations are used to model data networks based on the model classified in the technical literature as indoor, using OFDM (Orthogonal Frequency Division Multiplexing) as the modulation technique. Taking into account the channel, modulation, and turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).

  20. Temperature measurement error due to the effects of time varying magnetic fields on thermocouples with ferromagnetic thermoelements

    International Nuclear Information System (INIS)

    McDonald, D.W.

    1977-01-01

    Thermocouples with ferromagnetic thermoelements (iron, Alumel, Nisil) are used extensively in industry. We have observed the generation of voltage spikes within ferromagnetic wires when the wires are placed in an alternating magnetic field. This effect has implications for thermocouple thermometry, where it was first observed. For example, the voltage generated by this phenomenon will contaminate the thermocouple thermal emf, resulting in temperature measurement error

  1. Alignment error of mirror modules of advanced telescope for high-energy astrophysics due to wavefront aberrations

    Science.gov (United States)

    Zocchi, Fabio E.

    2017-10-01

    One of the approaches being tested for the integration of the mirror modules of the advanced telescope for high-energy astrophysics x-ray mission of the European Space Agency consists in aligning each module on an optical bench operated at an ultraviolet wavelength. The mirror module is illuminated by a plane wave and, in order to overcome diffraction effects, the centroid of the image produced by the module is used as a reference to assess the accuracy of the optical alignment of the mirror module itself. Among other sources of uncertainty, the wavefront error of the plane wave also introduces an error in the position of the centroid, thus affecting the quality of the mirror module alignment. The power spectral density of the position of the point spread function centroid is here derived from the power spectral density of the wavefront error of the plane wave in the framework of the scalar theory of Fourier diffraction. This allows a specification on the quality of the collimator used for generating the plane wave to be defined, starting from the contribution to the error budget allocated to the uncertainty of the centroid position. The theory applies generally whenever Fourier diffraction is a valid approximation, in which case the obtained result is identical to that derived by geometrical optics considerations.

  2. Proactive error analysis of ultrasound-guided axillary brachial plexus block performance.

    LENUS (Irish Health Repository)

    O'Sullivan, Owen

    2012-07-13

    Detailed description of the tasks anesthetists undertake during the performance of a complex procedure, such as ultrasound-guided peripheral nerve blockade, allows elements that are vulnerable to human error to be identified. We have applied 3 task analysis tools to one such procedure, namely, ultrasound-guided axillary brachial plexus blockade, with the intention that the results may form a basis to enhance training and performance of the procedure.

  3. Joint Impact of Frequency Synchronization Errors and Intermodulation Distortion on the Performance of Multicarrier DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Rugini Luca

    2005-01-01

    Full Text Available The performance of multicarrier systems is highly impaired by intercarrier interference (ICI) due to frequency synchronization errors at the receiver and by intermodulation distortion (IMD) introduced by a nonlinear amplifier (NLA) at the transmitter. In this paper, we evaluate the bit-error rate (BER) of multicarrier direct-sequence code-division multiple-access (MC-DS-CDMA) downlink systems subject to these impairments in frequency-selective Rayleigh fading channels, assuming quadrature amplitude modulation (QAM). The analytical findings allow us to establish the sensitivity of MC-DS-CDMA systems to carrier frequency offset (CFO) and NLA distortions, to identify the maximum CFO that is tolerable at the receiver side in different scenarios, and to find the optimum value of the NLA output power backoff for a given CFO. Simulation results show that the approximated analysis is quite accurate in several conditions.

  4. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    Science.gov (United States)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth, continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, with an axial-mode helical antenna as the feed, is further conducted to verify the correctness of the proposed method. Finally, the influence rules of surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work offers a new concept for reflector shape adjustment in the manufacturing process.
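
    For concreteness, a deformation surface of the kind described can be sampled as a weighted sum of normalized Zernike terms; the sketch below evaluates a few low-order terms (Noll-style normalization) over the aperture. The coefficient values are arbitrary and illustrative, not from the paper:

    ```python
    import numpy as np

    def zernike_surface(rho, theta, coeffs):
        """Surface error as a weighted sum of a few low-order normalized
        Zernike terms; only defocus, astigmatism, and coma are included."""
        terms = {
            "defocus":     np.sqrt(3.0) * (2.0 * rho**2 - 1.0),
            "astigmatism": np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta),
            "coma":        np.sqrt(8.0) * (3.0 * rho**3 - 2.0 * rho) * np.cos(theta),
        }
        return sum(coeffs[k] * terms[k] for k in coeffs)

    # Sample the deformation over the reflector aperture (unit radius).
    rho, theta = np.meshgrid(np.linspace(0.0, 1.0, 64),
                             np.linspace(0.0, 2.0 * np.pi, 64))
    err = zernike_surface(rho, theta, {"defocus": 0.2e-3, "astigmatism": 0.1e-3})
    print("RMS surface error (m):", np.sqrt(np.mean(err**2)))
    ```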

  5. Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance

    Directory of Open Access Journals (Sweden)

    H. Rajabi Mashhadi

    2014-09-01

    Full Text Available The main goal of this paper is to study statistical indices and to evaluate AGC indices in a power system with large penetration of wind turbine generation (WTGs). The increasing penetration of wind turbine generation calls for further study of its impact on power system frequency control. Frequency is disturbed by real-time imbalances between system generation and load, and wind turbine generation, with its larger fluctuations, makes the system more unbalanced. The AGC loop then helps to adjust the system frequency and the scheduled tie-line powers. The quality of the AGC loop is measured by several indices; a good index is a proper measure of AGC performance as the power system actually operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standards (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of the generation error, the installed power, and the frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the evaluation of conventional control performance in power systems with large-scale wind turbine penetration is studied. The effects of the wind speed standard deviation and the degree of wind farm penetration are analyzed, and the importance of the aforementioned factor is critically examined. In addition, the influence of the mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one area. The results show that the mean wind speed forecast error has a considerable effect on AGC performance, while the key factor mentioned above is insensitive to this mean error.
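
    As a rough illustration of the NERC measure referred to above, the CPS1 index can be computed from synchronized samples of the area control error (ACE) and the frequency deviation. A sketch, assuming the standard CPS1 definition with frequency bias B (MW/0.1 Hz, negative by convention) and bound epsilon_1; all signals and constants are hypothetical:

        import numpy as np

        def cps1(ace_mw, delta_f_hz, bias_mw_per_0p1hz, eps1_hz):
            """CPS1 (%) from samples of ACE and frequency error:
            CF = mean(ACE / (-10 B) * delta_f) / eps1^2,  CPS1 = (2 - CF) * 100."""
            cf = np.mean(ace_mw / (-10.0 * bias_mw_per_0p1hz) * delta_f_hz) / eps1_hz**2
            return (2.0 - cf) * 100.0

        # Hypothetical one-hour record at 1-min resolution: wind forecast error
        # inflates the standard deviation of the generation error and hence ACE.
        rng = np.random.default_rng(0)
        df = rng.normal(0.0, 0.02, 60)                # frequency error (Hz)
        ace = rng.normal(0.0, 30.0, 60) - 200.0 * df  # ACE correlated with df (MW)
        print(f"CPS1 = {cps1(ace, df, -50.0, 0.0228):.1f} %")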

  6. Evaluation of 14 patients who underwent radiotherapy for Kaposi sarcoma

    Directory of Open Access Journals (Sweden)

    Fatma Teke

    2015-09-01

    Full Text Available Methods: Patients undergoing radiotherapy (RT) for KS between 2005 and 2012 in the Radiation Oncology Department of Dicle University Hospital were included. All patients underwent RT with different dose-fractionation schemes to increase quality of life and to palliate symptoms. Patients with lesions in multiple regions underwent RT on the same or different dates. Responses to radiotherapy were recorded as complete or partial response. Results: Fourteen patients who received radiotherapy for KS were evaluated retrospectively. Twenty-two different regions in the 14 patients underwent RT. Only one patient (4.5%) received RT to the glans penis as a third region, while six patients (27.3%) received RT to two regions. For irradiation, 6 MV and 10 MV photon energies and 6 MeV, 9 MeV and 12 MeV electron energies were used. A water phantom or bolus material was used to obtain a homogeneous dose distribution in photon irradiation. The median RT dose administered across the 22 regions was 800 cGy (range: 800-3000 cGy). The median number of RT fractions was 1 (range: 1-10). When treatment response was evaluated, stable disease was present in 4 (18.1%) regions, partial response was achieved in 8 (36.4%) regions, and complete response in 10 (45.5%). RT-related lymphedema of the feet and legs, the most common acute complication, was observed in 4 (57.3%) regions, and pain was present in 2 (28.7%) regions. Conclusion: RT is an appropriate and effective regimen for the palliative treatment of KS lesions. Excellent response rates of skin lesions may be obtained with RT. Lesions and symptoms such as itching may resolve after RT. Side effects such as edema and pain may be relieved by supportive treatment.

  7. Impact of monetary incentives on cognitive performance and error monitoring following sleep deprivation.

    Science.gov (United States)

    Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling

    2010-04-01

    To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Monetary incentives significantly enhanced the response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavior performance, such as the response accuracy, reaction time variability, or posterror accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. This study shows that motivation incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance.

  8. Residents' surgical performance during the laboratory years: an analysis of rule-based errors.

    Science.gov (United States)

    Nathwani, Jay N; Wise, Brett J; Garren, Margaret E; Mohamadipanah, Hossein; Van Beek, Nicole; DiMarco, Shannon M; Pugh, Carla M

    2017-11-01

    Nearly one-third of surgical residents will enter into academic development during their surgical residency by dedicating time to a research fellowship for 1-3 y. Major interest lies in understanding how laboratory residents' surgical skills are affected by minimal clinical exposure during academic development. A widely held concern is that the time away from clinical exposure results in surgical skills decay. This study examines the impact of the academic development years on residents' operative performance. We hypothesize that the use of repeated, annual assessments may result in learning even without individual feedback on participants' simulated performance. Surgical performance data were collected from laboratory residents (postgraduate years 2-5) during the summers of 2014, 2015, and 2016. Residents had 15 min to complete a shortened, simulated laparoscopic ventral hernia repair procedure. Final hernia repair skins from all participants were scored using a previously validated checklist. An analysis of variance test compared the mean performance scores of repeat participants to those of first-time participants. Twenty-seven (37% female) laboratory residents provided 2-year assessment data over the 3-year span of the study. Second-time performance improved from a mean score of 14 (standard error = 1.0) in the first year to 17.2 (SD = 0.9) in the second year (F[1, 52] = 5.6, P = 0.022). Detailed analysis demonstrated improvement in performance for 3 grading criteria that were considered to be rule-based errors. There was no improvement in operative strategy errors. Analysis of longitudinal performance of laboratory residents shows higher scores for repeat participants in the category of rule-based errors. These findings suggest that laboratory residents can learn from rule-based mistakes when provided with annual performance-based assessments. This benefit was not seen with operative strategy errors and has important implications for

  9. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.
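
    The incremental scheme described in this abstract is easy to spot-check by Monte-Carlo simulation. A minimal sketch, assuming BPSK over flat Rayleigh fading with coherent detection; the SNR, threshold and sample count are hypothetical, and no attempt is made to reproduce the paper's closed-form expressions:

        import numpy as np

        rng = np.random.default_rng(2)
        n, snr_db, thresh_db = 200_000, 10.0, 5.0
        snr, thresh = 10**(snr_db / 10), 10**(thresh_db / 10)

        bits = rng.integers(0, 2, n)
        x = 1.0 - 2.0 * bits                       # BPSK symbols

        def rayleigh_rx(x, snr, rng):
            """Decision variable and instantaneous SNR over flat Rayleigh fading."""
            h = (rng.normal(size=x.size) + 1j * rng.normal(size=x.size)) / np.sqrt(2)
            w = (rng.normal(size=x.size) + 1j * rng.normal(size=x.size)) / np.sqrt(2 * snr)
            return np.real(np.conj(h) * (h * x + w)), np.abs(h)**2 * snr

        y_sd, gamma_sd = rayleigh_rx(x, snr, rng)                # source -> destination
        y_sr, _ = rayleigh_rx(x, snr, rng)                       # source -> relay
        relay_bits = (y_sr < 0).astype(int)                      # relay may decode in error
        y_rd, _ = rayleigh_rx(1.0 - 2.0 * relay_bits, snr, rng)  # relay -> destination

        # Incremental relaying: the relay is used only when the S-D SNR is unacceptable.
        decision = np.where(gamma_sd >= thresh, y_sd, y_rd)
        ber = np.mean(((decision < 0).astype(int)) != bits)
        print(f"simulated end-to-end BER at {snr_db:.0f} dB: {ber:.2e}")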

  10. Comparison of ETFs' performance related to the tracking error

    Directory of Open Access Journals (Sweden)

    Michaela Dorocáková

    2017-12-01

    Full Text Available With the development of financial markets, there is also an immediate expansion of the fund industry, a representative form of collective investment. The purpose of index funds is to replicate the returns and risk of the underlying index to the largest possible extent, with tracking error being one of the most closely monitored performance indicators of these passively managed funds. The aim of this paper is to describe several perspectives concerning indexing, index funds and exchange-traded funds, and to explain the issue of tracking error through its examination and the subsequent comparison of such funds provided by leading investment management companies, with regard to the different methods used for its evaluation. Our research shows that the decisive factors for the occurrence of tracking deviation are fund size and the fund's stock consolidation. In addition, performance differences between an exchange-traded fund and its benchmark tend to show signs of seasonality, in the sense of increasing in the last months of the year.

  11. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2011-06-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. © 2011 IEEE.

  12. Particle-induced bit errors in high performance fiber optic data links for satellite data management

    International Nuclear Information System (INIS)

    Marshall, P.W.; Carts, M.A.; Dale, C.J.; LaBel, K.A.

    1994-01-01

    Experimental test methods and analysis tools are demonstrated to assess particle-induced bit errors on fiber optic link receivers for satellites. Susceptibility to direct ionization from low LET particles is quantified by analyzing proton and helium ion data as a function of particle LET. Existing single event analysis approaches are shown to apply, with appropriate modifications, to the regime of temporally (rather than spatially) distributed bits, even though the sensitivity to single events exceeds conventional memory technologies by orders of magnitude. The cross-section LET dependence follows a Weibull distribution at data rates from 200 to 1,000 Mbps and at various incident optical power levels. The LET threshold for errors is shown, through both experiment and modeling, to be 0 in all cases. The error cross-section exhibits a strong inverse dependence on received optical power in the LET range where most orbital single events would occur, thus indicating that errors can be minimized by operating links with higher incident optical power. Also, an analytic model is described which incorporates the appropriate physical characteristics of the link as well as the optical and receiver electrical characteristics. Results indicate appropriate steps to assure suitable link performance even in severe particle orbits
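
    The Weibull cross-section dependence mentioned above is commonly fitted with the four-parameter form sigma(LET) = sigma_sat * (1 - exp(-((LET - L0)/W)^s)). A sketch of such a fit, assuming scipy is available; the measured points are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_xs(let, sigma_sat, let0, width, shape):
            """Weibull form commonly used for single-event cross sections."""
            z = np.maximum(let - let0, 0.0) / width
            return sigma_sat * (1.0 - np.exp(-z**shape))

        # Hypothetical error cross sections vs particle LET (MeV*cm^2/mg, cm^2/bit).
        let = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
        xs = np.array([2e-7, 8e-7, 3e-6, 6e-6, 8e-6, 9e-6])

        popt, _ = curve_fit(weibull_xs, let, xs, p0=[1e-5, 0.0, 10.0, 1.5])
        print(dict(zip(["sigma_sat", "let0", "width", "shape"], popt)))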

  13. Optics for five-dimensional measurement for correction of vertical displacement error due to attitude of floating body in superconducting magnetic levitation system

    International Nuclear Information System (INIS)

    Shiota, Fuyuhiko; Morokuma, Tadashi

    2006-01-01

    An improved optical system for five-dimensional measurement has been developed for the correction of vertical displacement error due to the attitude change of a superconducting floating body, which shows five degrees of freedom besides a vertical displacement of 10 mm. The solid angle available for the optical measurement is extremely limited because the cryogenic laser interferometer shares the optical window of the vacuum chamber, in addition to the constraints of the basic structure of the cryogenic vessel for liquid helium. The aim of the design was to develop an optical system that is more practical and performs better than the prototype. Various artifices were built into the optical system, and the result shows satisfactory performance and easy operation, overcoming the extremely severe spatial constraints of the levitation system. Although the system described here is specifically designed for our magnetic levitation system, the concept and each artifice will be applicable to optical measurement systems for an object in a high-vacuum chamber and/or cryogenic vessel where the solid angle available for an optical path is extremely limited

  14. Local Use-Dependent Sleep in Wakefulness Links Performance Errors to Learning.

    Science.gov (United States)

    Quercia, Angelica; Zappasodi, Filippo; Committeri, Giorgia; Ferrara, Michele

    2018-01-01

    Sleep and wakefulness are no longer to be considered as discrete states. During wakefulness, brain regions can enter a sleep-like state (off-periods) in response to a prolonged period of activity (local use-dependent sleep). Similarly, during non-REM sleep the slow-wave activity, the hallmark of sleep plasticity, increases locally in brain regions previously involved in a learning task. Recent studies have demonstrated that behavioral performance may be impaired by off-periods in wake in task-related regions. However, the relation between off-periods in wake, related performance errors and learning is still untested in humans. Here, by employing high-density electroencephalographic (hd-EEG) recordings, we investigated local use-dependent sleep in wake, asking participants to repeat continuously two intensive spatial navigation tasks. Critically, one task relied on previous map learning (Wayfinding) while the other did not (Control). Behaviorally awake participants, who were not sleep deprived, showed progressive increments of delta activity only during the learning-based spatial navigation task. As shown by source localization, delta activity was mainly localized in the left parietal and bilateral frontal cortices, all regions known to be engaged in spatial navigation tasks. Moreover, during the Wayfinding task, these increments of delta power were specifically associated with errors, whose probability of occurrence was significantly higher compared to the Control task. Unlike the Wayfinding task, during the Control task neither delta activity nor the number of errors increased progressively. Furthermore, during the Wayfinding task, both the number and the amplitude of individual delta waves, as indexes of neuronal silence in wake (off-periods), were significantly higher during errors than during hits. Finally, a path analysis linked the use of the spatial navigation circuits undergoing learning plasticity to off-periods in wake. In conclusion, local sleep regulation in

  15. Evaluating physician performance at individualizing care: a pilot study tracking contextual errors in medical decision making.

    Science.gov (United States)

    Weiner, Saul J; Schwartz, Alan; Yudkowsky, Rachel; Schiff, Gordon D; Weaver, Frances M; Goldberg, Julie; Weiss, Kevin B

    2007-01-01

    Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence, and the ability to individualize or, more specifically, to contextualize care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and to compare it to their performance at planning biomedically appropriate care. First, the authors drafted 3 cases, each with 4 variations, 3 of which were embedded with biomedical and/or contextual information essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. The case validation process was successful in that, in the final iteration, the physicians who received the contextual variant of a case proposed an alternate plan of care to those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.

  16. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each ml/minute Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result due to inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
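
    The reported magnitudes combine into a quick sanity check. A sketch using only the percentages quoted in the abstract; the patient values are hypothetical:

        # ~7% inflation from a single-pool model when K/V is around 6 mL/min per L,
        # and ~5.6% underestimate per 1 mL/min of omitted Kru per 35 L of V.
        pcrn_true = 1.0            # g/kg/day from the 2-pool model with Kru, hypothetical

        pcrn_single_pool = pcrn_true * 1.07

        kru, volume = 2.0, 35.0    # residual clearance (mL/min) and urea volume (L)
        underestimate = 0.056 * kru * (35.0 / volume)
        pcrn_no_kru = pcrn_true * (1.0 - underestimate)

        print(f"single-pool: {pcrn_single_pool:.3f}  Kru omitted: {pcrn_no_kru:.3f} g/kg/day")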

  17. Symbol error rate performance evaluation of the LM37 multimegabit telemetry modulator-demodulator unit

    Science.gov (United States)

    Malek, H.

    1981-01-01

    The LM37 multimegabit telemetry modulator-demodulator unit was tested for evaluation of its symbol error rate (SER) performance. Using an automated test setup, the SER tests were carried out at various symbol rates and signal-to-noise ratios (SNR), ranging from +10 to -10 dB. With the aid of a specially designed error detector and a stabilized signal and noise summation unit, measurement of the SER at low SNR was possible. The results of the tests show that at symbol rates below 20 megasymbols per second (MS/s) and input SNR above -6 dB, the SER performance of the modem is within the specified 0.65 to 1.5 dB of the theoretical error curve. At symbol rates above 20 MS/s, the specification is met at SNRs down to -2 dB. The results of the SER tests are presented with the description of the test setup and the measurement procedure.

  18. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.

    2012-12-06

    In this paper, we derive an efficient simulation method to evaluate the error rate of a wireless communication system. A coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate the system performance under realistic Nakagami-m fading and additive white Gaussian noise channels. The accuracy of the obtained results is verified by running the simulation with a confidence interval reliability of 95%. We see that as the number of simulation runs N increases, the simulated error rate approaches the actual one and the confidence interval narrows. Hence our results are expected to be of significant practical use for such scenarios. © 2012 Springer Science+Business Media New York.
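
    The 95% interval used here is the standard normal approximation for a Monte-Carlo error-rate estimate, p_hat ± 1.96 * sqrt(p_hat * (1 - p_hat) / N). A sketch showing how the interval narrows as the number of simulated bits N grows; the observed error counts are hypothetical:

        import numpy as np

        def ber_confidence_interval(errors, n_bits, z=1.96):
            """Normal-approximation 95% CI for a simulated bit-error-rate estimate."""
            p = errors / n_bits
            half = z * np.sqrt(p * (1.0 - p) / n_bits)
            return p, (max(p - half, 0.0), p + half)

        for n in (10_000, 100_000, 1_000_000):
            errs = round(1e-3 * n)            # hypothetical observed error count
            p, (lo, hi) = ber_confidence_interval(errs, n)
            print(f"N={n:>9}: BER={p:.2e}, 95% CI=({lo:.2e}, {hi:.2e})")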

  19. On the Performance of Multihop Heterodyne FSO Systems With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2015-03-30

    This paper reports the end-to-end performance analysis of a multihop free-space optical system with amplify-and-forward (AF) channel-state-information (CSI)-assisted or fixed-gain relays using heterodyne detection over Gamma–Gamma turbulence fading with pointing error impairments. In particular, we derive new closed-form results for the average bit error rate (BER) of a variety of binary modulation schemes and the ergodic capacity in terms of the Meijer G-function. We then offer new accurate asymptotic results for the average BER and the ergodic capacity at high SNR values in terms of simple elementary functions. For the capacity, novel asymptotic results at low and high average SNR regimes are also obtained via an alternative moments-based approach. All analytical results are verified via computer-based Monte-Carlo simulations.

  20. Error Control Techniques for Efficient Multicast Streaming in UMTS Networks: Proposals and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Michele Rossi

    2004-06-01

    Full Text Available In this paper we introduce techniques for efficient multicast video streaming in UMTS networks, where video content has to be conveyed to multiple users in the same cell. Efficient multicast data delivery in UMTS is still an open issue. In particular, suitable solutions have to be found to cope with wireless channel errors, while maintaining both an acceptable channel utilization and a controlled delivery delay over the wireless link between the serving base station and the mobile terminals. Here, we first highlight that standard solutions such as unequal error protection (UEP) of the video flow are ineffective in UMTS systems due to the inherent large feedback delay at the link layer (Radio Link Control, RLC). Subsequently, we propose a local approach that resolves errors directly at the UMTS link layer while keeping a reasonably high channel efficiency and saving, as much as possible, system resources. The solution that we propose in this paper is based on the usage of the common channel to serve all the interested users in a cell. In this way, we can save resources with respect to the case where multiple dedicated channels are allocated, one for every user. In addition, we present a hybrid ARQ (HARQ) proactive protocol that, at the cost of some redundancy added to the link-layer flow, consistently improves the channel efficiency with respect to the plain ARQ case, thereby making the use of a single common channel for multicast data delivery feasible. In the last part of the paper we give some hints for future research, envisioning the usage of the aforementioned error control protocols with suitably encoded video streams.

  1. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
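
    A toy numerical experiment in the spirit of this analysis, assuming numpy: perturb the right-hand side of a Dirichlet Poisson problem, as PIV noise would, and observe the error that appears in the recovered pressure field. The grid, noise level and source term are hypothetical:

        import numpy as np

        def solve_poisson_dirichlet(f, h, iters=5000):
            """Jacobi iteration for -laplacian(p) = f on the unit square, p = 0 on the boundary."""
            p = np.zeros_like(f)
            for _ in range(iters):
                p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                        p[1:-1, 2:] + p[1:-1, :-2] +
                                        h**2 * f[1:-1, 1:-1])
            return p

        n, h = 65, 1.0 / 64
        xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        f_clean = np.sin(np.pi * xx) * np.sin(np.pi * yy)   # stand-in for the PIV-derived source

        rng = np.random.default_rng(1)
        f_noisy = f_clean + rng.normal(0.0, 0.05, f_clean.shape)  # hypothetical PIV noise

        p_clean = solve_poisson_dirichlet(f_clean, h)
        p_noisy = solve_poisson_dirichlet(f_noisy, h)
        print(f"max pressure error from 5% RHS noise: {np.abs(p_noisy - p_clean).max():.2e}")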

  2. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    International Nuclear Information System (INIS)

    Pan, Zhao; Thomson, Scott; Whitehead, Jared; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. (paper)

  3. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type. PMID:27499587

  4. Visual Performance on the Small Letter Contrast Test: Effects of Aging, Low Luminance and Refractive Error

    Science.gov (United States)

    2000-08-01

    Because low-luminance visual performance is important in aviation and many aviators develop ametropias (refractive error) during their careers, visual performance on the Small Letter Contrast Test (SLCT) was compared between aviator and non-aviator groups by separate investigators at different research facilities. Differences between groups were statistically significant (0.04 logMAR, p=0.01) but not clinically significant (<1/2 line). Contrast sensitivity on the SLCT decreased for the aviator group by a statistically significant margin (0.11 ± 0.1 logCS, t=4.0, p<0.001), although there was significant overlap between the groups.

  5. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  6. On the Optimal Detection and Error Performance Analysis of the Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    The conventional minimum Euclidean distance (MED) receiver design is based on the assumption of ideal hardware transceivers and proper Gaussian noise in communication systems. Throughout this study, an accurate statistical model of various hardware impairments (HWIs) is presented. Then, an optimal maximum likelihood (ML) receiver is derived considering the distinct characteristics of the HWIs comprised of additive improper Gaussian noise and signal distortion. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds are derived. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over MED receiver and the tightness of the derived bounds.

  7. On the Optimal Detection and Error Performance Analysis of the Hardware Impaired Systems

    KAUST Repository

    Javed, Sidrah

    2018-01-15

    The conventional minimum Euclidean distance (MED) receiver design is based on the assumption of ideal hardware transceivers and proper Gaussian noise in communication systems. Throughout this study, an accurate statistical model of various hardware impairments (HWIs) is presented. Then, an optimal maximum likelihood (ML) receiver is derived considering the distinct characteristics of the HWIs comprised of additive improper Gaussian noise and signal distortion. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds are derived. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over MED receiver and the tightness of the derived bounds.

  8. Tracking error constrained robust adaptive neural prescribed performance control for flexible hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Zhonghua Wu

    2017-02-01

    Full Text Available A robust adaptive neural control scheme based on a back-stepping technique is developed for the longitudinal dynamics of a flexible hypersonic flight vehicle, which is able to ensure that the state tracking error is confined within prescribed bounds, in spite of the existing model uncertainties and actuator constraints. Minimal-learning-parameter-based neural networks are used to estimate the model uncertainties; thus, the number of online-updated parameters is greatly reduced, and prior information about the aerodynamic parameters is dispensable. With the utilization of an assistant compensation system, the problem of actuator constraints is overcome. By combining the prescribed performance function and a sliding mode differentiator into the neural back-stepping control design procedure, a composite state-tracking-error-constrained adaptive neural control approach is presented, and a new type of adaptive law is constructed. Compared with other adaptive neural control designs for hypersonic flight vehicles, the proposed composite control scheme exhibits not only low computational cost but also strong robustness. Finally, two comparative simulations are performed to demonstrate the robustness of this neural prescribed performance controller.
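
    The "prescribed performance" machinery referenced in this and the following abstracts typically confines the tracking error inside a decaying funnel and regulates a transformed error instead. A minimal sketch of that transform, assuming the commonly used exponential performance function; all constants are hypothetical:

        import numpy as np

        def rho(t, rho0=2.0, rho_inf=0.1, decay=1.0):
            """Prescribed performance funnel rho(t) = (rho0 - rho_inf) e^(-decay t) + rho_inf."""
            return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

        def transformed_error(e, t, delta=1.0):
            """Keeps -delta * rho(t) < e(t) < rho(t); the transform diverges at the
            funnel edge, so bounding it enforces the prescribed error bounds."""
            z = e / rho(t)
            return 0.5 * np.log((delta + z) / (1.0 - z))   # valid for -delta < z < 1

        t = np.linspace(0.0, 5.0, 6)
        e = 1.5 * np.exp(-2.0 * t)          # a tracking error decaying inside the funnel
        print(transformed_error(e, t))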

  9. Human error analysis project (HEAP) - The fourth pilot study: verbal data for analysis of operator performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind; Droeyvoldsmo, Asgeir; Hollnagel, Erik

    1997-06-01

    This report is the second report from Pilot Study No. 4 within the Human Error Analysis Project (HEAP). The overall objective of HEAP is to provide a better understanding and explicit modelling of how and why ''cognitive errors'' occur. This study investigated the contribution of different verbal data sources to the analysis of control room operators' performance. Operators' concurrent verbal reports, retrospective verbal reports, and process experts' comments were compared for their contribution to an operator performance measure. The study examined verbal protocols for single operators and for teams. The main findings were that all three verbal data sources could be used to study performance. There was a relatively high overlap between the data sources, but also a unique contribution from each source. There was a common pattern in the types of operator activities the data sources gave information about. The operators' concurrent protocols overall contained slightly more information on the operators' activities than the other two verbal sources. The study also showed that concurrent verbal protocol is feasible and useful for analysis of a team's activities during a scenario. (author)

  10. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  11. Improved theory of forced magnetic reconnection due to error field and its application to seed island formation for NTM

    International Nuclear Information System (INIS)

    Ishizawa, A.; Tokuda, S.; Wakatani, M.

    2001-01-01

    A seed island is required for destabilizing the neo-classical tearing mode (NTM), which degrades confinement in long-sustained, high-confinement, high-beta plasmas. The seed island formation due to an MHD event, such as a sawtooth crash, is investigated by applying the improved boundary layer theory of forced magnetic reconnection. This improved theory introduces the non-constant-ψ matching and reveals the complicated features of the reconnection, described by two reconnected fluxes. In the initial evolution, these reconnected fluxes grow on time scales including the ideal time scale, the typical time scale of the MHD event, and the time scale of the resistive kink mode. The surface current is negative, and Δ'(t) scales as S^(1/3). (author)

  12. Securing Relay Networks with Artificial Noise: An Error Performance-Based Approach

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2017-07-01

    Full Text Available We apply the concept of artificial and controlled interference in a two-hop relay network with an untrusted relay, aiming at enhancing the wireless communication secrecy between the source and the destination node. In order to shield the square quadrature amplitude-modulated (QAM signals transmitted from the source node to the relay, the destination node designs and transmits artificial noise (AN symbols to jam the relay reception. The objective of our considered AN design is to degrade the error probability performance at the untrusted relay, for different types of channel state information (CSI at the destination. By considering perfect knowledge of the instantaneous CSI of the source-to-relay and relay-to-destination links, we first present an analytical expression for the symbol error rate (SER performance at the relay. Based on the assumption of an average power constraint at the destination node, we then derive the optimal phase and power distribution of the AN that maximizes the SER at the relay. Furthermore, we obtain the optimal AN design for the case where only statistical CSI is available at the destination node. For both cases, our study reveals that the Gaussian distribution is generally not optimal to generate AN symbols. The presented AN design takes into account practical parameters for the communication links, such as QAM signaling and maximum likelihood decoding.

  13. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2010-01-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  14. Error-rate performance analysis of incremental decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-10-01

    In this paper, we investigate an incremental opportunistic relaying scheme where the selected relay chooses to cooperate only if the source-destination channel is of an unacceptable quality. In our study, we consider regenerative relaying in which the decision to cooperate is based on a signal-to-noise ratio (SNR) threshold and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive a closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact probability density function (PDF) of each hop. Furthermore, we evaluate the asymptotic error performance and the diversity order is deduced. We show that performance simulation results coincide with our analytical results. ©2010 IEEE.

  15. Performance analysis for the bit-error rate of SAC-OCDMA systems

    Science.gov (United States)

    Feng, Gang; Cheng, Wenqing; Chen, Fujun

    2015-09-01

    Under the low-power assumption, Gaussian statistics obtained by invoking the central limit theorem are feasible for predicting the upper bound of performance in the spectral-amplitude-coding optical code division multiple access (SAC-OCDMA) system. However, this approach severely underestimates the bit-error rate (BER) performance of the system under the high-power assumption. Fortunately, the exact negative binomial (NB) model is a perfect replacement for the Gaussian model in prediction and evaluation. Based on NB statistics, a more accurate closed-form expression is analyzed and derived for the SAC-OCDMA system. The experiment shows that the obtained expression provides a more precise prediction of the BER performance under both the low- and high-power assumptions.
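
    To see why the two statistical models diverge at high power, one can compare the tail probability of a negative binomial photocount with that of a Gaussian of matched mean and variance. An illustrative sketch, assuming scipy; the count statistics and decision threshold are hypothetical, and this is not the paper's SAC-OCDMA expression:

        from scipy import stats

        mu, r = 100.0, 5.0               # mean count and NB shape, hypothetical
        p = r / (r + mu)                 # scipy parameterization: mean = r(1-p)/p = mu
        var = mu + mu**2 / r             # NB variance
        threshold = 60                   # hypothetical decision threshold

        p_nb = stats.nbinom.cdf(threshold, r, p)                       # exact tail
        p_gauss = stats.norm.cdf(threshold, loc=mu, scale=var**0.5)    # CLT approximation
        print(f"NB: {p_nb:.3e}  Gaussian: {p_gauss:.3e}")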

  16. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and the geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed, and the distribution of the location ambiguity region is presented for a four-base-station configuration. The localization performance analysis then starts from the four-base-station case by calculating the variation of the RMSE and GDOP. Subsequently, the performance patterns of the TSOA localization algorithm are shown as the localization parameters, such as the number of base stations and the base station layout, are changed. In this way, the characteristics and performance of TSOA localization are revealed. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm. The anti-noise performance of TSOA can be used to reduce the blind zone and the false-location rate of MLAT systems.
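
    The GDOP figure used in this kind of analysis follows from the geometry matrix of unit line-of-sight vectors. A simplified range-based sketch, assuming numpy and ignoring the reference/clock term a full TSOA/TDOA formulation would carry; the stations and targets are hypothetical:

        import numpy as np

        def gdop(stations, target):
            """GDOP = sqrt(trace((H^T H)^-1)) with H the unit line-of-sight matrix."""
            diff = stations - target
            H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
            return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

        stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # km
        print(gdop(stations, np.array([5.0, 5.0])))     # inside the array: low GDOP
        print(gdop(stations, np.array([30.0, 30.0])))   # far outside: geometry degrades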

  17. The spatial accuracy of geographic ecological momentary assessment (GEMA): Error and bias due to subject and environmental characteristics.

    Science.gov (United States)

    Mennis, Jeremy; Mason, Michael; Ambrus, Andreea; Way, Thomas; Henry, Kevin

    2017-09-01

    Geographic ecological momentary assessment (GEMA) combines ecological momentary assessment (EMA) with global positioning systems (GPS) and geographic information systems (GIS). This study evaluates the spatial accuracy of GEMA location data and bias due to subject and environmental data characteristics. Using data for 72 subjects enrolled in a study of urban adolescent substance use, we compared the GPS-based location of EMA responses in which the subject indicated they were at home to the geocoded home address. We calculated the percentage of EMA locations within a sixteenth, eighth, quarter, and half miles from the home, and the percentage within the same tract and block group as the home. We investigated if the accuracy measures were associated with subject demographics, substance use, and emotional dysregulation, as well as environmental characteristics of the home neighborhood. Half of all subjects had more than 88% of their EMA locations within a half mile, 72% within a quarter mile, 55% within an eighth mile, 50% within a sixteenth of a mile, 83% in the correct tract, and 71% in the correct block group. There were no significant associations with subject or environmental characteristics. Results support the use of GEMA for analyzing subjects' exposures to urban environments. Researchers should be aware of the issue of spatial accuracy inherent in GEMA, and interpret results accordingly. Understanding spatial accuracy is particularly relevant for the development of 'ecological momentary interventions' (EMI), which may depend on accurate location information, though issues of privacy protection remain a concern. Copyright © 2017 Elsevier B.V. All rights reserved.
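
    The distance-threshold accuracy measures used here reduce to great-circle distances between each GPS fix and the geocoded home. A sketch with hypothetical coordinates, assuming numpy:

        import numpy as np

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in meters between two latitude/longitude points."""
            r = 6_371_000.0
            p1, p2 = np.radians(lat1), np.radians(lat2)
            dp, dl = p2 - p1, np.radians(lon2 - lon1)
            a = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
            return 2 * r * np.arcsin(np.sqrt(a))

        MILE = 1609.34
        home = (39.981, -75.155)                 # hypothetical geocoded home address
        ema = np.array([[39.9812, -75.1548],     # hypothetical "at home" EMA fixes
                        [39.9790, -75.1600],
                        [39.9900, -75.1400]])
        d = haversine_m(ema[:, 0], ema[:, 1], *home)
        for frac in (1 / 16, 1 / 8, 1 / 4, 1 / 2):
            print(f"within {frac:g} mile: {np.mean(d <= frac * MILE):.0%}")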

  18. Cognitive flexibility: A distinct element of performance impairment due to sleep deprivation.

    Science.gov (United States)

    Honn, K A; Hinson, J M; Whitney, P; Van Dongen, H P A

    2018-03-14

    In around-the-clock operations, reduced alertness due to circadian misalignment and sleep loss causes performance impairment, which can lead to catastrophic errors and accidents. There is mounting evidence that performance on different tasks is differentially affected, but the general principles underlying this differentiation are not well understood. One factor that may be particularly relevant is the degree to which tasks require executive control, that is, control over the initiation, monitoring, and termination of actions in order to achieve goals. A key aspect of this is cognitive flexibility, i.e., the deployment of cognitive control resources to adapt to changes in events. Loss of cognitive flexibility due to sleep deprivation has been attributed to "feedback blunting," meaning that feedback on behavioral outcomes has reduced salience and is therefore less effective at driving behavior modification under changing circumstances. The cognitive mechanisms underlying feedback blunting are as yet unknown. Here we present data from an experiment that investigated the effects of sleep deprivation on performance after an unexpected reversal of stimulus-response mappings, requiring cognitive flexibility to maintain good performance. Nineteen healthy young adults completed a 4-day in-laboratory study. Subjects were randomized to either a total sleep deprivation condition (n = 11) or a control condition (n = 8). A three-phase reversal-learning decision task was administered at baseline, and again after 30.5 h of sleep deprivation, or matching well-rested control. The task was based on a go/no-go task paradigm, in which stimuli were assigned to either a go (response) set or a no-go (no response) set. Each phase of the task included four stimuli (two in the go set and two in the no-go set). After each stimulus presentation, subjects could make a response within 750 ms or withhold their response. They were then shown feedback on the accuracy of

  19. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    Science.gov (United States)

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Different from traditional prescribed performance control (PPC), which requires that the initial errors be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to explore a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlight is that the transient performance of the velocity and altitude tracking errors is satisfactory and the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  20. System-theoretic analysis of due-time performance in production systems

    OpenAIRE

    Jacobs David; Meerkov Semyon M.

    1995-01-01

    Along with the average production rate, the due-time performance is an important characteristic of manufacturing systems. Unlike the production rate, the due-time performance has received relatively little attention in the literature, especially in the context of large volume production. This paper is devoted to this topic. Specifically, the notion of due-time performance is formalized as the probability that the number of parts produced during the shipping period reaches the required shipment size.

  1. Pediatric echocardiograms performed at primary centers: Diagnostic errors and missing links!

    International Nuclear Information System (INIS)

    Saraf, Rahul P; Suresh, PV; Maheshwari, Sunita; Shah, Sejal S

    2015-01-01

    The present study was undertaken to assess the accuracy of pediatric echocardiograms done at non-tertiary centers and to evaluate the relationship of inaccurate interpretations with age, echocardiogram performer and complexity of congenital heart disease (CHD). The echocardiogram reports of 182 consecutive children with CHD (5 days to 16 years) who were evaluated at a non-tertiary center and subsequently referred to our center were reviewed. The age of the child at echocardiogram, the echocardiogram performer and the complexity of CHD were noted. These reports were compared with the echocardiogram done at our center. Discrepancies were noted and categorized. To assess our own error rate, we compared our echocardiogram reports with the findings obtained during surgery (n = 172), CT scan (n = 9) or cardiac catheterization (n = 1). Most of the children at the non-tertiary center (92%) underwent echocardiography by personnel other than a pediatric cardiologist. Overall, diagnostic errors were found in 69/182 (38%) children. Moderate and major discrepancies affecting the final management were found in 42/182 (23%) children. Discrepancies were more frequent when the echocardiogram was done by personnel other than a pediatric cardiologist (P < 0.01) and with moderate- and high-complexity lesions (P = 0.0001). There was no significant difference in the proportion of these discrepancies in children ≤ 1 year vs. >1 year of age. A significant number of pediatric echocardiograms done at non-tertiary centers had discrepancies that affected the management of these children. More discrepancies were seen when the echocardiogram performer was not a pediatric cardiologist and with complex CHD

  2. On the performance of mixed RF/FSO variable gain dual-hop transmission systems with pointing errors

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2013-01-01

    In this work, the performance analysis of a dual-hop relay transmission system composed of asymmetric radio-frequency (RF) and unified free-space optical (FSO) links subject to pointing errors is presented. These unified FSO links account for both

  3. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    Science.gov (United States)

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how shift workers' poorer health may be related to their work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health, which may contribute to errors and decreased performance in the workplace.

  4. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
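
    Both advocated statistics come straight off the empirical cumulative distribution function of the absolute errors. A sketch, assuming numpy; the error sample and the 1.0 threshold are hypothetical:

        import numpy as np

        def ecdf_stats(errors, threshold, confidence=0.95):
            """(1) P(|error| < threshold) and (2) the error amplitude not exceeded
            with the chosen confidence level (an empirical quantile)."""
            a = np.sort(np.abs(errors))
            p_below = np.searchsorted(a, threshold) / a.size
            return p_below, np.quantile(a, confidence)

        # Heavy-tailed, non-zero-centered model errors (hypothetical).
        errors = 0.2 + 0.5 * np.random.default_rng(3).standard_t(df=3, size=1000)
        p, q = ecdf_stats(errors, threshold=1.0)
        print(f"P(|err| < 1.0) = {p:.2f};  95%-confidence amplitude = {q:.2f}")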

  5. Risk of Rare Disasters, Euler Equation Errors and the Performance of the C-CAPM

    DEFF Research Database (Denmark)

    Posch, Olaf; Schrimpf, Andreas

    pricing errors in the C-CAPM. We also show (analytically and in a Monte Carlo study) that implausible estimates of risk aversion and time preference are not puzzling in this framework and emerge as a result of rational pricing errors. While this bias essentially removes the pricing error...

  6. Analysis of the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery

    Directory of Open Access Journals (Sweden)

    Samuel Arba-Mosquera

    2012-01-01

    Conclusions: The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.

  7. System-theoretic analysis of due-time performance in production systems

    Directory of Open Access Journals (Sweden)

    David Jacobs

    1995-01-01

    Full Text Available Along with the average production rate, the due-time performance is an important characteristic of manufacturing systems. Unlike the production rate, the due-time performance has received relatively little attention in the literature, especially in the context of large volume production. This paper is devoted to this topic. Specifically, the notion of due-time performance is formalized as the probability that the number of parts produced during the shipping period reaches the required shipment size. This performance index is analyzed for both lean and mass manufacturing environments. In particular, it is shown that, to achieve a high due-time performance in a lean environment, the production system should be scheduled for a sufficiently small fraction of its average production rate. In mass production, due-time performance arbitrarily close to one can be achieved for any scheduling practice, up to the average production rate.
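
    The formalized index has a simple closed form in a toy model. A sketch under a Bernoulli-machine assumption, using scipy; all numbers are hypothetical:

        from scipy import stats

        slots, p_up = 500, 0.9    # production opportunities per shipping period, efficiency
        shipment = 430            # required shipment size

        # Due-time performance: P(parts produced >= shipment size).
        dtp = 1.0 - stats.binom.cdf(shipment - 1, slots, p_up)
        print(f"due-time performance = {dtp:.4f}")

        # Scheduling a smaller fraction of capacity raises the index toward one.
        print(1.0 - stats.binom.cdf(400 - 1, slots, p_up))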

  8. Development and performance evaluation of a prototype system for prediction of the group error at the maintenance work

    International Nuclear Information System (INIS)

    Yoshino, Kenji; Hirotsu, Yuko

    2000-01-01

    In order to further reduce errors at nuclear power plants, the authors developed and systematized a causal model that predicts group errors during maintenance work. This prototype system has the following features. (1) When a user inputs into the system the presence and degree of the 'feature factors of the maintenance work' to be predicted, the 'organization and organizational factors', and the 'group PSF (Performance Shaping Factor) factors', the targeted maintenance group error can be predicted through a prediction model consisting of seven stages. (2) By utilizing the information in its prediction-result database, the system can be used not only for predicting maintenance group errors but also for various safety activities, such as KYT (danger prediction training) and TBM (Tool Box Meeting). (3) The system predicts 'cooperation errors' at the highest rate, followed by 'detection errors'; 'decision-making errors', 'transfer errors' and 'state-cognition errors' are predicted at almost the same rate. (4) Provided the features of the maintenance work, such as its execution conditions and organization, are known, even a user with neither human-factors knowledge nor experience can predict, logically, systematically and easily, the occurrence and extent of a maintenance group error, something that was previously difficult, within about 15 minutes of working time. (author)

  9. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob; Uysal, Murat; Tsiftsis, Theodoros A.

    2014-01-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.
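
    The diversity order above depends only on the channel memory of each link; a small helper makes the formula concrete (the link memories below are arbitrary example values):

    ```python
    def diversity_order(L_sd, L_sr, L_rd):
        """(L_SkD + 1) + sum over relays m of min(L_SkRm + 1, L_RmD + 1),
        where each L is a channel impulse response memory (length - 1)."""
        assert len(L_sr) == len(L_rd), "one (source-relay, relay-dest) pair per relay"
        return (L_sd + 1) + sum(min(sr + 1, rd + 1) for sr, rd in zip(L_sr, L_rd))

    # Two relays: direct-link memory 2, relay-link memories (3, 1) and (2, 2)
    print(diversity_order(2, L_sr=[3, 2], L_rd=[1, 2]))  # 3 + min(4, 2) + min(3, 3) = 8
    ```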

  10. Performance Ratios of Grid Connected Photovoltaic Systems and Theory of Errors

    Directory of Open Access Journals (Sweden)

    Javier Vilariño-García

    2016-07-01

    Full Text Available This paper presents a detailed analysis of the different levels of dynamic performance of grid-connected photovoltaic systems and their interface, based on a block diagram that traces the course of energy transformation from the solar radiation incident on the solar modules until it becomes useful energy available in the mains. The indexes defined by the Spanish standard UNE-EN 61724 (Monitoring photovoltaic systems: Guidelines for measurement, data exchange and analysis) are explained from the fundamentals of block algebra and the transfer function of linear systems. The accuracy requirements demanded by the aforementioned standard for measuring these parameters are discussed in terms of the theory of errors and the real limits of the results obtained.

  11. Error-rate performance analysis of cooperative OFDMA system with decode-and-forward relaying

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-06-01

    In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to (L_SkD + 1) + Σ_{m=1}^{M} min(L_SkRm + 1, L_RmD + 1) is available, where M is the number of relays, and L_SkD + 1, L_SkRm + 1, and L_RmD + 1 are the lengths of the channel impulse responses of the source-to-destination, source-to-mth-relay, and mth-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings. © 2013 IEEE.

  12. Study on relationship of performance shaping factor in human error probability with prevalent stress of PUSPATI TRIGA reactor operators

    Science.gov (United States)

    Rahim, Ahmad Nabil Bin Ab; Mohamed, Faizal; Farid, Mohd Fairus Abdul; Fazli Zakaria, Mohd; Sangau Ligam, Alfred; Ramli, Nurhayati Binti

    2018-01-01

    Human performance can be affected by prevalent stress, measured here using the Depression, Anxiety and Stress Scale (DASS). The respondents' feedback indicates that the main factor causing the highest prevalent stress is working conditions that require operators to handle critical situations and make prompt critical decisions. Examining the relationship between prevalent stress and performance shaping factors, PSF-Fitness and PSF-Work Process showed positive Pearson correlations, with scores of .763 and .826 and significance levels of p = .028 and p = .012, respectively. These are positive, significant correlations between prevalent stress and the human performance shaping factors (PSFs) related to fitness, work processes and procedures: the higher the respondents' stress level, the higher the scores selected for these PSFs. This is because higher stress levels lead to deteriorating physical health and worsened cognition; in addition, a lack of understanding of the work procedures can itself be a factor that increases stress. The higher these values, the higher the probability that human error will occur. Thus, monitoring the stress level of RTP operators is important to ensure the safety of the RTP.
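
    The reported figures are plain Pearson correlations between paired scores; a minimal sketch with hypothetical operator data (not the study's DASS responses):

    ```python
    from scipy.stats import pearsonr

    # Hypothetical paired scores per operator: DASS stress vs. a PSF rating
    stress      = [12, 18, 9, 22, 15, 11, 20, 17]
    psf_fitness = [ 3,  5, 2,  6,  4,  3,  6,  5]

    r, p = pearsonr(stress, psf_fitness)
    print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # significant if p < .05
    ```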

  13. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2015-01-01

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower bound results), all in terms of Meijer's G functions except for the SI, which is in terms of simple elementary functions. Additionally, we derive asymptotic results for all the expressions derived earlier in terms of the Meijer's G function in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer's G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.
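
    Closed forms in terms of Meijer's G function, like those above, can be evaluated numerically with mpmath's meijerg routine. A minimal sanity check against a known identity (not one of the paper's expressions):

    ```python
    from mpmath import mp, meijerg, exp

    mp.dps = 25  # working precision, in decimal digits

    # Sanity check of the evaluator against a known identity:
    # G^{1,0}_{0,1}(z | - ; 0) = exp(-z)
    z = 2.5
    print(meijerg([[], []], [[0], []], z))  # should match...
    print(exp(-z))                          # ...this value

    # Closed-form CDFs and error rates like the paper's can be evaluated the
    # same way once their (a, b) parameter lists and argument are transcribed.
    ```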

  14. Spectral and Wavefront Error Performance of WFIRST/AFTA Prototype Filters

    Science.gov (United States)

    Quijada, Manuel; Seide, Laurie; Marx, Cathy; Pasquale, Bert; McMann, Joseph; Hagopian, John; Dominguez, Margaret; Gong, Qian; Morey, Peter

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out of band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  15. Spectral and Wavefront Error Performance of WFIRST-AFTA Bandpass Filter Coating Prototypes

    Science.gov (United States)

    Quijada, Manuel A.; Seide, Laurie; Pasquale, Bert A.; McMann, Joseph C.; Hagopian, John G.; Dominguez, Margaret Z.; Gong, Quian; Marx, Catherine T.

    2016-01-01

    The Cycle 5 design baseline for the Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Assets (WFIRST/AFTA) instrument includes a single wide-field channel (WFC) instrument for both imaging and slit-less spectroscopy. The only routinely moving part during scientific observations for this wide-field channel is the element wheel (EW) assembly. This filter-wheel assembly will have 8 positions that will be populated with 6 bandpass filters, a blank position, and a Grism that will consist of a three-element assembly to disperse the full field with an undeviated central wavelength for galaxy redshift surveys. All filter elements in the EW assembly will be made out of fused silica substrates (110 mm diameter) that will have the appropriate bandpass coatings according to the filter designations (Z087, Y106, J129, H158, F184, W149 and Grism). This paper presents and discusses the performance (including spectral transmission and reflected/transmitted wavefront error measurements) of a subset of bandpass filter coating prototypes that are based on the WFC instrument filter complement. The bandpass coating prototypes that are tested in this effort correspond to the Z087, W149, and Grism filter elements. These filter coatings have been procured from three different vendors to assess the most challenging aspects in terms of the in-band throughput, out of band rejection (including the cut-on and cutoff slopes), and the impact the wavefront error distortions of these filter coatings will have on the imaging performance of the wide-field channel in the WFIRST/AFTA observatory.

  16. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-08-12

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower bound results), all in terms of Meijer's G functions except for the SI, which is in terms of simple elementary functions. Additionally, we derive asymptotic results for all the expressions derived earlier in terms of the Meijer's G function in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer's G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.

  17. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
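
    The sample-selection step described here (summarize commission-error tiles, cluster them with affinity propagation, keep the exemplars) can be sketched with scikit-learn; the random feature matrix below is a stand-in for the paper's per-tile spectral summaries:

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    # Stand-in for per-tile spectral summaries of commission-error areas
    # (e.g., per-band means/variances); shape: (n_tiles, n_features)
    rng = np.random.default_rng(0)
    tile_features = rng.normal(size=(200, 6))

    ap = AffinityPropagation(random_state=0).fit(tile_features)
    exemplars = ap.cluster_centers_indices_  # indices of representative tiles
    print(len(exemplars), "clusters; exemplar tiles:", exemplars[:5])
    # Highly representative exemplar tiles would then be used as the sites
    # for creating the additional training samples.
    ```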

  18. Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?

    Directory of Open Access Journals (Sweden)

    Martine Coene

    2015-01-01

    Full Text Available This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields the most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.
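
    Confusion matrices and feature-level error counts of the kind analyzed here are straightforward to compute; a toy sketch with hypothetical stimulus-response pairs (not the study's Dutch corpus):

    ```python
    import pandas as pd

    # Hypothetical stimulus/response consonant pairs from a word-repetition task
    stimuli   = ["p", "b", "t", "d", "p", "t", "b", "d", "p", "t"]
    responses = ["p", "p", "t", "t", "b", "t", "b", "d", "p", "d"]

    confusion = pd.crosstab(pd.Series(stimuli, name="stimulus"),
                            pd.Series(responses, name="response"))
    print(confusion)

    # Feature-level errors, e.g. voicing: /p, t/ voiceless vs. /b, d/ voiced
    voiced = {"b", "d"}
    voicing_errors = sum((s in voiced) != (r in voiced)
                         for s, r in zip(stimuli, responses))
    print(f"voicing transmission errors: {voicing_errors}/{len(stimuli)}")
    ```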

  19. Errors in the calculation of new salary positions and performance premiums – 2017 MERIT exercise

    CERN Multimedia

    Staff Association

    2017-01-01

    Following the receipt of the letters dated May 12th announcing the qualification of their performance (MERIT 2017), and the notification of their salary slips for the month of May, several colleagues have come to us to enquire about the calculation of salary increases and performance premiums. After verification, the Staff Association informed the Management, in a meeting of the Standing Concertation Committee on June 1st, about errors owing to rounding in the applied formulas. James Purvis, Head of the HR department, published in the CERN Bulletin of July 18th an article under the heading “Better precision (rounding)”, which gives a short explanation of these rounding effects. But we would like to give you more precise explanations here. Advancement: On the salary slips for the month of May, the calculations of the advancement and new salary positions were done, by the services of administrative computing in the FAP department, on the basis of the salary, rounded to the nearest franc...

  20. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses

    DEFF Research Database (Denmark)

    Brok, Jesper; Thorlund, Kristian; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Random error may cause misleading evidence in meta-analyses. The required number of participants in a meta-analysis (i.e. information size) should be at least as large as an adequately powered single trial. Trial sequential analysis (TSA) may reduce the risk of random errors due to repetitive testing of accumulating data...
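
    The information-size calculation underlying TSA can be sketched for a binary outcome. The event rate, relative risk reduction, and diversity below are illustrative values, and this is the standard two-proportion approximation rather than the authors' exact procedure:

    ```python
    from scipy.stats import norm

    def required_information_size(p_control, rrr, alpha=0.05, beta=0.20, diversity=0.0):
        """Required number of participants for a meta-analysis of a binary outcome
        (two-sided alpha, power 1 - beta), inflated for between-trial heterogeneity."""
        p_exp = p_control * (1 - rrr)   # event rate under the relative risk reduction
        p_bar = (p_control + p_exp) / 2
        delta = p_control - p_exp       # absolute risk difference
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
        n = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2
        return n / (1 - diversity)      # heterogeneity adjustment

    # 10% control event rate, 20% relative risk reduction, 25% diversity
    print(round(required_information_size(0.10, 0.20, diversity=0.25)))  # ~8571
    ```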

  1. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case-study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  2. How Helpful Are Error Management and Counterfactual Thinking Instructions to Inexperienced Spreadsheet Users' Training Task Performance?

    Science.gov (United States)

    Caputi, Peter; Chan, Amy; Jayasuriya, Rohan

    2011-01-01

    This paper examined the impact of training strategies on the types of errors that novice users make when learning a commonly used spreadsheet application. Fifty participants were assigned to a counterfactual thinking training (CFT) strategy, an error management training strategy, or a combination of both strategies, and completed an easy task…

  3. Computing and analyzing the sensitivity of MLP due to the errors of the i.i.d. inputs and weights based on CLT.

    Science.gov (United States)

    Yang, Sheng-Sung; Ho, Chia-Lu; Siu, Sammy

    2010-12-01

    In this paper, we propose an algorithm based on the central limit theorem to compute the sensitivity of the multilayer perceptron (MLP) due to the errors of the inputs and weights. For simplicity and practicality, all inputs and weights studied here are independent and identically distributed (i.i.d.). The theoretical results derived from the proposed algorithm show that the sensitivity of the MLP is affected by the number of layers and the number of neurons adopted in each layer. To prove the reliability of the proposed algorithm, some experimental results of the sensitivity are also presented, and they match the theoretical ones. The good agreement between the theoretical results and the experimental results verifies the reliability and feasibility of the proposed algorithm. Furthermore, the proposed algorithm can also be applied to compute precisely the sensitivity of the MLP with any available activation functions and any types of i.i.d. inputs and weights.
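
    The paper's sensitivity is computed analytically via the CLT; the experimental check that such results are validated against can be sketched as a Monte Carlo estimate. The network size, tanh activations, and perturbation scales below are illustrative assumptions:

    ```python
    import numpy as np

    def mlp_output(x, weights):
        """Forward pass of a small MLP with tanh activations."""
        a = x
        for W in weights:
            a = np.tanh(W @ a)
        return a

    def mc_sensitivity(weights, sigma_x, sigma_w, n_in, n_mc=5000, seed=0):
        """Monte Carlo estimate of the output deviation variance when i.i.d.
        zero-mean perturbations (std sigma_x, sigma_w) hit inputs and weights."""
        rng = np.random.default_rng(seed)
        devs = []
        for _ in range(n_mc):
            x = rng.normal(size=n_in)
            noisy_w = [W + rng.normal(scale=sigma_w, size=W.shape) for W in weights]
            noisy_x = x + rng.normal(scale=sigma_x, size=n_in)
            devs.append(mlp_output(noisy_x, noisy_w) - mlp_output(x, weights))
        return np.var(devs)

    rng = np.random.default_rng(1)
    weights = [rng.normal(size=(8, 4)), rng.normal(size=(1, 8))]  # a 4-8-1 MLP
    print(mc_sensitivity(weights, sigma_x=0.05, sigma_w=0.05, n_in=4))
    ```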

  4. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Full Text Available Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP component associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  5. Evaluating the Performance Diagnostic Checklist-Human Services to Assess Incorrect Error-Correction Procedures by Preschool Paraprofessionals

    Science.gov (United States)

    Bowe, Melissa; Sellers, Tyra P.

    2018-01-01

    The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…

  6. Effect of Pointing Error on the BER Performance of an Optical CDMA FSO Link with SIK Receiver

    Science.gov (United States)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2017-12-01

    An analytical approach is presented for an optical code division multiple access (OCDMA) system over a free-space optical (FSO) channel, considering the effect of pointing error between the transmitter and the receiver. The analysis is carried out with an optical sequence inverse keying (SIK) correlator receiver with intensity modulation and direct detection (IM/DD) to find the bit error rate (BER) with pointing error. The results are evaluated numerically in terms of signal-to-noise plus multi-access interference (MAI) ratio, BER and power penalty due to pointing error. It is noticed that the OCDMA FSO system is highly affected by pointing error, with significant power penalty at a BER of 10^-6 and 10^-9. For example, the penalty at a BER of 10^-9 is found to be 9 dB, corresponding to a normalized pointing error of 1.4 for 16 users with a processing gain of 256, and is reduced to 6.9 dB when the processing gain is increased to 1,024.
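
    The power-penalty metric used here can be reproduced generically: find the SNR needed to reach the target BER with and without the impairment and take the dB difference. A minimal sketch under a simplified Gaussian-noise BER model; the `loss` factor standing in for the effective SNR reduction from pointing error and MAI is an illustrative assumption, not the paper's channel model:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import erfc

    def ber(snr_db, loss=1.0):
        """Gaussian-noise BER = Q(sqrt(loss * SNR)); loss < 1 stands in for the
        effective SNR reduction caused by pointing error and MAI."""
        snr = 10 ** (snr_db / 10) * loss
        return 0.5 * erfc(np.sqrt(snr / 2))

    def power_penalty(target_ber, loss):
        """Extra SNR (dB) needed to restore the target BER under the impairment."""
        required = lambda l: brentq(lambda s: ber(s, l) - target_ber, 0.0, 60.0)
        return required(loss) - required(1.0)

    print(power_penalty(1e-9, loss=0.2))  # ~7 dB for an 80% effective SNR loss
    ```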

  7. Potential Functional Embedding Theory at the Correlated Wave Function Level. 2. Error Sources and Performance Tests.

    Science.gov (United States)

    Cheng, Jin; Yu, Kuang; Libisch, Florian; Dieterich, Johannes M; Carter, Emily A

    2017-03-14

    Quantum mechanical embedding theories partition a complex system into multiple spatial regions that can use different electronic structure methods within each, to optimize trade-offs between accuracy and cost. The present work incorporates accurate but expensive correlated wave function (CW) methods for a subsystem containing the phenomenon or feature of greatest interest, while self-consistently capturing quantum effects of the surroundings using fast but less accurate density functional theory (DFT) approximations. We recently proposed two embedding methods [for a review, see: Acc. Chem. Res. 2014, 47, 2768]: density functional embedding theory (DFET) and potential functional embedding theory (PFET). DFET provides a fast but non-self-consistent density-based embedding scheme, whereas PFET offers a more rigorous theoretical framework to perform fully self-consistent, variational CW/DFT calculations [as defined in part 1, CW/DFT means subsystem 1(2) is treated with CW(DFT) methods]. When originally presented, PFET was only tested at the DFT/DFT level of theory as a proof of principle within a planewave (PW) basis. Part 1 of this two-part series demonstrated that PFET can be made to work well with mixed Gaussian type orbital (GTO)/PW bases, as long as optimized GTO bases and consistent electron-ion potentials are employed throughout. Here in part 2 we conduct the first PFET calculations at the CW/DFT level and compare them to DFET and full CW benchmarks. We test the performance of PFET at the CW/DFT level for a variety of types of interactions (hydrogen bonding, metallic, and ionic). By introducing an intermediate CW/DFT embedding scheme denoted DFET/PFET, we show how PFET remedies different types of errors in DFET, serving as a more robust type of embedding theory.

  8. Potential loss of revenue due to errors in clinical coding during the implementation of the Malaysia diagnosis related group (MY-DRG®) Casemix system in a teaching hospital in Malaysia.

    Science.gov (United States)

    Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed

    2018-01-25

    The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Groups (DRGs) codes, especially if the hospital is using a Casemix System as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG®) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC). This ISC re-examined and re-coded the error code that was originally entered by the hospital coders. The pre- and post-coding results were compared, and if there was any disagreement, the codes by the ISC were considered the accurate codes. The cases were then re-grouped using a MY-DRG® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the highest, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464), respectively. The coding errors resulted in the assignment of different MY-DRG® codes in 74.0% (307/415) of the cases. From this result, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG® code was RM654,303.91. The quality of coding is a crucial aspect in implementing casemix systems. Intensive re-training and the close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.

  9. SU-E-J-164: Estimation of DVH Variation for PTV Due to Interfraction Organ Motion in Prostate VMAT Using Gaussian Error Function

    International Nuclear Information System (INIS)

    Lewis, C; Jiang, R; Chow, J

    2015-01-01

    Purpose: We developed a method to predict the change of DVH for PTV due to interfraction organ motion in prostate VMAT without repeating the CT scan and treatment planning. The method is based on a pre-calculated patient database with DVH curves of PTV modelled by the Gaussian error function (GEF). Methods: For a group of 30 patients with different prostate sizes, their VMAT plans were recalculated by shifting their PTVs 1 cm with 10 increments in the anterior-posterior, left-right and superior-inferior directions. The DVH curve of PTV in each replan was then fitted by the GEF to determine parameters describing the shape of curve. Information of parameters, varying with the DVH change due to prostate motion for different prostate sizes, was analyzed and stored in a database of a program written by MATLAB. Results: To predict a new DVH for PTV due to prostate interfraction motion, prostate size and shift distance with direction were input to the program. Parameters modelling the DVH for PTV were determined based on the pre-calculated patient dataset. From the new parameters, DVH curves of PTVs with and without considering the prostate motion were plotted for comparison. The program was verified with different prostate cases involving interfraction prostate shifts and replans. Conclusion: Variation of DVH for PTV in prostate VMAT can be predicted using a pre-calculated patient database with DVH curve fitting. The computing time is fast because CT rescan and replan are not required. This quick DVH estimation can help radiation staff to determine if the changed PTV coverage due to prostate shift is tolerable in the treatment. However, it should be noted that the program can only consider prostate interfraction motions along three axes, and is restricted to prostate VMAT plan using the same plan script in the treatment planning system
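
    The GEF fit at the core of the method can be sketched with scipy's curve_fit; the two-parameter DVH model below (midpoint D50 and steepness sigma) and the sample values are illustrative assumptions, not the study's data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def gef_dvh(dose, d50, sigma):
        """Cumulative DVH modeled with the Gaussian error function: the relative
        volume receiving at least `dose`; sigma sets the fall-off steepness."""
        return 0.5 * (1 - erf((dose - d50) / (sigma * np.sqrt(2.0))))

    # Hypothetical sampled DVH points for a PTV (dose in Gy, relative volume)
    dose = np.linspace(40.0, 70.0, 31)
    volume = gef_dvh(dose, 57.0, 1.5) + np.random.default_rng(0).normal(0, 0.005, dose.size)

    (d50, sigma), _ = curve_fit(gef_dvh, dose, volume, p0=[55.0, 2.0])
    print(f"fitted D50 = {d50:.2f} Gy, sigma = {sigma:.2f} Gy")
    # A database of (d50, sigma) versus prostate size and shift then lets the
    # program reconstruct a shifted-PTV DVH without a rescan or replan.
    ```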

  10. An Empirical Study on Human Performance according to the Physical Environment (Potential Human Error Hazard) in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, In Seok; Seong, Proong Hyun

    2014-01-01

    Managing the physical environment for safety is particularly important in the nuclear industry. Even when physical environment factors such as lighting and noise satisfy management standards, they can act as background factors that cause human error and affect human performance. Because the consequences of human error and degraded human performance arising from the physical environment are severe, the requirement standards should include specific criteria. In particular, in order to avoid human errors caused by extremely low or rapidly changing illumination intensity and by masking effects such as power disconnection, plans for a better visual environment and better task performance should be made, based on a careful study of efficient ways to establish and maintain such conditions.

  11. High-Performance Region-of-Interest Image Error Concealment with Hiding Technique

    Directory of Open Access Journals (Sweden)

    Shih-Chang Hsia

    2010-01-01

    Full Text Available Recently, region-of-interest (ROI) based image coding has become a popular topic. Since the ROI area contains the most important information of an image, it must be protected from decoding errors caused by channel loss or unexpected attack. This paper presents an efficient error concealment method to recover ROI information with a hiding technique. Based on a progressive transformation, the low-frequency components of the ROI are encoded so as to disperse its information into the high-frequency bands of the original image. Protection is achieved by extracting the ROI coefficients from the damaged image without adding extra information. Simulation results show that the proposed method can efficiently reconstruct the ROI image when the ROI bit-stream contains errors, and the resulting PSNR outperforms conventional error concealment techniques by 2 to 5 dB.

  12. Confidence Intervals Verification for Simulated Error Rate Performance of Wireless Communication System

    KAUST Repository

    Smadi, Mahmoud A.; Ghaeb, Jasim A.; Jazzar, Saleh; Saraereh, Omar A.

    2012-01-01

    In this paper, we derived an efficient simulation method to evaluate the error rate of wireless communication system. Coherent binary phase-shift keying system is considered with imperfect channel phase recovery. The results presented demonstrate

  13. Suppression of error in qubit rotations due to Bloch-Siegert oscillation via the use of off-resonant Raman excitation

    International Nuclear Information System (INIS)

    Pradhan, Prabhakar; Cardoso, George C; Shahriar, M S

    2009-01-01

    The rotation of a quantum bit (qubit) is an important step in quantum computation. The rotation is generally performed using a Rabi oscillation. In a direct two-level qubit system, if the Rabi frequency is comparable to its resonance frequency, the rotating wave approximation is not valid, and the Rabi oscillation is accompanied by the so-called Bloch-Siegert oscillation (BSO) that occurs at twice the frequency of the driving field. One implication of the BSO is that for a given interaction time and Rabi frequency, the degree of rotation experienced by the qubit depends explicitly on the initial phase of the driving field. If this effect is not controlled, it leads to an apparent fluctuation in the rotation of the qubit. Here we show that when an off-resonant lambda system is used to realize a two-level qubit, the BSO is inherently negligible, thus eliminating this source of potential error.

  14. Performance Analysis of Amplify-and-Forward Two-Way Relaying with Co-Channel Interference and Channel Estimation Error

    KAUST Repository

    Liang Yang,

    2013-06-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal power co-channel interferers (CCI). Specifically, we first consider AF TWRN with an interference-limited relay and two noisy nodes with channel estimation errors and CCI. We derive the approximate signal-to-interference plus noise ratio expressions and then use them to evaluate the outage probability, error probability, and achievable rate. Subsequently, to investigate the joint effects of the channel estimation error and CCI on the system performance, we extend our analysis to a multiple-relay network and derive several asymptotic performance expressions. For comparison purposes, we also provide the analysis for the relay selection scheme under the total power constraint at the relays. For AF TWRN with channel estimation error and CCI, numerical results show that the performance of the relay selection scheme is not always better than that of the all-relay participating case. In particular, the relay selection scheme can improve the system performance in the case of high power levels at the sources and small powers at the relays.

  15. Quantitative developments in the cognitive reliability and error analysis method (CREAM) for the assessment of human performance

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Zio, Enrico; Librizzi, Massimo

    2006-01-01

    The current 'second generation' approaches in human reliability analysis focus their attention on the contextual conditions under which a given action is performed rather than on the notion of inherent human error probabilities, as was done in the earlier 'first generation' techniques. Among the 'second generation' methods, this paper considers the Cognitive Reliability and Error Analysis Method (CREAM) and proposes some developments with respect to a systematic procedure for computing probabilities of action failure. The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm which is here further extended to include uncertainty on the qualification of the conditions under which the action is performed and to account for the fact that the effects of the common performance conditions (CPCs) on performance reliability may not all be equal. By the proposed approach, the probability of action failure is estimated by rating the performance conditions in terms of their effect on the action
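
    For orientation, the basic CREAM quantification idea, scaling a nominal failure probability by the rated effect of each common performance condition, can be sketched as follows. The multiplier values are hypothetical placeholders rather than the published CREAM tables, and the ratings are taken as crisp, whereas the paper's fuzzy extension would weight and blur them:

    ```python
    # Illustrative only: hypothetical CPC multipliers, not the CREAM tables.
    NOMINAL_HEP = 0.01  # nominal failure probability for the cognitive activity

    CPC_MULTIPLIERS = {
        "adequacy_of_organisation": {"efficient": 0.8, "adequate": 1.0, "deficient": 1.5},
        "working_conditions":       {"advantageous": 0.8, "compatible": 1.0, "incompatible": 2.0},
        "available_time":           {"adequate": 0.5, "temporarily_inadequate": 1.0,
                                     "continuously_inadequate": 5.0},
    }

    def action_failure_probability(ratings):
        """Scale the nominal HEP by the rated effect of each common performance
        condition, mirroring how CREAM lets context drive the estimate."""
        p = NOMINAL_HEP
        for cpc, level in ratings.items():
            p *= CPC_MULTIPLIERS[cpc][level]
        return min(max(p, 1e-5), 1.0)  # keep within plausible probability bounds

    print(action_failure_probability({"adequacy_of_organisation": "deficient",
                                      "working_conditions": "incompatible",
                                      "available_time": "temporarily_inadequate"}))
    ```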

  16. PERFORMANCE DETERIORATION OF THERMOSIPHON SOLAR FLAT PLATE WATER HEATER DUE TO SCALING

    OpenAIRE

    arunachala umesh chandavar

    2011-01-01

    The performance of flat-plate solar water heaters deteriorates within five to twelve years of installation due to factors related to manufacturing, operating conditions, lack of maintenance, etc. The problem due to scaling is especially significant, as it depends on the quality of the water used. The remaining factors are system dependent and could be overcome by quality production. Software is developed by incorporating the Hottel-Whillier-Bliss (H-W-B) equation to ascertain the effect of scaling o...
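
    The H-W-B relation used by such software gives the instantaneous collector efficiency as eta = F_R(tau*alpha) - F_R*U_L*(T_in - T_amb)/G_t. A minimal sketch, with typical flat-plate parameter values assumed for illustration and a crude derating factor standing in for the effect of scaling on heat removal:

    ```python
    def hwb_efficiency(G_t, T_in, T_amb, FR_tau_alpha=0.78, FR_UL=5.5, fr_derate=1.0):
        """Hottel-Whillier-Bliss instantaneous efficiency:
        eta = FR*(tau*alpha) - FR*UL*(T_in - T_amb)/G_t.
        fr_derate (<= 1) crudely models the drop in the heat-removal factor
        as scale builds up in the riser tubes (illustrative assumption)."""
        return fr_derate * (FR_tau_alpha - FR_UL * (T_in - T_amb) / G_t)

    # Clean collector vs. a 15% heat-removal degradation due to scaling
    for derate in (1.0, 0.85):
        print(derate, round(hwb_efficiency(900.0, 45.0, 25.0, fr_derate=derate), 3))
    ```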

  17. The Effect of an Electronic Checklist on Critical Care Provider Workload, Errors, and Performance.

    Science.gov (United States)

    Thongprayoon, Charat; Harrison, Andrew M; O'Horo, John C; Berrios, Ronaldo A Sevilla; Pickering, Brian W; Herasevich, Vitaly

    2016-03-01

    The strategy used to improve effective checklist use in the intensive care unit (ICU) setting is essential for checklist success. This study aimed to test the hypothesis that an electronic checklist could reduce ICU provider workload, errors, and time to checklist completion, as compared to a paper checklist. This was a simulation-based study conducted at an academic tertiary hospital. All participants completed checklists for 6 ICU patients: 3 using an electronic checklist and 3 using an identical paper checklist. In both scenarios, participants had full access to the existing electronic medical record system. The outcomes measured were workload (defined using the National Aeronautics and Space Administration Task Load Index [NASA-TLX]), the number of checklist errors, and time to checklist completion. Two independent clinician reviewers, blinded to participant results, served as the reference standard for checklist error calculation. Twenty-one ICU providers participated in this study. This resulted in the generation of 63 simulated electronic checklists and 63 simulated paper checklists. The median NASA-TLX score was 39 for the electronic checklist and 50 for the paper checklist (P = .005). The median number of checklist errors for the electronic checklist was 5, while the median number of checklist errors for the paper checklist was 8 (P = .003). The time to checklist completion was not significantly different between the 2 checklist formats (P = .76). The electronic checklist significantly reduced provider workload and errors without any measurable difference in the amount of time required for checklist completion. This demonstrates that electronic checklists are feasible and desirable in the ICU setting. © The Author(s) 2014.
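
    Comparisons of NASA-TLX medians like the one reported here (39 vs. 50, P = .005) are typically made with a paired nonparametric test, since each provider completed both formats. A sketch with hypothetical scores, not the study's data:

    ```python
    from scipy.stats import wilcoxon

    # Hypothetical paired NASA-TLX scores (same provider, both checklist formats)
    electronic = [39, 42, 35, 40, 38, 44, 36, 41, 37, 43]
    paper      = [50, 48, 46, 55, 47, 52, 49, 51, 45, 53]

    stat, p = wilcoxon(electronic, paper)  # paired, non-normal workload scores
    print(f"Wilcoxon W = {stat}, p = {p:.4f}")
    ```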

  18. On the Performance of Free-Space Optical Systems over Generalized Atmospheric Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-01-01

    . Then capitalizing on these unified results, unified exact closed-form expressions for various performance metrics of FSO link transmission systems are offered, such as, the outage probability (OP), the higher-order amount of fading (AF), the average error rate

  19. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently

  20. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing an RF link with On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of bit error rate and SNR for a given number of users (receivers).
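
    A minimal sketch of the ME-coding idea: spend redundant codeword length to give frequent symbols low-Hamming-weight codewords, which cost no transmit energy on '0' chips under On-Off Keying. The symbol probabilities below are illustrative assumptions:

    ```python
    from itertools import product

    def minimum_energy_codebook(symbol_probs, codeword_len):
        """Assign low-Hamming-weight codewords to high-probability symbols so
        that, under On-Off Keying, expected transmitted energy is minimized."""
        codewords = sorted(product([0, 1], repeat=codeword_len),
                           key=lambda cw: sum(cw))  # ascending count of 1s
        ranked = sorted(symbol_probs, key=symbol_probs.get, reverse=True)
        return {sym: codewords[i] for i, sym in enumerate(ranked)}

    # Four symbols coded with 4 bits (redundant) instead of the minimal 2
    probs = {"s0": 0.5, "s1": 0.3, "s2": 0.15, "s3": 0.05}
    book = minimum_energy_codebook(probs, codeword_len=4)
    avg_ones = sum(p * sum(book[s]) for s, p in probs.items())
    print(book)
    print(f"expected 1s per codeword = {avg_ones:.2f}")  # 0.50 vs. 0.55 with 2 bits
    ```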

  1. Performance-based gear metrology kinematic, transmission, error computation and diagnosis

    CERN Document Server

    Mark, William D

    2012-01-01

    A mathematically rigorous explanation of how manufacturing deviations and damage on the working surfaces of gear teeth cause transmission-error contributions to vibration excitations Some gear-tooth working-surface manufacturing deviations of significant amplitude cause negligible vibration excitation and noise, yet others of minuscule amplitude are a source of significant vibration excitation and noise.   Presently available computer-numerically-controlled dedicated gear metrology equipment can measure such error patterns on a gear in a few hours in sufficient detail to enable

  2. Performance of multi-service system with retrials due to blocking and called-party-busy

    DEFF Research Database (Denmark)

    Stepanov, S.N.; Kokina, O.A.; Iversen, Villy Bæk

    2008-01-01

    In this paper we construct a model of a multi-service system with an arbitrary number of bandwidth flow demands, taking into account retrials due to both blocking along the route and to called-party-busy. An approximate algorithm for estimation of key performance measures is proposed...

  3. Performance analysis of amplify-and-forward two-way relaying with co-channel interference and channel estimation error

    KAUST Repository

    Yang, Liang

    2013-04-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal power co-channel interferers (CCI). Specifically, we consider AF TWRN with an interference-limited relay and two noisy nodes with channel estimation error and CCI. We derive the approximate signal-to-interference plus noise ratio expressions and then use these expressions to evaluate the outage probability and error probability. Numerical results show that the approximate closed-form expressions are very close to the exact ones. © 2013 IEEE.

  4. Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder

    Science.gov (United States)

    Hall, Steven T.; Post, Christopher J.

    2009-01-01

    Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…

  5. Risk of Performance and Behavioral Health Decrements Due to Inadequate Cooperation, Coordination, Communication, and Psychosocial Adaptation within a Team

    Science.gov (United States)

    Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.

    2015-01-01

    A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III1). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures

  6. Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.

    Science.gov (United States)

    Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P

    2015-12-14

    In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors, and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.

  7. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  8. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  9. Correction for ‘artificial’ electron disequilibrium due to cone-beam CT density errors: implications for on-line adaptive stereotactic body radiation therapy of lung

    International Nuclear Information System (INIS)

    Disher, Brandon; Hajdok, George; Craig, Jeff; Gaede, Stewart; Battista, Jerry J; Wang, An

    2013-01-01

    Cone-beam computed tomography (CBCT) has rapidly become a clinically useful imaging modality for image-guided radiation therapy. Unfortunately, CBCT images of the thorax are susceptible to artefacts due to scattered photons, beam hardening, lag in data acquisition, and respiratory motion during a slow scan. These limitations cause dose errors when CBCT image data are used directly in dose computations for on-line, dose adaptive radiation therapy (DART). The purpose of this work is to assess the magnitude of errors in CBCT numbers (HU), and determine the resultant effects on derived tissue density and computed dose accuracy for stereotactic body radiation therapy (SBRT) of lung cancer. Planning CT (PCT) images of three lung patients were acquired using a Philips multi-slice helical CT simulator, while CBCT images were obtained with a Varian On-Board Imaging system. To account for erroneous CBCT data, three practical correction techniques were tested: (1) conversion of CBCT numbers to electron density using phantoms, (2) replacement of individual CBCT pixel values with bulk CT numbers, averaged from PCT images for tissue regions, and (3) limited replacement of CBCT lung pixel values (LCT) likely to produce artificial lateral electron disequilibrium. For each corrected CBCT data set, lung SBRT dose distributions were computed for a 6 MV volume modulated arc therapy (VMAT) technique within the Philips Pinnacle treatment planning system. The reference prescription dose was set such that 95% of the planning target volume (PTV) received at least 54 Gy (i.e. D95). Further, we used the relative depth dose factor as an a priori index to predict the effects of incorrect low tissue density on computed lung dose in regions of severe electron disequilibrium. CT number profiles from co-registered CBCT and PCT patient lung images revealed many reduced lung pixel values in CBCT data, with some pixels corresponding to vacuum (−1000 HU). Similarly, CBCT data in a plastic lung
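
    Correction technique (3), limited replacement of lung pixel values, can be sketched as a masked threshold operation; the HU thresholds and bulk lung value below are illustrative assumptions, not the paper's calibrated values:

    ```python
    import numpy as np

    def limited_lung_correction(cbct_hu, lung_mask, bulk_lung_hu=-700.0, floor_hu=-850.0):
        """Replace only those lung voxels whose CBCT number is implausibly low
        (artefacts that risk 'artificial' lateral electron disequilibrium in the
        dose engine) with a bulk lung value; all other voxels are left untouched."""
        corrected = cbct_hu.copy()
        bad = lung_mask & (cbct_hu < floor_hu)  # e.g. voxels driven toward -1000 HU
        corrected[bad] = bulk_lung_hu
        return corrected

    rng = np.random.default_rng(0)
    hu = rng.normal(-700.0, 120.0, size=(64, 64))
    hu[10:20, 10:20] = -990.0  # artefactual near-vacuum patch
    mask = np.ones_like(hu, dtype=bool)
    print(np.sum(limited_lung_correction(hu, mask) != hu), "voxels replaced")
    ```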

  10. Degradation of the performance of microchannel heat exchangers due to flow maldistribution

    DEFF Research Database (Denmark)

    Nielsen, Kaspar Kirstein; Engelbrecht, Kurt; Christensen, Dennis

    2012-01-01

    The effect of flow maldistribution on the performance of microchannel parallel plate heat exchangers is investigated using an established single blow numerical model and cyclic steady-state regenerator experiments. It is found that as the variation of the individual channel thickness in a particular stack (heat exchanger) increases, the actual performance of the heat exchanger decreases significantly, deviating from the expected nominal performance. We show that this is due to both the varying fluid flow velocities in each individual channel and the thermal cross talk between the channels...

  11. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.

    2015-08-18

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.

  12. The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstract away many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis beyond these metrics and capture the bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details, similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.

  13. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  14. Hesitation and error: Does product placement in an emergency department influence hand hygiene performance?

    Science.gov (United States)

    Stackelroth, Jenny; Sinnott, Michael; Shaban, Ramon Z

    2015-09-01

    Existing research has consistently demonstrated poor compliance by health care workers with hand hygiene standards. This study examined the extent to which incorrect hand hygiene occurred as a result of the inability to easily distinguish between different hand hygiene solutions placed at washbasins. A direct observational method employing ceiling-mounted, motion-activated video camera surveillance was used in a tertiary referral emergency department in Australia. Data from a 24-hour period on day 10 of the recordings were collected into the Hand Hygiene-Technique Observation Tool based on Feldman's criteria as modified by Larson and Lusk. A total of 459 episodes of hand hygiene were recorded by 6 video cameras in the 24-hour period. The observed overall rate of error in this study was 6.2% (27 episodes). In addition, an overall rate of hesitation of 5.8% (26 episodes) was observed. There was no statistically significant difference in error rates between the 2 hand washbasin configurations. The amelioration of causes of error and hesitation by standardization of the appearance and relative positioning of hand hygiene solutions at washbasins may translate into improved hand hygiene behaviors. Placement of moisturizer at the washbasin may not be essential. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  15. Student Self-Assessment and Faculty Assessment of Performance in an Interprofessional Error Disclosure Simulation Training Program.

    Science.gov (United States)

    Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang

    2017-04-01

    Objectives. To conduct a prospective evaluation of the effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. Instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessments of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data were reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for the communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in the student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions of achievement of the IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.

  16. Influence of pitch, twist, and taper on a blade's performance loss due to roughness

    Energy Technology Data Exchange (ETDEWEB)

    Tangler, J.L. [National Renewable Energy Laboratory, Golden, Colorado (United States)

    1997-08-01

    The purpose of this study was to determine the influence of blade geometric parameters such as pitch, twist, and taper on a blade's sensitivity to leading edge roughness. The approach began with an evaluation of available test data of performance degradation due to roughness effects for several rotors. In addition to airfoil geometry, this evaluation suggested that a rotor's sensitivity to roughness was also influenced by the blade geometric parameters. Parametric studies were conducted using the PROP computer code with wind-tunnel airfoil characteristics for smooth and rough surface conditions to quantify the performance loss due to roughness for tapered and twisted blades relative to a constant-chord, non-twisted blade at several blade pitch angles. The results indicate that a constant-chord, non-twisted blade pitched toward stall will have the greatest losses due to roughness. The use of twist, taper, and positive blade pitch angles all help reduce the angle-of-attack distribution along the blade for a given wind speed and the associated performance degradation due to roughness. (au)

  17. Influence of pitch, twist, and taper on a blade's performance loss due to roughness

    Energy Technology Data Exchange (ETDEWEB)

    Tangler, J.L. [National Renewable Energy Lab., Golden, CO (United States)

    1996-12-31

    The purpose of this study was to determine the influence of blade geometric parameters such as pitch, twist, and taper on a blade's sensitivity to leading edge roughness. The approach began with an evaluation of available test data of performance degradation due to roughness effects for several rotors. In addition to airfoil geometry, this evaluation suggested that a rotor's sensitivity to roughness was also influenced by the blade geometric parameters. Parametric studies were conducted using the PROP computer code with wind-tunnel airfoil characteristics for smooth and rough surface conditions to quantify the performance loss due to roughness for tapered and twisted blades relative to a constant-chord, non-twisted blade at several blade pitch angles. The results indicate that a constant-chord, non-twisted blade pitched toward stall will have the greatest losses due to roughness. The use of twist, taper, and positive blade pitch angles all help reduce the angle-of-attack distribution along the blade for a given wind speed and the associated performance degradation due to roughness. 8 refs., 6 figs.

  18. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  19. Changes in Handset Performance Measures due to Spherical Radiation Pattern Measurement Uncertainty

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ødum; Pedersen, Gert Frølund

    An important characteristic of a mobile handset is its ability to receive and transmit power. One way to characterize the performance of a handset in this respect is to use measurements of the spherical radiation pattern from which the total radiated power (TRP), total isotropic sensitivity (TIS)... can be computed. The mounting of the handset is one part of the measurement system that may introduce errors in standardized performance measurements. Radiation patterns of six handsets have been measured while they were mounted at various offsets from the reference position defined by the Cellular Telecommunications & Internet Association (CTIA) certification. The changes in the performance measures are investigated for both the GSM-900 and the GSM-1800 band. Despite the deliberately large deviations from the reference position, the changes in TRP and TIS are generally within ±0.5dB with a maximum of about 1.4dB. For the MEG values the results depend on the orientation of the handset with respect to the environment. Standard deviations up to about 0.5dB and a maximum deviation of about 1.6dB were found.

  20. Methods for the performance enhancement and the error characterization of large diameter ground-based diffractive telescopes.

    Science.gov (United States)

    Zhang, Haolin; Liu, Hua; Lizana, Angel; Xu, Wenbin; Campos, Juan; Lu, Zhenwu

    2017-10-30

    This paper is devoted to the improvement of ground-based telescopes based on diffractive primary lenses, which provide a larger aperture and relaxed surface tolerances compared to non-diffractive telescopes. We performed two different studies devised to thoroughly characterize and improve the performance of ground-based diffractive telescopes. On the one hand, we experimentally validated the suitability of the stitching error theory, useful to characterize the error performance of subaperture diffractive telescopes. On the other hand, we proposed a novel ground-based telescope incorporated in a Cassegrain architecture, leading to a telescope with enhanced performance. To test the stitching error theory, a 300 mm diameter, 2000 mm focal length transmissive stitching diffractive telescope, based on a three-belt subaperture primary lens, was designed and implemented. The telescope achieves a 78 cy/mm resolution within a 0.15 degree field of view, while the working wavelength ranges from 582.8 nm to 682.8 nm, without any stitching error. However, the long optical track (35.49 m) introduces air turbulence that reduces the final image contrast in the ground-based test. To enhance this result, a compacted Cassegrain ground-based diffractive (CGD) telescope of the same diameter, with a total track distance of 1.267 m, was implemented for the same wavelength range. The ground-based CGD telescope provides higher resolution and better contrast than the transmissive configuration. Star and resolution tests were experimentally performed to compare the CGD and the transmissive configurations, confirming the suitability of the proposed ground-based CGD telescope.

  1. Comparison of the effect of paper and computerized procedures on operator error rate and speed of performance

    International Nuclear Information System (INIS)

    Converse, S.A.; Perez, P.B.; Meyer, S.; Crabtree, W.

    1994-01-01

    The Computerized Procedures Manual (COPMA-II) is an advanced procedure manual that can be used to select and execute procedures, to monitor the state of plant parameters, and to help operators track their progress through plant procedures. COPMA-II was evaluated in a study that compared the speed and accuracy of operators' performance when they performed with COPMA-II and traditional paper procedures. Sixteen licensed reactor operators worked in teams of two to operate the Scales Pressurized Water Reactor Facility at North Carolina State University. Each team performed one change of power with each type of procedure to simulate performance under normal operating conditions. Teams then performed one accident scenario with COPMA-II and one with paper procedures. Error rates, performance times, and subjective estimates of workload were collected, and were evaluated for each combination of procedure type and scenario type. For the change of power task, accuracy and response time were not different for COPMA-II and paper procedures. Operators did initiate responses to both accident scenarios fastest with paper procedures. However, procedure type did not moderate response completion time for either accident scenario. For accuracy, performance with paper procedures resulted in twice as many errors as did performance with COPMA-II. Subjective measures of mental workload for the accident scenarios were not affected by procedure type

  2. The Effect of Exposure to High Noise Levels on the Performance and Rate of Error in Manual Activities.

    Science.gov (United States)

    Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra

    2016-03-01

    Sound is among the significant environmental factors affecting people's health; it plays an important role in both physical and psychological injuries, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on the performance and rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females) in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, using two factors of physical features and the creation of different conditions of sound source, as well as applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the levels of sound and the length of performance. Moreover, the participants' performance was significantly different at different sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on the individuals' performance, i.e., performance was decreased.

  3. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation did, however, improve substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    Science.gov (United States)

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  5. Comparison of the errors in basic life support performance after training using the 2000 and 2005 ERC guidelines.

    Science.gov (United States)

    Owen, Andrew; Kocierz, Laura; Aggarwal, Naresh; Hulme, Jonathan

    2010-06-01

    The importance of immediate cardiopulmonary resuscitation (CPR) and defibrillation after cardiac arrest is established. The 2005 European Resuscitation Council (ERC) guidelines were altered to try to improve survival after cardiac arrest. This observational study compares the errors in basic life support (BLS) performance after training using the 2000 or 2005 guidelines. First-year healthcare students at the University of Birmingham, United Kingdom, were taught adult BLS in a standardised 8-h course: an historical group with the previous ERC guidelines (Old), the other with the 2005 ERC guidelines (New). 2537 (Old 1773; New 764) students were trained and assessed in BLS. There was no difference in overall error rate between Old and New (5.53% vs. 6.70%, p>0.05) or in adherence to the sequence of the respective BLS algorithm. The New group ("hands in centre of the chest") had significantly more erroneous hand positions compared to the Old group (5.23% vs. 1.64%, p<0.05). The 2005 ERC guidelines do not significantly improve correct BLS performance. Removal of hand placement measurement results in a significant increase in hand position errors. The clinical benefit of an increased number of compressions impaired by worsened hand positioning is unknown and requires further study. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Yilmaz, Ferkan

    2013-01-01

    In this work, the performance analysis of a dual-hop relay transmission system composed of asymmetric radio-frequency (RF)/free-space optical (FSO) links with pointing errors is presented. More specifically, we build on the system model presented in [1] to derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the higher-order amount of fading, average error rate for binary and M-ary modulation schemes, and the ergodic capacity, all in terms of Meijer's G functions. Our new analytical results were also verified via computer-based Monte-Carlo simulation results.
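    As a rough illustration of the Monte-Carlo check mentioned above, the sketch below simulates the end-to-end SNR of a dual-hop system, assuming a Nakagami-m RF hop (Gamma-distributed SNR), a Gamma-Gamma FSO hop with pointing errors, and a CSI-assisted amplify-and-forward relay with equivalent SNR g1·g2/(g1+g2+1). The parameter values and the specific system model are illustrative assumptions, not the exact setup of [1].

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Hop 1: RF link, Nakagami-m fading -> SNR is Gamma distributed.
m, gamma1_bar = 2.0, 10.0                       # fading parameter, mean SNR
g1 = rng.gamma(shape=m, scale=gamma1_bar / m, size=N)

# Hop 2: FSO link, Gamma-Gamma turbulence with pointing errors (assumed model).
alpha, beta = 4.0, 2.0                          # turbulence parameters
rho = 2.0                                       # pointing-error severity
h_t = rng.gamma(alpha, 1 / alpha, N) * rng.gamma(beta, 1 / beta, N)
h_p = rng.uniform(size=N) ** (1.0 / rho**2)     # pdf ∝ h^(rho^2 - 1) on [0, 1]
gamma2_bar = 10.0
g2 = gamma2_bar * (h_t * h_p) ** 2 / np.mean((h_t * h_p) ** 2)  # IM/DD-style SNR

# CSI-assisted AF relay: harmonic-mean-like end-to-end SNR.
g_eq = g1 * g2 / (g1 + g2 + 1.0)

gamma_th = 1.0                                  # outage threshold (0 dB)
print("outage probability ≈", np.mean(g_eq < gamma_th))
```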

  7. Performance Analysis of Multi-Hop Heterodyne FSO Systems over Malaga Turbulent Channels with Pointing Error Using Mixture Gamma Distribution

    KAUST Repository

    Alheadary, Wael Ghazy

    2017-11-16

    This work investigates the end-to-end performance of a free-space optical amplify-and-forward relaying system using heterodyne detection over Malaga turbulence channels in the presence of pointing errors. In order to overcome the analytical difficulties of the proposed composite channel model, we employ the mixture Gamma (MG) distribution. The proposed model yields a highly accurate and tractable approximation just by adjusting some parameters. More specifically, we derive a new closed-form expression for the average bit error rate employing rectangular quadrature amplitude modulation in terms of the MG distribution and generalized power series of the Meijer's G-function. The closed-form expression has been validated numerically and asymptotically at high signal-to-noise ratio.

  8. Performance analysis of multihop heterodyne free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2017-09-21

    This work investigates the end-to-end performance of a free-space optical amplify-and-forward (AF) channel-state-information (CSI)-assisted relaying system using heterodyne detection over Malaga turbulence channels in the presence of pointing errors, employing rectangular quadrature amplitude modulation (R-QAM). More specifically, we present exact closed-form expressions for the average bit error rate for adaptive/non-adaptive modulation, the achievable spectral efficiency, and the ergodic capacity by utilizing generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work in the high-power regime. In addition, all the presented analytical results are illustrated using a selected set of numerical results. Moreover, we apply the bisection method to find the optimum beam width for the proposed FSO system.

  9. Impact of Pointing Errors on the Performance of Mixed RF/FSO Dual-Hop Transmission Systems

    KAUST Repository

    Ansari, Imran Shafique

    2013-02-20

    In this work, the performance analysis of a dual-hop relay transmission system composed of asymmetric radio-frequency (RF)/free-space optical (FSO) links with pointing errors is presented. More specifically, we build on the system model presented in [1] to derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the higher-order amount of fading, average error rate for binary and M-ary modulation schemes, and the ergodic capacity, all in terms of Meijer's G functions. Our new analytical results were also verified via computer-based Monte-Carlo simulation results.

  10. Errors in spectroscopic measurements of SO2 due to nonexponential absorption of laser radiation, with application to the remote monitoring of atmospheric pollutants

    International Nuclear Information System (INIS)

    Brassington, D.J.; Moncrieff, T.M.; Felton, R.C.; Jolliffe, B.W.; Marx, B.R.; Rowley, W.R.C.; Woods, P.T.

    1984-01-01

    Methods of measuring the concentration of atmospheric pollutants by laser absorption spectroscopy, such as differential absorption lidar (DIAL) and integrated long-path techniques, all rely on the validity of Beer's exponential absorption law. It is shown here that departures from this law occur if the probing laser has a bandwidth larger than the wavelength scale of structure in the absorption spectrum of the pollutant. A comprehensive experimental and theoretical treatment of the errors resulting from these departures is presented for the particular case of SO2 monitoring at approximately 300 nm. It is shown that the largest error occurs where the initial calibration measurement of absorption cross section is made at low pressure, in which case errors in excess of 5% in the cross section could occur for laser bandwidths >0.01 nm. Atmospheric measurements by DIAL or long-path methods are in most cases affected less, because pressure broadening smears the spectral structure, but when measuring high concentrations errors can exceed 5%
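    The departure from Beer's law arises because the transmission must be averaged over the laser line before taking the logarithm: the band-averaged transmission of a structured spectrum exceeds the transmission at the mean cross section, so the apparent optical depth is underestimated. A toy numerical illustration follows; the sinusoidal cross-section structure is invented for the demonstration and is not the real SO2 spectrum near 300 nm.

```python
import numpy as np

# Toy cross-section with 0.01 nm structure (invented, not the SO2 spectrum).
lam = np.linspace(-0.15, 0.15, 12001)               # wavelength offset (nm)
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * lam / 0.01)  # arbitrary units

def apparent_od(bandwidth_nm, NL=2.0):
    """Measured optical depth -ln(T) for a Gaussian laser line profile."""
    g = np.exp(-0.5 * (lam / bandwidth_nm) ** 2)
    g /= np.trapz(g, lam)                            # normalize line profile
    T = np.trapz(g * np.exp(-sigma * NL), lam)       # band-averaged transmission
    return -np.log(T)

true_od = 2.0                                        # sigma_bar * N * L
for bw in (0.001, 0.01, 0.05):                       # laser bandwidth (nm)
    print(f"bandwidth {bw} nm: apparent OD {apparent_od(bw):.3f} "
          f"vs true {true_od:.3f}")
# Bandwidths well below the 0.01 nm structure recover the true optical depth;
# broader lines underestimate it, i.e. Beer's law fails.
```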

  11. Measuring uncertainty in dose delivered to the cochlea due to setup error during external beam treatment of patients with cancer of the head and neck

    Energy Technology Data Exchange (ETDEWEB)

    Yan, M.; Lovelock, D.; Hunt, M.; Mechalakos, J.; Hu, Y.; Pham, H.; Jackson, A., E-mail: jacksona@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065 (United States)

    2013-12-15

    Purpose: To use Cone Beam CT scans obtained just prior to treatments of head and neck cancer patients to measure the setup error and cumulative dose uncertainty of the cochlea. Methods: Data from 10 head and neck patients with 10 planning CTs and 52 Cone Beam CTs taken at time of treatment were used in this study. Patients were treated with conventional fractionation using an IMRT dose painting technique, most with 33 fractions. Weekly radiographic imaging was used to correct the patient setup. The authors used rigid registration of the planning CT and Cone Beam CT scans to find the translational and rotational setup errors, and the spatial setup errors of the cochlea. The planning CT was rotated and translated such that the cochlea positions match those seen in the cone beam scans, cochlea doses were recalculated and fractional doses accumulated. Uncertainties in the positions and cumulative doses of the cochlea were calculated with and without setup adjustments from radiographic imaging. Results: The mean setup error of the cochlea was 0.04 ± 0.33 or 0.06 ± 0.43 cm for RL, 0.09 ± 0.27 or 0.07 ± 0.48 cm for AP, and 0.00 ± 0.21 or −0.24 ± 0.45 cm for SI with and without radiographic imaging, respectively. Setup with radiographic imaging reduced the standard deviation of the setup error by roughly 1–2 mm. The uncertainty of the cochlea dose depends on the treatment plan and the relative positions of the cochlea and target volumes. Combining results for the left and right cochlea, the authors found the accumulated uncertainty of the cochlea dose per fraction was 4.82 (0.39–16.8) cGy, or 10.1 (0.8–32.4) cGy, with and without radiographic imaging, respectively; the percentage uncertainties relative to the planned doses were 4.32% (0.28%–9.06%) and 10.2% (0.7%–63.6%), respectively. Conclusions: Patient setup error introduces uncertainty in the position of the cochlea during radiation treatment. With the assistance of radiographic imaging during setup

  12. Effects on Performance and Work Quality due to Low Frequency Ventilation Noise

    Science.gov (United States)

    Persson Waye, K.; Rylander, R.; Benton, S.; Leventhall, H. G.

    1997-08-01

    A pilot study was carried out to assess a method for evaluating the effects of low frequency noise on performance. Of special interest was to study objective and subjective effects over time. Two ventilation noises were used, one of a predominantly mid frequency character and the other of a predominantly low frequency character. Both had an NC value of 35. For the study, 50 students were recruited and 30 selected on the basis of subjective reports of pressure on the eardrum after exposure to a low frequency noise. Of these, 14 randomly selected subjects aged between 21 and 34 took part. The subjects performed three computerized cognitive tests in the mid frequency or the low frequency noise condition alternatively. Tests I and II were performed together with a secondary task. Questionnaires were used to evaluate subjective symptoms, effects on mood and estimated interference with the test results due to temperature, light and noise. The results showed that the subjective estimations of noise interference with performance were higher for the low frequency noise (p < 0.05), as were the reported effects on mood dimensions such as social orientation (p < 0.05), for the exposure times studied. The results further indicate that the NC curves do not fully assess the negative effects of low frequency noise on work performance.

  13. Distribucijske greške u procesu procjene performansi zaposlenih/Distribution errors in the employee performance evaluation process

    Directory of Open Access Journals (Sweden)

    Vesko M. Lukovac

    2014-10-01

    ... aimed at improving performance. Since the employee performance evaluation is most often the result of an evaluator's subjective judgment about the quality of an employee's work, one has to take into account possible errors that characterize such a way of judging. There are several types of errors that an evaluator can commit during the performance evaluation process. This paper presents an approach to the identification and reduction of evaluators' distribution errors, which are most widespread in organizations with a large number of employees.

  14. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best N branches out of the L available diversity resources (N ≤ L). In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
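    A minimal Monte-Carlo sketch of the threshold-based rule analyzed here, assuming Nakagami-m branches (Gamma-distributed SNRs), MRC combining of the branches above the threshold, and a fall-back to the strongest branch when none qualifies (one common convention; the paper's exact handling of that case may differ). Coherent BPSK is used as the example modulation, and NumPy/SciPy are assumed available.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
N, L = 500_000, 4                      # trials, diversity branches
m, gbar = 1.5, 5.0                     # Nakagami parameter, mean branch SNR
g = rng.gamma(m, gbar / m, size=(N, L))

for thresh in (0.5, 2.0, 8.0):
    above = g >= thresh
    combined = np.where(above, g, 0.0).sum(axis=1)   # MRC of qualifying branches
    none = ~above.any(axis=1)
    combined[none] = g[none].max(axis=1)             # fall back to best branch
    ber = 0.5 * erfc(np.sqrt(combined)).mean()       # coherent BPSK BER
    n_used = np.where(none, 1, above.sum(axis=1)).mean()
    print(f"threshold {thresh}: BER ≈ {ber:.2e}, avg branches used {n_used:.2f}")
# Raising the threshold trades BER for fewer combined branches, the
# performance/complexity compromise the paper analyzes in closed form.
```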

  15. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    Science.gov (United States)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
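    To see how the headline numbers can coexist, note that (assuming the usual definitions: failure rate = cross section × particle flux, and MWBF = workload rate / failure rate) a 5,000× throughput gain dominates an 8× cross-section increase. The absolute numbers below are invented for the arithmetic, not the paper's measurements.

```python
# Back-of-the-envelope MWBF arithmetic (illustrative values only), assuming:
#   failure_rate = cross_section * flux
#   MWBF         = workload_rate / failure_rate
flux = 1.0e4                                # particles / (cm^2 * s), invented

base_sigma, base_rate = 1.0e-10, 1.0e6      # cm^2, operations/s (invented)
hls_sigma, hls_rate = 8 * base_sigma, 5000 * base_rate   # 8x sigma, 5000x perf

def mwbf(rate, sigma):
    return rate / (sigma * flux)            # operations between failures

print("baseline MWBF:", mwbf(base_rate, base_sigma))
print("HLS MWBF     :", mwbf(hls_rate, hls_sigma))
print("gain         :", mwbf(hls_rate, hls_sigma) / mwbf(base_rate, base_sigma))
# -> gain of 5000 / 8 = 625x in workload between failures
```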

  16. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  17. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Seong, Poong Hyun

    2015-01-01

    Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis in determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using a logistic regression and stepwise variable selection was proposed, and the effects of the PSF on HEPs related with the soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitation of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using a logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
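    A minimal sketch of the statistical machinery described (logistic regression on binary error/no-error outcomes, with PSF effects reported as multiplicative factors via odds ratios). The synthetic data, PSF names, and effect sizes are placeholders rather than the study's experimental records, and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600                                   # roughly the number of error opportunities

# Synthetic PSF surrogates (placeholders, not the study's data):
proc_poor = rng.integers(0, 2, n)         # procedure quality: 1 = poor
practice_low = rng.integers(0, 2, n)      # practice level:    1 = low
X = sm.add_constant(np.column_stack([proc_poor, practice_low]))

# Assumed "true" effects, used only to generate the toy outcomes.
logit = -3.0 + 1.2 * proc_poor + 0.8 * practice_low
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

fit = sm.Logit(y, X).fit(disp=0)
print(fit.summary(xname=["const", "proc_quality_poor", "practice_low"]))
# Exponentiated coefficients give the multiplicative effect of each PSF
# on the error odds, mirroring the paper's "multiplicative form".
print("odds ratios:", np.exp(fit.params))
```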

  18. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    ...Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important...

  19. Identification and Assessment of Human Error Due to Design in Damaging the Sour Water Equipment and SRP Unit of the Control Room in a Refinery Plant Using the SHERPA Technique

    Directory of Open Access Journals (Sweden)

    2013-02-01

    Conclusion: To prevent and control the occurrence of each of the identified errors and to limit their consequences, appropriate countermeasures should be applied, such as control measures in the form of design changes, including the installation of appropriately colored tags, digital indicators, and warning lights suited to the kind of system. Consequently, this study showed that SHERPA can be an efficient method for studying human error at operational sites.

  20. An FDTD Study of Errors in Magnetic Direction Finding of Lightning Due to the Presence of Conducting Structure Near the Field Measuring Station

    Directory of Open Access Journals (Sweden)

    Yosuke Suzuki

    2016-07-01

    Lightning electromagnetic fields in the presence of a conducting (grounded) structure having a height of 60 m and a square cross-section of 40 m × 40 m within about 100 m of the observation point are analyzed using the 3D finite-difference time-domain (FDTD) method. The influence of the conducting structure on the two orthogonal components of the magnetic field is analyzed, and the resultant errors in the estimated lightning azimuth are evaluated. The influences of ground conductivity and lightning current waveshape parameters are also examined. When the azimuth vector passes through the center of the conducting structure diagonally (e.g., the azimuth angle is 45°) or parallel to its walls (e.g., the azimuth angle is 0°), the presence of the conducting structure equally influences Hx and Hy, so that Hx/Hy is the same as in the absence of the structure. Therefore, no azimuth error occurs in those configurations. When the conducting structure is not located on the azimuth vector, the structure influences Hx and Hy differently, with the resultant direction finding error being greater when the structure is located closer to the observation point.

  1. Team Performance and Error Management in Chinese and American Simulated Flight Crews: The Role of Cultural and Individual Differences

    Science.gov (United States)

    Davis, Donald D.; Bryant, Janet L.; Tedrow, Lara; Liu, Ying; Selgrade, Katherine A.; Downey, Heather J.

    2005-01-01

    This report describes results of a study conducted for NASA-Langley Research Center. This study is part of a program of research conducted for NASA-LARC that has focused on identifying the influence of national culture on the performance of flight crews. We first reviewed the literature devoted to models of teamwork and team performance, crew resource management, error management, and cross-cultural psychology. Davis (1999) reported the results of this review and presented a model that depicted how national culture could influence teamwork and performance in flight crews. The second study in this research program examined accident investigations of foreign airlines in the United States conducted by the National Transportation Safety Board (NTSB). The ability of cross-cultural values to explain national differences in flight outcomes was examined. Cultural values were found to covary in a predicted way with national differences, but the absence of necessary data in the NTSB reports and limitations in the research method that was used prevented a clear understanding of the causal impact of cultural values. Moreover, individual differences such as personality traits were not examined in this study. Davis and Kuang (2001) report results of this second study. The research summarized in the current report extends this previous research by directly assessing cultural and individual differences among students from the United States and China who were trained to fly in a flight simulator using desktop computer workstations. The research design used in this study allowed delineation of the impact of national origin, cultural values, personality traits, cognitive style, shared mental model, and task workload on teamwork, error management and flight outcomes. We briefly review the literature that documents the importance of teamwork and error management and its impact on flight crew performance. We next examine teamwork and crew resource management training designed to improve

  2. Human Error Probabilities (HEPs) for generic tasks and Performance Shaping Factors (PSFs) selected for railway operations

    DEFF Research Database (Denmark)

    Thommesen, Jacob; Andersen, Henning Boje

    This report describes an HRA (Human Reliability Assessment) of six generic tasks and four Performance Shaping Factors (PSFs) targeted at railway operations commissioned by Banedanmark. The selection and characterization of generic tasks and PSFs are elaborated by DTU Management in close...

  3. On the Performance of Multihop Heterodyne FSO Systems With Pointing Errors

    KAUST Repository

    Zedini, Emna; Alouini, Mohamed-Slim

    2015-01-01

    This paper reports the end-to-end performance analysis of a multihop free-space optical system with amplify-and-forward (AF) channel-state-information (CSI)-assisted or fixed-gain relays using heterodyne detection over Gamma–Gamma turbulence fading

  4. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
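    The first objective, single-byte burst and gap statistics, amounts to run-length analysis of a per-byte error flag stream. A sketch with synthetic flags follows; a real measurement would take the flags from the player's read channel rather than a random generator.

```python
import numpy as np
from itertools import groupby

rng = np.random.default_rng(3)
# Synthetic per-byte error flags (1 = byte in error). A real measurement
# would supply one flag per decoded byte from the CD read channel.
flags = (rng.random(100_000) < 0.002).astype(int)
flags[5000:5040] = 1                       # inject one 40-byte error burst

bursts, gaps = [], []                      # run lengths of bad / good bytes
for val, run in groupby(flags):
    length = sum(1 for _ in run)
    (bursts if val else gaps).append(length)

print("bursts:", len(bursts), "max burst:", max(bursts))
print("gaps  :", len(gaps), "mean gap :", np.mean(gaps))
```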

  5. Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System

    Science.gov (United States)

    Douglas, Grace L.; Cooper, Maya; Bermudez-Aguirre, Daniela; Sirmons, Takiyah

    2016-01-01

    NASA is preparing for long duration manned missions beyond low-Earth orbit that will be challenged in several ways, including long-term exposure to the space environment, impacts to crew physiological and psychological health, limited resources, and no resupply. The food system is one of the most significant daily factors that can be altered to improve human health and performance during space exploration. Therefore, the paramount importance of determining the methods, technologies, and requirements to provide a safe, nutritious, and acceptable food system that promotes crew health and performance cannot be overstated. The processed and prepackaged food system is the main source of nutrition for the crew; therefore, significant losses in nutrition, either through degradation of nutrients during processing and storage or inadequate food intake due to low acceptability, variety, or usability, may significantly compromise the crew's health and performance. Shelf life studies indicate that key nutrients and quality factors in many space foods degrade to concerning levels within three years, suggesting that the food system will not meet the nutrition and acceptability requirements of a long duration mission beyond low-Earth orbit. Likewise, mass and volume evaluations indicate that the current food system is a significant resource burden. Alternative provisioning strategies, such as inclusion of bioregenerative foods, are challenged by resource requirements and food safety and scarcity concerns. Ensuring provisioning of an adequate food system relies not only upon determining technologies and requirements for nutrition, quality, and safety, but upon establishing a food system that will support nutritional adequacy, even with individual crew preference and self-selection. In short, the space food system is challenged to maintain safety, nutrition, and acceptability for all phases of an exploration mission within resource constraints. This document presents the

  6. GASNet-EX Performance Improvements Due to Specialization for the Cray Aries Network

    Energy Technology Data Exchange (ETDEWEB)

    Hargrove, Paul H.; Bonachea, Dan

    2018-03-27

    This document is a deliverable for milestone STPM17-6 of the Exascale Computing Project, delivered by WBS 2.3.1.14. It reports on the improvements in performance observed on Cray XC-series systems due to enhancements made to the GASNet-EX software. These enhancements, known as “specializations”, primarily consist of replacing network-independent implementations of several recently added features with implementations tailored to the Cray Aries network. Performance gains from specialization include (1) Negotiated-Payload Active Messages improve bandwidth of a ping-pong test by up to 14%, (2) Immediate Operations reduce running time of a synthetic benchmark by up to 93%, (3) non-bulk RMA Put bandwidth is increased by up to 32%, (4) Remote Atomic performance is 70% faster than the reference on a point-to-point test and allows a hot-spot test to scale robustly, and (5) non-contiguous RMA interfaces see up to 8.6x speedups for an intra-node benchmark and 26% for inter-node. These improvements are available in the GASNet-EX 2018.3.0 release.

  7. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    Science.gov (United States)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for
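    The error properties listed above (probability of detection, false alarms, hit bias) reduce to simple contingency counts between the satellite estimate and a reference; a minimal sketch with synthetic rain series standing in for the real SM2RAIN and gauge products:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Synthetic reference (gauge) rain: wet ~30% of the time, otherwise zero.
ref = rng.gamma(0.5, 4.0, n) * (rng.random(n) < 0.3)
# Synthetic satellite estimate: multiplicative noise plus occasional
# spurious detections (invented error structure, for illustration only).
sat = np.clip(ref * rng.lognormal(0.0, 0.5, n)
              + (rng.random(n) < 0.05) * rng.gamma(0.5, 4.0, n), 0, None)

thr = 0.1                                   # rain / no-rain threshold (mm)
hit   = (ref >= thr) & (sat >= thr)
miss  = (ref >= thr) & (sat <  thr)
false = (ref <  thr) & (sat >= thr)

pod = hit.sum() / (hit.sum() + miss.sum())        # probability of detection
far = false.sum() / (hit.sum() + false.sum())     # false alarm ratio
hit_bias = (sat[hit] - ref[hit]).mean()           # bias conditional on hits

print(f"POD {pod:.2f}  FAR {far:.2f}  hit bias {hit_bias:.2f} mm")
```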

  8. River-bed erosion due to changing boundary conditions: performance of a protective measure

    Directory of Open Access Journals (Sweden)

    D. Termini

    2014-09-01

    Due to the introduction of man-made sediment barriers along a river, the amount of sediment load entering the downstream river reach is different from that leaving the reach, and erosion processes occur downstream of the barrier itself. Designers are often required to take into account the scouring process and to include adequate protective measures against the local scour. This paper addresses the performance of bio-engineering protective measures against the erosion process. In particular, a green carpet, realized with real flexible vegetation, has been used as the protective measure against erosion processes downstream of a rigid bed. Analyses are based on experimental work carried out in a straight channel constructed at the laboratory of the Dipartimento di Ingegneria Civile, Ambientale, Aerospaziale, dei Materiali, Palermo University (Italy).

  9. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval...
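    In this calculus, the overall uncertainty of the mean of n repeated measurements is, schematically, the width of a Student-t confidence interval for the random scatter plus a worst-case bound on the unknown systematic error. A sketch of that decomposition follows; the notation is assumed for illustration and is not quoted from the book.

```latex
% Schematic combined uncertainty: random part from a Student-t confidence
% interval, systematic part bounded in the worst case by f_s (assumed notation).
u_{\bar{x}} \;=\;
  \underbrace{\frac{t_{P}(n-1)\, s}{\sqrt{n}}}_{\text{random errors}}
  \;+\;
  \underbrace{f_{s}}_{\text{unknown systematic error}},
\qquad
\bar{x} - u_{\bar{x}} \;\le\; x_{\text{true}} \;\le\; \bar{x} + u_{\bar{x}} .
```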

  10. On the Performance of Free-Space Optical Systems over Generalized Atmospheric Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-03-01

    Generalized fading has been an imminent part and parcel of wireless communications. It not only characterizes the wireless channel appropriately but also allows its utilization for further performance analysis of various types of wireless communication systems. Under the umbrella of generalized fading channels, a unified performance analysis of a free-space optical (FSO) link over the Malaga (M) atmospheric turbulence channel that accounts for pointing errors and both types of detection techniques (i.e. indirect modulation/direct detection (IM/DD) as well as heterodyne detection) is presented. Specifically, unified exact closed-form expressions for the probability density function (PDF), the cumulative distribution function (CDF), the moment generating function (MGF), and the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system are presented, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. Then, capitalizing on these unified results, unified exact closed-form expressions for various performance metrics of FSO link transmission systems are offered, such as the outage probability (OP), the higher-order amount of fading (AF), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where closed-form lower bound results are presented), all in terms of Meijer's G functions except for the higher-order AF, which is in terms of simple elementary functions. Additionally, asymptotic results are derived for all the expressions derived earlier in terms of the Meijer's G function in the high SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer's G function. Furthermore, new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes are derived in terms of simple elementary functions via utilizing moments. All the presented results are

  11. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem, and regularization methods are frequently used to improve its stability. Recently, a new method that provides a good compromise between precision and vertical resolution was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements through its implementation in the operational retrieval code for the satellite limb-emission measurements of the MIPAS instrument, and its performance is quantitatively analyzed. The adopted regularization improves the stability of the retrieval, providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements, the retrieval procedure provides a vertical resolution that, in the troposphere and low stratosphere, is smaller than the vertical field of view of the instrument.

  12. Evaluation of sealing performance of metal cask subjected to vertical impact load due to aircraft engine

    International Nuclear Information System (INIS)

    Namba, Kosuke; Shirai, Koji; Saegusa, Toshiari

    2010-01-01

    To confirm the sealing performance of a metal cask subjected to impact force due to a commercial aircraft crash against a spent fuel storage facility, a vertical impact test was carried out. In this test, a simplified deformable missile was used, designed to match the rigidity of an actual aircraft engine, and accelerated to the specified impact velocity (60 m/s) to hit the full-scale lid structure with the primary and secondary lids. Then, the leak rate, the inner pressure between the lids, and the displacement of the lids were measured. The leak rate of the secondary lid exceeded 1.0×10⁻³ Pa·m³/s upon impact. However, because no residual lid opening displacement occurred after loading, the leak rate recovered to less than 1.0×10⁻⁶ Pa·m³/s by 3 h after the impact test. In addition, to clarify the impact behaviour of the lid structure, an impact analysis using the LS-DYNA code was executed. It was found that the lid bolts maintained a good tightening force after impact loading, and that the sealing performance of the full-scale metal cask would not be affected immediately by the vertical impact of an aircraft engine at a speed of 60 m/s. (author)

  13. Impaired Performance of Pressure-Retarded Osmosis due to Irreversible Biofouling.

    Science.gov (United States)

    Bar-Zeev, Edo; Perreault, François; Straub, Anthony P; Elimelech, Menachem

    2015-11-03

    Next-generation pressure-retarded osmosis (PRO) approaches aim to harness the energy potential of streams with high salinity differences, such as wastewater effluent and seawater desalination plant brine. In this study, we evaluated biofouling propensity in PRO. Bench-scale experiments were carried out for 24 h using a model wastewater effluent feed solution and simulated seawater desalination brine pressurized to 24 bar. For biofouling tests, wastewater effluent was inoculated with Pseudomonas aeruginosa and artificial seawater desalination plant brine draw solution was seeded with Pseudoalteromonas atlantica. Our results indicate that biological growth in the feed wastewater stream channel severely fouled both the membrane support layer and feed spacer, resulting in ∼50% water flux decline. We also observed an increase in the pumping pressure required to force water through the spacer-filled feed channel, with pressure drop increasing from 6.4±0.8 bar m⁻¹ to 15.1±2.6 bar m⁻¹ due to spacer blockage from the developing biofilm. Neither the water flux decline nor the increased pressure drop in the feed channel could be reversed using a pressure-aided osmotic backwash. In contrast, biofouling in the seawater brine draw channel was negligible. Overall, the reduced performance due to water flux decline and increased pumping energy requirements from spacer blockage highlight the serious challenges of using high fouling potential feed sources in PRO, such as secondary wastewater effluent. We conclude that PRO power generation using wastewater effluent and seawater desalination plant brine may become possible only with rigorous pretreatment or new spacer and membrane designs.

  14. On the performance of mixed RF/FSO variable gain dual-hop transmission systems with pointing errors

    KAUST Repository

    Ansari, Imran Shafique

    2013-09-01

    In this work, the performance analysis of a dual-hop relay transmission system composed of asymmetric radio-frequency (RF) and unified free-space optical (FSO) links subject to pointing errors is presented. These unified FSO links account for both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we derive new exact closed-form expressions for the cumulative distribution function, probability density function, moment generating function, and moments of the end-to-end signal-to-noise ratio of these systems in terms of the Meijer's G function. We then capitalize on these results to offer new exact closed-form expressions for the outage probability, higher-order amount of fading, average error rate for binary and M-ary modulation schemes, and ergodic capacity, all in terms of Meijer's G functions. All our new analytical results are verified via computer-based Monte-Carlo simulations. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc.

  15. Impact of unilateral conductive hearing loss due to aural atresia on academic performance in children.

    Science.gov (United States)

    Kesser, Bradley W; Krook, Kaelyn; Gray, Lincoln C

    2013-09-01

    This study evaluates the effect of unilateral conductive hearing loss secondary to aural atresia on elementary school children's academic performance. Case control survey and review of audiometric data. One hundred thirty-two surveys were mailed to families of children with aural atresia, and 48 surveys were sent to families of children with unilateral sensorineural hearing loss (SNHL) to identify rates of grade retention, use of any resource, and behavioral problems. Audiometric data of the cohort were tabulated. Of the 40 atresia patients, none repeated a grade, but 65% needed some resources: 12.5% currently use a hearing aid, 32.5% use(d) a frequency-modulated system in school, 47.5% had an Individualized Education Plan, and 45% utilized speech therapy. Compared to the unilateral SNHL group and a cohort of children with unilateral SNHL in an earlier study, children with unilateral atresia were less likely to repeat a grade. Children in both unilateral atresia and SNHL groups were more likely to utilize some resource in the academic setting compared to the unilateral SNHL children in the prior study. Unilateral conductive hearing loss due to aural atresia has an impact on academic performance in children, although not as profound when compared to children with unilateral SNHL. The majority of these children with unilateral atresia utilize resources in the school setting. Parents, educators, and health care professionals should be aware of the impact of unilateral conductive hearing loss and offer appropriate habilitative services. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  16. Post-error expression of speed and force while performing a simple, monotonous task with a haptic pen

    NARCIS (Netherlands)

    Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.

    2013-01-01

    Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines

  17. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
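
    As a hedged illustration of the Monte Carlo error-propagation idea (an assumed toy two-factor model, not the authors' nasal spray DOE), one can jitter both the inputs and the responses, refit the model many times, and read the coefficient uncertainty off the spread of the fits:

```python
# Toy Monte Carlo propagation for a hypothetical 2-factor DOE model
# (not the authors' nasal spray model): jitter inputs and responses,
# refit repeatedly, and read coefficient uncertainty off the spread.
import numpy as np

rng = np.random.default_rng(0)

X_nom = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1]] * 3, float)
beta_true = np.array([10.0, 2.0, -1.5])       # assumed true coefficients
y_nom = X_nom @ beta_true

sig_x, sig_y = 0.05, 0.2                      # assumed input/response sd
coefs = []
for _ in range(5000):
    X = X_nom.copy()
    X[:, 1:] += rng.normal(0, sig_x, X[:, 1:].shape)   # input variation
    y = y_nom + rng.normal(0, sig_y, size=len(y_nom))  # measurement noise
    coefs.append(np.linalg.lstsq(X, y, rcond=None)[0])

print("Monte Carlo coefficient sd:", np.std(coefs, axis=0))
```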

  18. Identifying subassemblies by ultrasound to prevent fuel handling error in sodium fast reactors: First test performed in water

    International Nuclear Information System (INIS)

    Paumel, Kevin; Lhuillier, Christian

    2015-01-01

    Identifying subassemblies by ultrasound is a method that is being considered to prevent handling errors in sodium fast reactors. It is based on the reading of a code (aligned notches) engraved on the subassembly head by an emitting/receiving ultrasonic sensor. This reading is carried out in sodium with high temperature transducers. The resulting one-dimensional C-scan can be likened to a binary code expressing the subassembly type and number. The first test performed in water investigated two parameters: width and depth of the notches. The code remained legible for notches as thin as 1.6 mm wide. The impact of the depth seems minor in the range under investigation. (authors)

  19. A Unified Performance Analysis of Free-Space Optical Links over Gamma-Gamma Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2013-11-13

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single link FSO transmission system, all in terms of the Meijer's G function except for the moments, which are given in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the higher-order amount of fading (AF), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity, all in terms of Meijer's G functions except for the higher-order AF, which is in terms of simple elementary functions. Additionally, we derive asymptotic results in the high SNR regime for all the expressions given earlier in terms of the Meijer's G function, reducing them to simple elementary functions via an asymptotic expansion of the Meijer's G function. We also derive new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.
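
    For reference, the turbulence-only part of such a model has a well-known elementary form: the Gamma-Gamma irradiance PDF. A short sketch (the parameters a, b are chosen arbitrarily for illustration; the pointing-error extension treated in the paper is more involved):

```python
# Standard Gamma-Gamma irradiance PDF (turbulence only, no pointing errors),
# the building block behind analyses like this one; a, b are illustrative.
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import trapezoid

def gamma_gamma_pdf(I, a, b):
    """PDF of normalized irradiance I (E[I] = 1) under Gamma-Gamma fading."""
    c = 2 * (a * b) ** ((a + b) / 2) / (gamma(a) * gamma(b))
    return c * I ** ((a + b) / 2 - 1) * kv(a - b, 2 * np.sqrt(a * b * I))

I = np.linspace(0.01, 6, 600)
pdf = gamma_gamma_pdf(I, a=4.0, b=1.9)     # moderate turbulence (assumed)
print(trapezoid(pdf, I))                   # ~1, sanity check
```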

  20. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentrations of this material. This presentation focuses on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and errors of the last three types, called systematic errors.

  1. PERFORMANCE DETERIORATION OF THERMOSIPHON SOLAR FLAT PLATE WATER HEATER DUE TO SCALING

    Directory of Open Access Journals (Sweden)

    Arunachala Umesh Chandavar

    2011-12-01

    The performance of flat plate solar water heaters deteriorates within five to twelve years of installation due to factors related to manufacturing, operating conditions, lack of maintenance, etc. The problem of scaling is especially significant, as it depends on the quality of the water used; the remaining factors are system dependent and can be overcome by quality production. Software incorporating the Hottel-Whillier-Bliss (H-W-B) equation was developed to ascertain the effect of scaling on system efficiency in a thermosiphon system. For a clean thermosiphon system, the instantaneous efficiency calculated at 1000 W/m² radiation is 72%, and it drops to 46% for a 3.7 mm scale thickness. The mass flow rate is reduced by 90% for a 3.7 mm scale thickness, whereas the average temperature drop of water in the tank is not critical, owing to the considerable heat content of the water even under severely scaled conditions. In practice, however, under major scale growth some of the risers are likely to become completely blocked, which leads to negligible temperature rise in the tank.
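
    For context, the Hottel-Whillier-Bliss relation referred to above is the standard flat-plate collector efficiency equation:

$$\eta \;=\; \frac{Q_u}{A_c\,G_T} \;=\; F_R(\tau\alpha)\;-\;F_R U_L\,\frac{T_i - T_a}{G_T}$$

    where F_R is the heat removal factor, (τα) the transmittance-absorptance product, U_L the overall loss coefficient, T_i the inlet (tank) temperature, T_a the ambient temperature, and G_T the incident irradiance. Presumably the software ties scale thickness to efficiency through the degradation of the collector-side heat transfer and the thermosiphon flow entering this relation.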

  2. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science and Engineering Group, San Diego, CA (United States)] [and others]

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  3. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    International Nuclear Information System (INIS)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  4. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three-axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors.
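
    A hedged sketch of the workflow this abstract describes (the synthesis model below is hypothetical, not the paper's machine model): build a function mapping geometric errors to the volumetric position error magnitude, then differentiate it to rank the error sources:

```python
# Hypothetical error-synthesis model (not the paper's): rank geometric
# error sources by local sensitivity of the volumetric error magnitude.
import numpy as np

def volumetric_error(e):
    # Toy synthesis: position error components as combinations of four
    # geometric errors (e.g., straightness and squareness terms).
    ex = e[0] + 0.5 * e[2]
    ey = e[1] + 0.8 * e[2] + 0.3 * e[3]
    ez = 0.2 * e[0] + e[3]
    return np.sqrt(ex**2 + ey**2 + ez**2)   # output variable: magnitude

e0 = np.array([5e-6, 3e-6, 2e-6, 4e-6])    # nominal error magnitudes (m)
h = 1e-9                                    # finite-difference step
S = np.array([(volumetric_error(e0 + h * np.eye(4)[i]) - volumetric_error(e0)) / h
              for i in range(4)])
print("normalized sensitivity indices:", S / S.sum())
```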

  5. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  6. Performance monitoring in the anterior cingulate is not all error related: expectancy deviation and the representation of action-outcome associations.

    Science.gov (United States)

    Oliveira, Flavio T P; McDonald, John J; Goodman, David

    2007-12-01

    Several converging lines of evidence suggest that the anterior cingulate cortex (ACC) is selectively involved in error detection or evaluation of poor performance. Here we challenge this notion by presenting event-related potential (ERP) evidence that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback and vice versa. These results suggest that performance monitoring in the ACC is not limited to error processing. We propose that the ACC acts as part of a more general performance-monitoring system that is activated by violations in expectancy. Further, we propose that the common observation of increased ACC activity elicited by negative events could be explained by an overoptimistic bias in generating expectations of performance. These results could shed light on neurobehavioral disorders, such as depression and mania, associated with alterations in performance monitoring and also in judgments of self-related events.

  7. Performance Improvement of Membrane Stress Measurement Equipment through Evaluation of Added Mass of Membrane and Error Correction

    Directory of Open Access Journals (Sweden)

    Sang-Wook Jin

    2017-01-01

    One of the most important issues in keeping membrane structures in stable condition is maintaining the proper stress distribution over the membrane. However, it is difficult to determine the quantitative real stress level in the membrane after the completion of the structure. The stress relaxation phenomenon of the membrane, and the fluttering effect due to strong wind or ponding caused by precipitation, may cause severe damage to the membrane structure itself. Therefore, it is very important to know the magnitude of the existing stress in membrane structures for their maintenance. The authors have proposed a new method for separately estimating the membrane stress in two different directions using sound waves instead of directly measuring the membrane stress. The new method utilizes the resonance phenomenon of the membrane, which is induced by sound excitation through an audio speaker. In such experiments, the effect of the surrounding air on the vibrating membrane cannot be overlooked if high measurement precision is to be assured. In this paper, an evaluation scheme for the added mass of the membrane that accounts for the effect of air on the vibrating membrane, together with the correction of measurement error, is discussed. In addition, three types of membrane materials are used in the experiment in order to verify the expandability and accuracy of the membrane measurement equipment.
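
    The physics underlying such sound-excited stress measurement can be sketched with the standard rectangular-membrane resonance formula (an illustrative model with assumed numbers, not the authors' equipment calibration): mode (m, n) of a membrane with surface density ρs under tensions Tx, Ty (N/m) resonates at f_mn = ½·√((Tx·(m/a)² + Ty·(n/b)²)/ρs), so two measured resonances determine the two tensions; the added-mass correction discussed above would replace ρs by ρs plus an air term:

```python
# Illustrative tension estimate from two measured membrane resonances
# (assumed geometry and frequencies; the air added-mass term m_a would be
# added to rho_s in a real calibration).
import numpy as np

a, b = 0.5, 0.4            # membrane side lengths (m), assumed
rho_s = 1.0                # surface density (kg/m^2), assumed
f11, f21 = 40.0, 60.0      # measured (1,1) and (2,1) resonances (Hz), assumed

# 4*rho_s*f_mn^2 = Tx*(m/a)^2 + Ty*(n/b)^2  ->  linear system in Tx, Ty
A = np.array([[(1 / a) ** 2, (1 / b) ** 2],
              [(2 / a) ** 2, (1 / b) ** 2]])
rhs = 4 * rho_s * np.array([f11**2, f21**2])
Tx, Ty = np.linalg.solve(A, rhs)
print(f"Tx = {Tx:.1f} N/m, Ty = {Ty:.1f} N/m")
```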

  8. Effect of Tracking Error of Double-Axis Tracking Device on the Optical Performance of Solar Dish Concentrator

    Directory of Open Access Journals (Sweden)

    Jian Yan

    2018-01-01

    In this paper, a flux distribution model of the focal plane in a dish concentrator system is established based on the ray-tracing method. This model was adopted to research the influence of the mirror slope error, solar direct normal irradiance, and tracking error of the elevation-azimuth tracking device (EATD) on the focal spot characteristics (i.e., flux distribution, geometrical shape, centroid position, and intercept factor). The law by which EATD tracking error is transmitted to the dish concentrator was also studied. The results show that the azimuth tracking error of the concentrator decreases with the increase of the concentrator elevation angle, dropping to 0 mrad when the elevation angle is 90°. The centroid position of the focal spot along the x-axis and y-axis has a linear relationship with the azimuth and elevation tracking error of the EATD, respectively, which could be used to evaluate and calibrate the tracking error of the dish concentrator. Finally, the transmission law of the EATD azimuth tracking error in solar heliostats is analyzed, and a dish concentrator using a spin-elevation tracking device is proposed, which can reduce the effect of spin tracking error on the dish concentrator. This work provides a foundation for allocating the manufacturing precision of tracking devices and for developing a new type of tracking device.

  9. Diagnostic Error of a Patient with Combined Inherited Factor VII and Factor X Deficiency due to Accidental Ingestion of a Diphacinone Rodenticide.

    Science.gov (United States)

    Li, Min; Jin, Yanhui; Wang, Mingshan; Xie, Yaosheng; Ding, Hongxiang

    2016-11-01

    To explore the characteristics of laboratory examination and confirm the diagnosis of a patient with combined inherited FVII and FX deficiency after he accidentally ingested a diphacinone rodenticide. The coagulation screening parameters and coagulation factor activities were tested repeatedly in the patient following the accidental ingestion of a diphacinone rodenticide. After the patient had been treated for more than one year, gene analysis of the correlated coagulation factors was performed in the patient and other family members by direct DNA sequencing; 106 persons from routine health examinations were selected as controls. After the patient was admitted to hospital, routine coagulation screening tests revealed prolonged prothrombin time (PT) and activated partial thromboplastin time (APTT) and low levels of vitamin K-dependent coagulation factor (FII, FVII, FIX, FX) activity: 102.4 seconds, 88.5 seconds, 7%, 3%, 8%, and 2%, respectively. During more than one year of treatment, PT and APTT remained significantly prolonged and FVII and FX activity levels stayed at about 5%, while FII and FIX activity levels returned to the normal range after 12 weeks of treatment. Two homozygous mutations, g.11267C>T of the F7 gene resulting in the substitution Arg277Cys and g.28139G>T of the F10 gene leading to the substitution Val384Phe, were identified in the patient. The patient's parents and sister were heterozygous for the Arg277Cys and Val384Phe mutations. FVII and FX antigen levels in the patient were 7% and 30%, respectively. There were many similarities in the laboratory findings between combined inherited FVII and FX deficiency and acquired vitamin K deficiency; the best way to distinguish them was gene analysis.

  10. Five-Year-Olds’ Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study

    Science.gov (United States)

    Arslan, Burcu; Taatgen, Niels A.; Verbrugge, Rineke

    2017-01-01

    The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question of how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback “Wrong,” they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children’s failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy. PMID:28293206

  11. Five-Year-Olds' Systematic Errors in Second-Order False Belief Tasks Are Due to First-Order Theory of Mind Strategy Selection: A Computational Modeling Study.

    Science.gov (United States)

    Arslan, Burcu; Taatgen, Niels A; Verbrugge, Rineke

    2017-01-01

    The focus of studies on second-order false belief reasoning generally was on investigating the roles of executive functions and language with correlational studies. Different from those studies, we focus on the question of how 5-year-olds select and revise reasoning strategies in second-order false belief tasks by constructing two computational cognitive models of this process: an instance-based learning model and a reinforcement learning model. Unlike the reinforcement learning model, the instance-based learning model predicted that children who fail second-order false belief tasks would give answers based on first-order theory of mind (ToM) reasoning as opposed to zero-order reasoning. This prediction was confirmed with an empirical study that we conducted with 72 5- to 6-year-old children. The results showed that 17% of the answers were correct and 83% of the answers were wrong. In line with our prediction, 65% of the wrong answers were based on a first-order ToM strategy, while only 29% of them were based on a zero-order strategy (the remaining 6% of subjects did not provide any answer). Based on our instance-based learning model, we propose that when children get feedback "Wrong," they explicitly revise their strategy to a higher level instead of implicitly selecting one of the available ToM strategies. Moreover, we predict that children's failures are due to lack of experience and that with exposure to second-order false belief reasoning, children can revise their wrong first-order reasoning strategy to a correct second-order reasoning strategy.
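
    The revision mechanism proposed in these two records can be caricatured in a few lines (a deliberately simplified toy, not the authors' instance-based or reinforcement learning models): start with a first-order strategy and move up one order of theory of mind whenever the feedback is "Wrong":

```python
# Toy caricature of explicit strategy revision (hypothetical simplification
# of the models described above, not the authors' implementations).
def answer(strategy_order, task_order):
    # Assume a strategy answers correctly iff its ToM order covers the task.
    return "correct" if strategy_order >= task_order else "wrong"

strategy = 1          # children start with first-order ToM (per the finding)
task_order = 2        # second-order false belief task
for trial in range(3):
    result = answer(strategy, task_order)
    print(f"trial {trial}: order-{strategy} answer is {result}")
    if result == "wrong":
        strategy += 1  # on "Wrong" feedback, revise to a higher-order strategy
```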

  12. Performance of multi-junction cells due to illumination distribution across the cell surface

    International Nuclear Information System (INIS)

    Schultz, R.D.; Vorster, F.J; Dyk, E.E van

    2012-01-01

    This paper addresses the influence of illumination distribution on the performance of a high concentration photovoltaic (HCPV) module. CPV systems comprise optical elements as well as mechanical tracking to concentrate the solar flux onto the solar receiver and to keep the system tracking the sun. The performance of the subcells of the multi-junction concentrator cell depends on the optical alignment of the system. Raster scanning of the incident intensity in the optical plane of the receiver and corresponding I–V measurements were used to investigate the influence of illumination distribution on performance. The results show that the illumination distribution that differs between cells does affect the performance of the module. The performance of the subcells of the multi-junction concentrator cell also depends on the optical alignment of the system.

  13. Performance of multi-junction cells due to illumination distribution across the cell surface

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, R.D., E-mail: s206029578@live.nmmu.ac.za [Nelson Mandela University, Physics Department, P.O. Box 77000, 6031, Port Elizabeth (South Africa); Vorster, F.J; Dyk, E.E van [Nelson Mandela University, Physics Department, P.O. Box 77000, 6031, Port Elizabeth (South Africa)

    2012-05-15

    This paper addresses the influence of illumination distribution on the performance of a high concentration photovoltaic (HCPV) module. CPV systems comprise optical elements as well as mechanical tracking to concentrate the solar flux onto the solar receiver and to keep the system tracking the sun. The performance of the subcells of the multi-junction concentrator cell depends on the optical alignment of the system. Raster scanning of the incident intensity in the optical plane of the receiver and corresponding I-V measurements were used to investigate the influence of illumination distribution on performance. The results show that the illumination distribution that differs between cells does affect the performance of the module. The performance of the subcells of the multi-junction concentrator cell also depends on the optical alignment of the system.

  14. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
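
    The SS7 link error monitor analyzed here is essentially a leaky-bucket counter. A minimal sketch in that spirit (threshold 64 and a leak every 256 signal units are the commonly cited SUERM defaults; treat the exact behavior as an assumption, not a reproduction of the authors' analysis):

```python
import random

# Leaky-bucket error-rate monitor in the spirit of SS7's SUERM (assumed
# defaults: fail threshold 64, counter leaks by 1 every 256 signal units).
class ErrorRateMonitor:
    def __init__(self, threshold=64, leak_interval=256):
        self.threshold, self.leak_interval = threshold, leak_interval
        self.counter = 0
        self.received = 0

    def on_signal_unit(self, errored):
        """Process one signal unit; return True if the link should fail."""
        self.received += 1
        if errored:
            self.counter += 1
        if self.received % self.leak_interval == 0 and self.counter > 0:
            self.counter -= 1
        return self.counter >= self.threshold

random.seed(1)
mon = ErrorRateMonitor()
for i in range(100_000):
    if mon.on_signal_unit(random.random() < 0.008):   # ~0.8% SU error rate
        print(f"changeover triggered at signal unit {i}")
        break
```

    At error rates just above the leak rate (1/256 here) the counter drifts up slowly, which is precisely the regime in which the paper finds frequent changeover/changeback oscillations.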

  15. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  16. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong

    2009-11-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.

  17. Joint adaptive modulation and diversity combining with feedback error compensation

    KAUST Repository

    Choi, Seyeong; Hong-Chuan, Yang; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2009-01-01

    This letter investigates the effect of feedback error on the performance of the joint adaptive modulation and diversity combining (AMDC) scheme which was previously studied with an assumption of error-free feedback channels. We also propose to utilize adaptive diversity to compensate for the performance degradation due to feedback error. We accurately quantify the performance of the joint AMDC scheme in the presence of feedback error, in terms of the average number of combined paths, the average spectral efficiency, and the average bit error rate. Selected numerical examples are presented and discussed to illustrate the effectiveness of the proposed feedback error compensation strategy with adaptive combining. It is observed that the proposed compensation strategy can offer considerable error performance improvement with little loss in processing power and spectral efficiency in comparison with the no compensation case. Copyright © 2009 IEEE.

  18. Performance analysis of multihop heterodyne free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy; Park, Kihong; Alouini, Mohamed-Slim

    2017-01-01

    employing rectangular quadrature amplitude modulation (R-QAM). More specifically, we present exact closed-form expressions for average bit-error rate for adaptive/non-adaptive modulation, achievable spectral efficiency, and ergodic capacity by utilizing

  19. [Study on the Effects and Compensation Effect of Recording Parameters Error on Imaging Performance of Holographic Grating in On-Line Spectral Diagnose].

    Science.gov (United States)

    Jiang, Yan-xiu; Bayanheshig; Yang, Shuo; Zhao, Xu-long; Wu, Na; Li, Wen-hao

    2016-03-01

    To fabricate a high-resolution grating, a numerical calculation was used to analyze the effect of recording parameter errors on the groove density, focal curve, and imaging performance of the grating, and their compensation. Based on Fermat's principle, the light path function, and aberration theory, the effect on the imaging performance of the grating was analyzed. For fixed use parameters, the error of the recording angle has a greater influence on imaging performance; therefore, increasing the weight of the recording angle can improve the accuracy of the recording angle values in the optimization. The recording distance has little influence on imaging performance. The relative errors of the recording parameters cause changes in the imaging performance of the grating, and the results indicate that recording parameter errors can be compensated by adjusting the corresponding parameters. The study can give theoretical guidance for the fabrication of high-resolution varied-line-space plane holographic gratings in on-line spectral diagnostics, and can reduce alignment difficulty by analyzing the main errors affecting imaging performance and proposing a compensation method.

  20. Can We Predict Cognitive Performance Decrements Due to Sleep Loss and the Recuperative Effects of Caffeine

    Science.gov (United States)

    2015-10-14

    Warfighters are often subjected to challenging sleep/wake schedules that hinder their cognitive performance. Countermeasures ... the integrated UMP would provide another step toward the development of a wearable computer-based system or smartphone app that considers an ...

  1. Evidence Report: Risk of Injury and Compromised Performance due to EVA Operations

    Science.gov (United States)

    Chappell, Steven P.; Norcross, Jason R.; Abercromby, Andrew F. J.; Bekdash, Omar S.; Benson, Elizabeth A.; Jarvis, Sarah L.; Conkin, Johnny; Gernhardt, Michael L.; House, Nancy; Jadwick, Jennifer; et al.

    2017-01-01

    Given the high physiological and functional demands of operating in a self-contained EVA or training suit in various gravity fields and system environments, there is a possibility that crew injury can occur and physiological and functional performance may be compromised.

  2. Impaired laparoscopic performance of novice surgeons due to phone call distraction: a single-centre, prospective study.

    Science.gov (United States)

    Yang, Cui; Heinze, Julia; Helmert, Jens; Weitz, Juergen; Reissfelder, Christoph; Mees, Soeren Torge

    2017-12-01

    Distractions such as phone calls during laparoscopic surgery play an important role in many operating rooms. The aim of this single-centre, prospective study was to assess whether laparoscopic performance is impaired by intraoperative phone calls in novice surgeons. From October 2015 to June 2016, 30 novice surgeons (medical students) underwent a laparoscopic surgery training curriculum including two validated tasks (peg transfer, precision cutting) until achieving a defined level of proficiency. For testing, participants were required to perform these tasks under three conditions: no distraction (control) and two standardised distractions in terms of phone calls requiring response (mild and strong distraction). Task performance was evaluated by analysing time and accuracy of the tasks and response to the phone call. In peg transfer (easy task), mild distraction did not worsen the performance significantly, while strong distraction was linked to error and inefficiency with significantly deteriorated performance (P < 0.05). ... In conclusion, phone call distractions result in impaired laparoscopic performance under certain circumstances. To ensure patient safety, phone calls should be avoided as far as possible in operating rooms.

  3. Impacts on sewer performance due to changes to inputs in domestic wastewater

    OpenAIRE

    Mattsson, Jonathan

    2015-01-01

    The impacts of changes in domestic wastewater inputs on sewer performance have been debated since the dawn of the great sewer construction movement in the 1850s. Nowadays, typical household wastewater that enters sewers can generally be divided into streams from the WC, shower and/or bathtub, kitchen sink, washing machine and dishwasher. Changes in thecomposition of domestic wastewater entering a sewer will depend on inter alia the properties of the appliances used in the households and house...

  4. The gender difference on the Mental Rotations test is not due to performance factors.

    Science.gov (United States)

    Masters, M S

    1998-05-01

    Men score higher than women on the Mental Rotations test (MRT), and the magnitude of this gender difference is the largest of that on any spatial test. Goldstein, Haldane, and Mitchell (1990) reported finding that the gender difference on the MRT disappears when "performance factors" are controlled--specifically, when subjects are allowed sufficient time to attempt all items on the test or when a scoring procedure that controls for the number of items attempted is used. The present experiment also explored whether eliminating these performance factors results in a disappearance of the gender difference on the test. Male and female college students were allowed a short time period or unlimited time on the MRT. The tests were scored according to three different procedures. The results showed no evidence that the gender difference on the MRT was affected by the scoring method or the time limit. Regardless of the scoring procedure, men scored higher than women, and the magnitude of the gender difference persisted undiminished when subjects completed all items on the test. Thus there was no evidence that performance factors produced the gender difference on the MRT. These results are consistent with the results of other investigators who have attempted to replicate Goldstein et al.'s findings.

  5. Modeling and Performance Analysis of Route-Over and Mesh-Under Routing Schemes in 6LoWPAN under Error-Prone Channel Condition

    Directory of Open Access Journals (Sweden)

    Tsung-Han Lee

    2013-01-01

    6LoWPAN technology has attracted extensive attention recently, because 6LoWPAN is an Internet of Things standard that adapts the IPv6 protocol stack to low-rate wireless personal area networks such as IEEE 802.15.4. One view is that the IP architecture is not suitable for low-rate wireless personal area networks. It is a challenge to implement the IPv6 protocol stack in IEEE 802.15.4 devices because the size of an IPv6 packet is much larger than the maximum packet size of IEEE 802.15.4 at the data link layer. In order to solve this problem, 6LoWPAN provides header compression to reduce the transmission overhead for IP packets. In addition, two routing schemes, mesh-under and route-over, are proposed in 6LoWPAN to forward IP fragments over the IEEE 802.15.4 radio link. The distinction is based on which layer of the 6LoWPAN protocol stack is in charge of routing decisions: in the route-over scheme the routing decision is made at the network layer, and in mesh-under by the adaptation layer. Thus, the goal of this research is to understand the performance of the two routing schemes in 6LoWPAN under error-prone channel conditions.
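
    A back-of-the-envelope sketch of why fragmentation (and hence the route-over/mesh-under split) matters: an IPv6 packet at the minimum MTU of 1280 bytes must be carved into many 127-byte IEEE 802.15.4 frames. The MAC overhead below is an assumption; the 4- and 5-byte fragment headers and 8-byte offset granularity follow RFC 4944:

```python
# Rough fragment count for 6LoWPAN over IEEE 802.15.4 (illustrative only).
# RFC 4944: FRAG1 header is 4 B, FRAGN is 5 B, and fragment offsets are in
# 8-byte units; the MAC overhead figure is an assumption.
FRAME = 127           # max 802.15.4 PHY payload (bytes)
MAC_OVERHEAD = 23     # assumed MAC header + security + FCS
FRAG1_HDR, FRAGN_HDR = 4, 5

def fragments(ipv6_len):
    if ipv6_len <= FRAME - MAC_OVERHEAD:
        return 1                                   # fits without fragmentation
    first = (FRAME - MAC_OVERHEAD - FRAG1_HDR) // 8 * 8   # 8-byte aligned
    per_n = (FRAME - MAC_OVERHEAD - FRAGN_HDR) // 8 * 8
    remaining = ipv6_len - first
    return 1 + -(-remaining // per_n)              # ceiling division

print(fragments(1280))   # IPv6 minimum-MTU packet -> ~14 link-layer frames
```

    Losing any one of those frames on an error-prone channel forces recovery of the whole packet, which is why the two routing schemes behave so differently under channel errors.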

  6. Performance estimation of control rod position indicator due to aging of magnet

    International Nuclear Information System (INIS)

    Yu, Je Yong; Kim, Ji Ho; Huh, Hyung; Choi, Myoung Hwan; Sohn, Dong Seong

    2009-01-01

    The Control Element Drive Mechanism (CEDM) for the integral reactor is designed to raise and lower the control rod in steps of 2 mm in order to satisfy the design features of the integral reactor, namely soluble-boron-free operation and the use of nuclear heating for the reactor start-up. The actual position of the control rod is obtained by sensing the magnet connected to the control rod with the position indicator around the upper pressure housing of the CEDM. Position information for the control rod at 20 mm intervals from the position indicator is sufficient for the core safety analysis. As the magnet moves upward along the position indicator assembly from the bottom to the top of the upper pressure housing, the output voltage increases linearly step-wise in 0.2 VDC increments. Between every step there are transient areas caused by the contact closing of three reed switches, i.e., the 2-3-2 contact closing sequence. In this paper, the output voltage signal corresponding to the position of the control rod was estimated for the 2-1-2 contact closing sequence that results from aging of the magnet.

  7. Poultry performance in different grazing densities: forage characteristics, losses due to grazing and feed intake

    Directory of Open Access Journals (Sweden)

    Luciano Cristiano França

    2014-02-01

    Morphological characteristics of three forage species grazed by rustic poultry under stocking were evaluated. Coast-cross fodder, kikuyu grass, and stylosanthes were planted in 33 m² paddocks at two densities (D1 = 3 m²/animal and D2 = 1 m²/animal). The design was a randomized complete block with a 3 × 2 factorial (three grasses and two densities) and three replications. Grass canopy height, grass mass, morphological composition (leaf, stem, and dead material), losses due to grazing, poultry weight gain and consumption, and concentrate feed conversion ratio and efficiency were evaluated. At the end of the experiment, forage and leaf masses were considered low for stylosanthes in D2 (0.28 and 0.03 kg/m²) and for kikuyu grass in D1 (0.13 and 0.05 kg/m²) and D2 (0.11 and 0.03 kg/m²), respectively. In addition, the grass canopy height was considered low for stylosanthes (6.50 cm), which could jeopardize the entry of a new poultry lot. The three grass species produced similar weight gains, with better results at 3 m²/chicken (3.20 kg/animal). Coast-cross fodder, kikuyu grass, and stylosanthes, with some exceptions, can be considered suitable for grazing fattening poultry at 3 m²/animal at the evaluated time of the year (autumn).

  8. Risk of Adverse Health Outcomes and Decrements in Performance Due to In-flight Medical Conditions

    Science.gov (United States)

    Antonsen, Erik

    2017-01-01

    The drive to undertake long-duration space exploration missions at greater distances from Earth gives rise to many challenges concerning human performance under extreme conditions. At NASA, the Human Research Program (HRP) has been established to investigate the specific risks to astronaut health and performance presented by space exploration, in addition to developing necessary countermeasures and technology to reduce risk and facilitate safer, more productive missions in space (NASA Human Research Program 2009). The HRP is divided into five subsections, covering behavioral health, space radiation, habitability, and other areas of interest. Within this structure is the ExMC Element, whose research contributes to the overall development of new technologies to overcome the challenges of expanding human exploration and habitation of space. The risk statement provided by the HRP to the ExMC Element states: "Given that medical conditions/events will occur during human spaceflight missions, there is a possibility of adverse health outcomes and decrements in performance in mission and for long term health" (NASA Human Research Program 2016). Within this risk context, the Exploration Medical Capabilities (ExMC) Element is specifically concerned with establishing evidenced-based methods of monitoring and maintaining astronaut health. Essential to completing this task is the advancement in techniques that identify, prevent, and treat any health threats that may occur during space missions. The ultimate goal of the ExMC Element is to develop and demonstrate a pathway for medical system integration into vehicle and mission design to mitigate the risk of medical issues. Integral to this effort is inclusion of an evidence-based medical and data handling system appropriate for long-duration, exploration-class missions. This requires a clear Concept of Operations, quantitative risk metrics or other tools to address changing risk throughout a mission, and system scoping and system

  9. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, spectacular events often involve a combination of component failure and human error. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  10. Thermal and Energy Performance of Conditioned Building Due To Insulated Sloped Roof

    Science.gov (United States)

    Irwan, Suhandi Syiful; Ahmed, Azni Zain; Zakaria, Nor Zaini; Ibrahim, Norhati

    2010-07-01

    For low-rise buildings in the equatorial region, the roof is exposed to solar radiation longer than other parts of the envelope. Roofs should be designed to reject heat and moderate the thermal impact; this is determined by the design and construction of the roofing system. The pitch of the roof and the properties of the construction affect the heat gain into the attic and subsequently the indoor temperature of the living spaces underneath. This in turn influences the thermal comfort conditions of naturally ventilated buildings and the cooling load of conditioned buildings. This study investigated the effect of an insulated sloping roof on the thermal energy performance of the building. A whole-building thermal energy computer simulation tool, Integrated Environmental Solutions (IES), was used for the modelling and analyses. A building model with dimensions of 4.0 m × 4.0 m × 3.0 m was designed with an insulated roof and conventional construction for the other parts of the envelope. A 75 mm insulation material with thermal conductivity (k-value) of 0.034 W m⁻¹ K⁻¹ was installed underneath the roof tiles. The building was modelled with roof pitch angles of 0°, 15°, 30°, 45°, and 60° and simulated for the month of August under Malaysian climate conditions. The profiles of attic temperature, indoor temperature, and cooling load were extracted and evaluated, and the optimum roof pitch angle for the best thermal performance and energy saving was identified. The results show that a pitch angle of 0° best mitigates the thermal impact, providing the best thermal conditions with optimum energy savings. The maximum temperature difference between the insulated and non-insulated roof for the attic (Attic A−B) and indoor conditions (Indoor A−B) is +7.8 °C and 0.4 °C, respectively, with average monthly energy savings of 3.9%.

  11. Effect of fatigue on landing performance assessed with the landing error scoring system (less) in patients after ACL reconstruction. A pilot study

    NARCIS (Netherlands)

    Gokeler, A; Eppinga, P; Dijkstra, P U; Welling, Wouter; Padua, D A; Otten, E.; Benjaminse, A

    BACKGROUND: Fatigue has been shown to affect performance of hop tests in patients after anterior cruciate ligament reconstruction (ACLR) compared to uninjured controls (CTRL). This may render the hop test less sensitive in detecting landing errors. The primary purpose of this study was to

  12. Bit-error-rate performance analysis of self-heterodyne detected radio-over-fiber links using phase and intensity modulation

    DEFF Research Database (Denmark)

    Yin, Xiaoli; Yu, Xianbin; Tafur Monroy, Idelfonso

    2010-01-01

    We theoretically and experimentally investigate the performance of two self-heterodyne detected radio-over-fiber (RoF) links employing phase modulation (PM) and quadrature biased intensity modulation (IM), in terms of bit-error-rate (BER) and optical signal-to-noise-ratio (OSNR). In both links, self...

  13. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  14. Asymptotic Performance Analysis of Two-Way Relaying FSO Networks with Nonzero Boresight Pointing Errors Over Double-Generalized Gamma Fading Channels

    KAUST Repository

    Yang, Liang; Alouini, Mohamed-Slim; Ansari, Imran Shafique

    2018-01-01

    In this correspondence, an asymptotic performance analysis for two-way relaying free-space optical (FSO) communication systems with nonzero boresight pointing errors over double-generalized gamma fading channels is presented. Assuming amplify-and-forward (AF) relaying, two nodes having the FSO ability can communicate with each other through the optical links. With this setup, an approximate cumulative distribution function (CDF) expression for the overall signal-to-noise ratio (SNR) is presented. With this statistical distribution, we derive the asymptotic analytical results for the outage probability and average bit error rate. Furthermore, we provide the asymptotic average capacity analysis for high SNR by using the moments-based method.

  15. ATM QoS Experiments Using TCP Applications: Performance of TCP/IP Over ATM in a Variety of Errored Links

    Science.gov (United States)

    Frantz, Brian D.; Ivancic, William D.

    2001-01-01

    Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a Satellite Link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advance TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.

  16. Asymptotic Performance Analysis of Two-Way Relaying FSO Networks with Nonzero Boresight Pointing Errors Over Double-Generalized Gamma Fading Channels

    KAUST Repository

    Yang, Liang

    2018-05-07

    In this correspondence, an asymptotic performance analysis for two-way relaying free-space optical (FSO) communication systems with nonzero boresight pointing errors over double-generalized gamma fading channels is presented. Assuming amplify-and-forward (AF) relaying, two nodes having the FSO ability can communicate with each other through the optical links. With this setup, an approximate cumulative distribution function (CDF) expression for the overall signal-to-noise ratio (SNR) is presented. With this statistical distribution, we derive the asymptotic analytical results for the outage probability and average bit error rate. Furthermore, we provide the asymptotic average capacity analysis for high SNR by using the moments-based method.

  17. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance]

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
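
    Since the operative relation is simply that mean-square quantities add, a minimal numerical sketch (with invented jitter values, not data from the paper) makes the dc/ac split concrete:

    ```python
    import numpy as np

    # Hypothetical line-of-sight (LOS) motion record in microradians:
    # a constant (dc) offset plus zero-mean random (ac) jitter.
    rng = np.random.default_rng(0)
    los = 2.0 + 0.5 * rng.standard_normal(10_000)

    dc = los.mean()                       # constant pointing error: distorts the image
    ac_rms = los.std()                    # time-dependent jitter: smears the image
    total_rms = np.sqrt(np.mean(los**2))

    # The mean-square components add: total_rms**2 == dc**2 + ac_rms**2.
    assert np.isclose(total_rms**2, dc**2 + ac_rms**2)
    print(f"dc = {dc:.3f}, ac rms = {ac_rms:.3f}, total rms = {total_rms:.3f} urad")
    ```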

  18. Error and Performance Analysis of MEMS-based Inertial Sensors with a Low-cost GPS Receiver

    Directory of Open Access Journals (Sweden)

    Yang Gao

    2008-03-01

    Full Text Available Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), have been widely utilized and their applications are becoming popular, not only in military or commercial applications, but also in everyday life. Although GPS measurements are the essential information for currently developed land vehicle navigation systems (LVNS), GPS signals are often unavailable or unreliable due to signal blockages under certain environments such as urban canyons. This situation must be compensated for in order to provide continuous navigation solutions. To overcome the problems of GPS unavailability and unreliability, and to be cost- and size-effective as well, Micro Electro Mechanical Systems (MEMS) based inertial sensor technology has been pushing the development of low-cost integrated navigation systems for land vehicle navigation and guidance applications. This paper analyzes the characteristics of MEMS based inertial sensors and the performance of an integrated system prototype of MEMS based inertial sensors, a low-cost GPS receiver and a digital compass. The influence of the stochastic variation of the sensors is assessed and modeled by two different methods, namely Gauss-Markov (GM) and AutoRegressive (AR) models, with GPS signal blockage of different lengths. Numerical results from kinematic testing have been used to assess the performance of different modeling schemes.
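
    As a rough illustration of the first modeling scheme, a first-order Gauss-Markov process takes only a few lines to simulate; the correlation time and noise level below are placeholder values, not parameters identified from the paper's sensors:

    ```python
    import numpy as np

    def gauss_markov(n, dt, tau, sigma, seed=0):
        """Simulate a first-order Gauss-Markov process, a common model for
        the slowly drifting bias of low-cost MEMS inertial sensors."""
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)              # correlation over one time step
        q = sigma**2 * (1.0 - phi**2)        # driving-noise variance
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
        return x

    # Hypothetical gyro bias: 100 s correlation time, 0.01 deg/s steady-state sigma.
    bias = gauss_markov(n=20_000, dt=0.01, tau=100.0, sigma=0.01)
    ```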

  19. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV
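
    As a small worked example of applying such a correction (the measured current is hypothetical; the -0.83 percent figure is the one quoted above):

    ```python
    # The cup reads low by 0.83%, so the true current is recovered by
    # dividing the measured value by (1 - 0.0083).
    i_measured = 1.000e-9                 # amperes (hypothetical reading)
    i_true = i_measured / (1.0 - 0.0083)  # ~1.0084e-9 A
    ```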

  20. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  1. Selection of the important performance influencing factors for the assessment of human error under accident management situations in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, J. H.; Jung, W. J.

    1999-01-01

    This paper introduces the process and final results of the selection of important Performance Influencing Factors (PIFs) under emergency operation and accident management situations in nuclear power plants, for use in the assessment of human errors. We collected two types of PIF taxonomies: the full-set PIF lists developed mainly for human error analysis, and the PIFs used for human reliability analysis (HRA) in probabilistic safety assessment (PSA). Five PIF taxonomies from the full-set PIF lists and 10 PIF taxonomies from HRA methodologies (e.g., CREAM, SLIM, INTENT) were collected in this research. By reviewing and analyzing the PIFs selected for HRA methodologies, criteria could be established for the selection of appropriate PIFs under emergency operation and accident management situations. Based on these selection criteria, a new PIF taxonomy was proposed for the assessment of human error under emergency operation and accident management situations in nuclear power plants

  2. Performance analysis of relay-assisted all-optical FSO networks over strong atmospheric turbulence channels with pointing errors

    KAUST Repository

    Yang, Liang

    2014-12-01

    In this study, we consider a relay-assisted free-space optical communication scheme over strong atmospheric turbulence channels with misalignment-induced pointing errors. The links from the source to the destination are assumed to be all-optical links. Assuming a variable gain relay with amplify-and-forward protocol, the electrical signal at the source is forwarded to the destination with the help of this relay through all-optical links. More specifically, we first present a cumulative distribution function (CDF) analysis for the end-to-end signal-to-noise ratio. Based on this CDF, the outage probability, bit-error rate, and average capacity of our proposed system are derived. Results show that the system diversity order is related to the minimum value of the channel parameters.
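
    The paper derives the CDF in closed form; as an independent sanity check, the outage probability can also be estimated by Monte Carlo. The sketch below assumes, purely for illustration, unit-mean log-normal fading on each hop and the standard variable-gain AF end-to-end SNR expression, not the paper's exact strong-turbulence model with pointing errors:

    ```python
    import numpy as np

    # Outage probability P_out = Pr{SNR_e2e < threshold} by Monte Carlo.
    rng = np.random.default_rng(1)
    n = 200_000
    snr_bar = 10 ** (20.0 / 10)                      # 20 dB average SNR per hop
    # sigma=0.5 with mean=-sigma**2/2 gives unit-mean log-normal fading power.
    h1 = rng.lognormal(mean=-0.125, sigma=0.5, size=n)
    h2 = rng.lognormal(mean=-0.125, sigma=0.5, size=n)
    snr1, snr2 = snr_bar * h1, snr_bar * h2

    # Variable-gain amplify-and-forward end-to-end SNR (harmonic-type combination).
    snr_e2e = snr1 * snr2 / (snr1 + snr2 + 1.0)

    gamma_th = 10 ** (5.0 / 10)                      # 5 dB outage threshold
    p_out = np.mean(snr_e2e < gamma_th)              # empirical CDF at gamma_th
    print(f"P_out ~ {p_out:.4f}")
    ```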

  3. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors are a major contributor to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but these models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and concludes with suggestions for further work. (orig.)

  4. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance]

    Science.gov (United States)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic, such as from muscle spindles and joint receptors; and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. Central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increased both the simple and choice reaction times but not the error correction time.

  5. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    Science.gov (United States)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of a decode-and-forward dual-hop mixed radio frequency / free-space optical system in an urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the average bit error rate (ABER) results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the ABER in the RF link is derived with the help of the hypergeometric function, and that in the FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are obtained on the basis of the computed ABER results of the RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The results show that with ZBPE and NBPE considered, the FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in an urban area. However, aperture averaging can bring significant ABER improvement to this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.
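
    For decode-and-forward relaying with BPSK, the step that combines the per-hop error rates into an end-to-end ABER has a simple closed form: the bit arrives in error when exactly one of the two hops flips it. The sketch below uses plain AWGN per-hop BERs with hypothetical SNRs, whereas the paper's per-hop ABERs come from the Nakagami-m and exponentiated Weibull fading analyses:

    ```python
    import numpy as np
    from scipy.special import erfc

    def ber_bpsk(snr):
        """BPSK bit error rate over AWGN: Q(sqrt(2*snr))."""
        return 0.5 * erfc(np.sqrt(snr))

    snr_rf = 10 ** (12 / 10)      # hypothetical RF-hop SNR (12 dB)
    snr_fso = 10 ** (9 / 10)      # hypothetical FSO-hop SNR (9 dB)
    p_rf, p_fso = ber_bpsk(snr_rf), ber_bpsk(snr_fso)

    # End-to-end error if exactly one hop decodes the bit incorrectly.
    p_e2e = p_rf * (1 - p_fso) + p_fso * (1 - p_rf)
    ```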

  6. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation, and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  7. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  8. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
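
    The codec's exact embedding scheme is not reproduced in these abstracts, but the general data-hiding idea can be sketched: stash redundancy bits in the least-significant bits of integer quantized coefficients so the decoder can recover them with no extra bandwidth. Everything below (function names, the LSB choice, the toy payload) is an illustrative assumption:

    ```python
    import numpy as np

    def embed_bits(coeffs: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Hide 0/1 `bits` in the least-significant bits of integer
        quantized coefficients; the visual impact is nearly imperceptible."""
        out = coeffs.copy()
        out[: bits.size] = (out[: bits.size] & ~1) | bits
        return out

    def extract_bits(coeffs: np.ndarray, n: int) -> np.ndarray:
        """Recover the hidden bits at the decoder."""
        return coeffs[:n] & 1

    block = np.array([12, -7, 3, 0, 5, -2, 1, 0])   # toy quantized DCT block
    payload = np.array([1, 0, 1, 1])                # e.g. parity bits for another block
    stego = embed_bits(block, payload)
    assert np.array_equal(extract_bits(stego, 4), payload)
    ```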

  9. Elucidating the Performance Limitations of Lithium-ion Batteries due to Species and Charge Transport through Five Characteristic Parameters

    Science.gov (United States)

    Jiang, Fangming; Peng, Peng

    2016-01-01

    Underutilization due to performance limitations imposed by species and charge transport is one of the key issues that persist with various lithium-ion batteries. To elucidate the relevant mechanisms, two groups of characteristic parameters were proposed. The first group contains three characteristic time parameters, namely: (1) te, which characterizes the Li-ion transport rate in the electrolyte phase; (2) ts, characterizing the lithium diffusion rate in the solid active materials; and (3) tc, describing the local Li-ion depletion rate in the electrolyte phase at the electrolyte/electrode interface due to electrochemical reactions. The second group contains two electric resistance parameters, Re and Rs, which represent, respectively, the equivalent ionic transport resistance and the effective electronic transport resistance in the electrode. Electrochemical modeling and simulations of the discharge process of LiCoO2 cells reveal that: (1) if te, ts and tc are of the same order of magnitude, the species transport may not cause any performance limitations to the battery; and (2) the underlying mechanisms of performance limitations due to thick electrodes, high-rate operation, and large-sized active material particles, as well as the effects of charge transport, are revealed. The findings may be used as quantitative guidelines in the development and design of more advanced Li-ion batteries. PMID:27599870
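
    A back-of-the-envelope sketch of the first two time parameters, assuming the usual diffusion-time form t = L^2/D (the paper's exact definitions and property values may differ; all numbers below are illustrative):

    ```python
    # Illustrative property values for a LiCoO2-type cell (assumptions,
    # not the paper's fitted parameters).
    L_e = 100e-6    # electrode thickness, m
    D_e = 2.5e-10   # effective Li+ diffusivity in the electrolyte, m^2/s
    r_s = 5e-6      # active-material particle radius, m
    D_s = 1e-14     # Li diffusivity in the solid phase, m^2/s

    t_e = L_e**2 / D_e   # electrolyte-phase transport time scale, s
    t_s = r_s**2 / D_s   # solid-phase diffusion time scale, s

    # Comparable time scales suggest transport is not limiting; a large
    # ratio flags the slower process as the bottleneck.
    print(f"t_e = {t_e:.0f} s, t_s = {t_s:.0f} s, t_s/t_e = {t_s / t_e:.1f}")
    ```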

  10. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  11. Effect of dewatering on seismic performance of multi-anchor wall due to high ground water level

    Science.gov (United States)

    Kobayashi, Makoto; Miura, Kinya; Konami, Takeharu; Hayashi, Taketo; Sato, Hiroki

    2017-10-01

    Previous research reported that ground water in the backfill of reinforced soil walls causes them to deteriorate. According to damage investigations of the 2011 Great East Japan Earthquake, reinforced soil structures with high ground water levels were deformed remarkably by the seismic waves; some of them were classified as being in the ultimate limit state or the restorability limit state. However, more than 90% of the reinforced soil structures that suffered this earthquake were classified as being in the no-damage condition. Therefore, the seismic behavior of multi-anchor walls under seepage flow should be clarified in order to adopt performance-based design for such reinforced soil structures. In this study, a series of centrifugal shaking table tests was conducted to investigate the seismic behavior of a multi-anchor wall with a high ground water level. Reinforced drainage pipes were installed into the backfill in order to verify the dewatering effect and the additional reinforcement. Furthermore, to isolate the dewatering effect, model tests were carried out with several ground water tables modeling the case with reinforced drainage pipes installed. The test results show a unique behavior of the reinforced region, which moved integrally. This implies that the reinforced region behaved as if it were one mass, and this behavior increases the seismic performance of the structure. Thus, the effectiveness of dewatering was clearly observed, because it decreases the inertial force during an earthquake.

  12. Human errors and work performance in a nuclear power plant control room: associations with work-related factors and behavioral coping

    International Nuclear Information System (INIS)

    Kecklund, Lena Jacobsson; Svenson, Ola

    1997-01-01

    The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance, as well as self-reported errors, in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, a decreased aspiration level for work performance quality, and increased delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. The decreased level of satisfaction with the work result during outage is a fact despite the lowering of the aspiration level for work performance quality during outage. In order to decrease the relative frequencies of minor errors, special attention should be given to reducing work demands, such as time pressure and memory demands. In order to decrease misinterpretation errors, special attention should be given to organizational factors such as planning and shift turnovers, in addition to training. In summary, the outage period seems to be a significantly more vulnerable window in the management of a nuclear power plant than the normal power production state. Thus, an increased focus on the outage period and human factors issues, addressing the synergetic effects of work demands, organizational factors and coping resources, is an important area for improvement of

  13. A study on the flow field and local heat transfer performance due to geometric scaling of centrifugal fans

    International Nuclear Information System (INIS)

    Stafford, Jason; Walsh, Ed; Egan, Vanessa

    2011-01-01

    Highlights: ► Velocity field and local heat transfer trends of centrifugal fans. ► Time-averaged vortices are generated by flow separation. ► Local vortex and impingement regions are evident on surface heat transfer maps. ► Miniature centrifugal fans should be designed with an aspect ratio below 0.3. ► Theory under-predicts heat transfer due to complex, unsteady outlet flow. - Abstract: Scaled versions of fan designs are often chosen to address thermal management issues in space constrained applications. Using velocity field and local heat transfer measurement techniques, the thermal performance characteristics of a range of geometrically scaled centrifugal fan designs have been investigated. Complex fluid flow structures and surface heat transfer trends due to centrifugal fans were found to be common over a wide range of fan aspect ratios (blade height to fan diameter). The limiting aspect ratio for heat transfer enhancement was 0.3, as larger aspect ratios were shown to result in a reduction in overall thermal performance. Over the range of fans examined, the low profile centrifugal designs produced significant enhancement in thermal performance when compared to that predicted using classical laminar flow theory. The limiting non-dimensional distance from the fan, where this enhancement is no longer apparent, has also been determined. Using the fundamental information inferred from local velocity field and heat transfer measurements, selection criteria can be determined for both low and high power practical applications where space restrictions exist.

  14. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict.

  15. Internal Error Propagation in Explicit Runge-Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge-Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
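
    A toy demonstration of the phenomenon (not the paper's analysis) is easy to set up: inject a tiny perturbation into every stage of classical RK4 and watch it propagate. The test problem and noise level are illustrative choices:

    ```python
    import numpy as np

    def rk4_step(f, t, y, h, stage_noise=0.0, rng=None):
        """One classical RK4 step, with an optional perturbation added to
        each stage to mimic roundoff or algebraic-solver errors."""
        def perturb(k):
            return k + (rng.normal(0.0, stage_noise) if stage_noise else 0.0)
        k1 = perturb(f(t, y))
        k2 = perturb(f(t + h / 2, y + h / 2 * k1))
        k3 = perturb(f(t + h / 2, y + h / 2 * k2))
        k4 = perturb(f(t + h, y + h * k3))
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    rng = np.random.default_rng(0)
    f = lambda t, y: -y                    # simple non-stiff test problem
    h, y_clean, y_noisy = 0.1, 1.0, 1.0
    for i in range(100):
        y_clean = rk4_step(f, i * h, y_clean, h)
        y_noisy = rk4_step(f, i * h, y_noisy, h, stage_noise=1e-12, rng=rng)
    print(f"effect of internal errors: {abs(y_noisy - y_clean):.3e}")
    ```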

  16. An fMRI study on variation of visuospatial cognitive performance of young male due to highly concentrated oxygen administration

    Science.gov (United States)

    Chung, Soon Cheol; Kim, Ik Hyeon; Tack, Gye Rae; Sohn, Jin Hun

    2004-04-01

    This study investigated the effects of 30% oxygen administration on visuospatial cognitive performance using fMRI. Eight college students (right-handed, average age 23.5) were selected as subjects for this study. Oxygen supply equipment which delivers 21% and 30% oxygen at a constant rate of 8 L/min was developed for this study. To measure the performance of visuospatial cognition, two questionnaires of similar difficulty containing 20 questions each were also developed. The experiment was designed as two runs: one run for the visuospatial cognition test with normal air (21% oxygen) and one for the test with highly concentrated air (30% oxygen). Each run consists of 4 blocks, and each block has 8 control problems and 5 visuospatial problems. Functional brain images were taken with a 3T MRI using the single-shot EPI method. Activity of the neural network involved in the visuospatial cognition test was identified using a subtraction procedure, and activation areas during the test were extracted using a double subtraction procedure. Activity was observed in the occipital, parietal, and frontal lobes when performing the visuospatial cognition test under both 21% and 30% oxygen administration. With 30% oxygen administration, however, there was additional activity at the left precuneus, left cuneus, right postcentral gyrus, bilateral middle frontal gyri, right inferior frontal gyrus, left superior frontal gyrus, bilateral uvula, bilateral pyramis, and nodule compared with 21% oxygen administration. In the visuospatial cognition test, the accuracy rate increased with 30% oxygen administration. Thus it can be concluded that highly concentrated oxygen administration has positive effects on visuospatial cognitive performance.

  17. A study on the flow field and local heat transfer performance due to geometric scaling of centrifugal fans

    Energy Technology Data Exchange (ETDEWEB)

    Stafford, Jason, E-mail: jason.stafford@ul.ie [Stokes Institute, Mechanical, Aeronautical and Biomedical Engineering Department, University of Limerick, Limerick (Ireland); Walsh, Ed; Egan, Vanessa [Stokes Institute, Mechanical, Aeronautical and Biomedical Engineering Department, University of Limerick, Limerick (Ireland)

    2011-12-15

    Highlights: ► Velocity field and local heat transfer trends of centrifugal fans. ► Time-averaged vortices are generated by flow separation. ► Local vortex and impingement regions are evident on surface heat transfer maps. ► Miniature centrifugal fans should be designed with an aspect ratio below 0.3. ► Theory under-predicts heat transfer due to complex, unsteady outlet flow. - Abstract: Scaled versions of fan designs are often chosen to address thermal management issues in space constrained applications. Using velocity field and local heat transfer measurement techniques, the thermal performance characteristics of a range of geometrically scaled centrifugal fan designs have been investigated. Complex fluid flow structures and surface heat transfer trends due to centrifugal fans were found to be common over a wide range of fan aspect ratios (blade height to fan diameter). The limiting aspect ratio for heat transfer enhancement was 0.3, as larger aspect ratios were shown to result in a reduction in overall thermal performance. Over the range of fans examined, the low profile centrifugal designs produced significant enhancement in thermal performance when compared to that predicted using classical laminar flow theory. The limiting non-dimensional distance from the fan, where this enhancement is no longer apparent, has also been determined. Using the fundamental information inferred from local velocity field and heat transfer measurements, selection criteria can be determined for both low and high power practical applications where space restrictions exist.

  18. Modeling the Deterioration of Engine and Low Pressure Compressor Performance During a Roll Back Event Due to Ice Accretion

    Science.gov (United States)

    Veres, Joseph P.; Jorgenson, Philip, C. E.; Jones, Scott M.

    2014-01-01

    The main focus of this study is to apply a computational tool for the flow analysis of the engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) of NASA Glenn Research Center. A data point was selected for analysis during which the engine experienced a full roll back event due to the ice accretion on the blades and flow path of the low pressure compressor. The computational tool consists of the Numerical Propulsion System Simulation (NPSS) engine system thermodynamic cycle code, and an Euler-based compressor flow analysis code, that has an ice particle melt estimation code with the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Decreasing the performance characteristics of the low pressure compressor (LPC) within the NPSS cycle analysis resulted in matching the overall engine performance parameters measured during testing at data points in short time intervals through the progression of the roll back event. Detailed analysis of the fan-core and LPC with the compressor flow analysis code simulated the effects of ice accretion by increasing the aerodynamic blockage and pressure losses through the low pressure compressor until achieving a match with the NPSS cycle analysis results, at each scan. With the additional blockages and losses in the LPC, the compressor flow analysis code results were able to numerically reproduce the performance that was determined by the NPSS cycle analysis, which was in agreement with the PSL engine test data. The compressor flow analysis indicated that the blockage due to ice accretion in the LPC exit guide vane stators caused the exit guide vane (EGV) to be nearly choked, significantly reducing the air flow rate into the core. This caused the LPC to eventually be in stall due to increasing levels of diffusion in the rotors and high incidence angles in the inlet guide vane (IGV) and EGV stators. The flow analysis indicating

  19. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the achievable data rates of two well-known algorithms, using simulated and real measured data, is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm can also be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.
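
    As a sketch of the virtual-SINR idea for one BS in such a two-BS, two-user setup (single-antenna users; all names and values below are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_tx, noise_var = 4, 0.1
    h_own = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)    # served user
    h_other = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)  # other BS's user

    # The virtual-SINR beamformer trades the served user's signal power
    # against the leakage caused at the other user:
    #     w ~ (noise_var*I + h_other h_other^H)^(-1) h_own
    A = noise_var * np.eye(n_tx) + np.outer(h_other, h_other.conj())
    w = np.linalg.solve(A, h_own)
    w /= np.linalg.norm(w)

    virtual_sinr = (abs(w.conj() @ h_own) ** 2
                    / (noise_var + abs(w.conj() @ h_other) ** 2))
    ```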

  20. Evaluation of the tumor registration error in biopsy procedures performed under real-time PET/CT guidance.

    Science.gov (United States)

    Fanchon, Louise M; Apte, Adytia; Schmidtlein, C Ross; Yorke, Ellen; Hu, Yu-Chi; Dogan, Snjezana; Hatt, Mathieu; Visvikis, Dimitris; Humm, John L; Solomon, Stephen B; Kirov, Assen S

    2017-10-01

    The purpose of this study is to quantify tumor displacement during real-time PET/CT guided biopsy and to investigate correlations between tumor displacement and false-negative results. 19 patients who underwent real-time ¹⁸F-FDG PET-guided biopsy and were found positive for malignancy were included in this study under IRB approval. PET/CT images were acquired for all patients within minutes prior to biopsy to visualize the FDG-avid region and plan the needle insertion. The biopsy needle was inserted and a post-insertion CT scan was acquired. The two CT scans acquired before and after needle insertion were registered using a deformable image registration (DIR) algorithm. The DIR deformation vector field (DVF) was used to calculate the mean displacement between the pre-insertion and post-insertion CT scans for a region around the tip of the biopsy needle. For 12 patients, one biopsy core from each was tracked during histopathological testing to investigate correlations of the mean displacement between the two CT scans and false-negative or true-positive biopsy results. For 11 patients, two PET scans were acquired: one at the beginning of the procedure, pre-needle insertion, and an additional one with the needle in place. The pre-insertion PET scan was corrected for intraprocedural motion by applying the DVF. The corrected PET was compared with the post-needle insertion PET to validate the correction method. The mean displacement of tissue around the needle between the pre-biopsy CT and the post-needle insertion CT was 5.1 mm (min = 1.1 mm, max = 10.9 mm, SD = 3.0 mm). For mean displacements larger than 7.2 mm, the biopsy cores gave false-negative results. Correcting the pre-biopsy PET using the DVF improved the PET/CT registration in 8 of 11 cases. The DVF obtained from DIR of the CT scans can be used for evaluation and correction of the error in needle placement with respect to the FDG-avid area. Misregistration between the pre-biopsy PET and the CT acquired with the
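
    A minimal sketch of the mean-displacement computation from a DVF, assuming a (nz, ny, nx, 3) vector field in millimeters and a cubic region of interest around the needle tip (the array layout and ROI shape are assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def mean_displacement(dvf: np.ndarray, tip_idx: tuple, r: int) -> float:
        """Average displacement magnitude |DVF| (mm) in a cube of half-width
        r voxels centred on the needle tip."""
        z, y, x = tip_idx
        roi = dvf[z - r: z + r + 1, y - r: y + r + 1, x - r: x + r + 1]
        return float(np.linalg.norm(roi, axis=-1).mean())

    # Toy field: 1 mm shift everywhere, with a 5 mm pocket near the tip.
    dvf = np.zeros((64, 64, 64, 3))
    dvf[..., 0] = 1.0
    dvf[28:36, 28:36, 28:36, 0] = 5.0
    print(f"mean displacement ~ {mean_displacement(dvf, (32, 32, 32), 3):.1f} mm")
    ```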

  1. A Unified Performance Analysis of Free-Space Optical Links over Gamma-Gamma Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Yilmaz, Ferkan

    2013-01-01

    transmission system, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics

  2. From Metalinguistic Instruction to Metalinguistic Knowledge, and from Metalinguistic Knowledge to Performance in Error Correction and Oral Production Tasks

    Science.gov (United States)

    Serrano, Raquel

    2011-01-01

    The purpose of this study is to analyse the effect of metalinguistic instruction on students' metalinguistic knowledge on the one hand, and on students' performance in metalinguistic and oral production tasks on the other hand. Two groups of primary school students learning English as a foreign language were chosen. One of them (Rule group) was…

  3. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Full Text Available Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as simple averaging, least squares regression, or those based on the variance-covariance of the forecasts, may perform very poorly, because outliers tend to occur and give these methods unstable weights, leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially designed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student's t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed. They show that the resulting combined forecasts yield near-optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
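
    The paper's two estimators are not reproduced here; the toy sketch below only illustrates the underlying concern, weighting candidate forecasts by an outlier-resistant error measure (inverse mean absolute error) rather than squared error, on synthetic heavy-tailed errors:

    ```python
    import numpy as np

    def robust_combination_weights(errors: np.ndarray) -> np.ndarray:
        """Toy robust weighting by inverse MAE (a stand-in for the paper's
        t-distribution-based estimators): heavy-tailed outliers inflate MAE
        far less than they inflate squared-error-based weights."""
        mae = np.mean(np.abs(errors), axis=0)   # one MAE per candidate forecaster
        w = 1.0 / mae
        return w / w.sum()

    # errors[t, k] = error of candidate k at time t; Student-t draws give
    # the heavy tails (synthetic data; df and scales are arbitrary).
    rng = np.random.default_rng(0)
    errors = rng.standard_t(df=2.5, size=(200, 3)) * np.array([1.0, 1.5, 3.0])
    w = robust_combination_weights(errors)
    combined_errors = errors @ w            # error of the combined forecast
    ```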

  4. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna; Chelli, Ali; Alouini, Mohamed-Slim

    2014-01-01

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.

  5. On the Performance Analysis of Hybrid ARQ With Incremental Redundancy and With Code Combining Over Free-Space Optical Channels With Pointing Errors

    KAUST Repository

    Zedini, Emna

    2014-07-16

    In this paper, we investigate the performance of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) and with code combining (CC) from an information-theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce new closed-form expressions for the probability density function, the cumulative distribution function, the moment generating function, and the moments of an FSO link modeled by the Gamma fading channel subject to pointing errors and using intensity modulation with direct detection technique at the receiver. Based on these formulas, we derive exact results for the average bit error rate and the capacity in terms of Meijer's G functions. Moreover, we present asymptotic expressions by utilizing the Meijer's G function expansion and using the moments method, too, for the ergodic capacity approximations. Then, we provide novel analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. Besides, we offer asymptotic expressions for these results in terms of simple elementary functions. Additionally, we compare the performance of HARQ with IR and HARQ with CC. Our analysis demonstrates that HARQ with IR outperforms HARQ with CC.
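
    The information-theoretic contrast between the two schemes fits in a few lines: IR accumulates mutual information across rounds, while CC (Chase-style combining) accumulates SNR. The sketch below uses arbitrary fading draws, not the paper's Gamma-with-pointing-errors channel:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    g = rng.gamma(shape=2.0, scale=2.0, size=4)   # hypothetical per-round SNRs

    i_ir = np.sum(np.log2(1.0 + g))               # accumulated bits/channel use, IR
    i_cc = np.log2(1.0 + np.sum(g))               # accumulated bits/channel use, CC

    # prod(1 + g_m) >= 1 + sum(g_m) for g_m >= 0, so IR never does worse.
    assert i_ir >= i_cc
    print(f"IR: {i_ir:.2f} b/cu, CC: {i_cc:.2f} b/cu")
    ```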

  6. Human errors and work performance in a nuclear power plant control room: associations with work-related factors and behavioral coping

    International Nuclear Information System (INIS)

    Kecklund, L.J.; Svenson, O.

    1997-01-01

    The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance, as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, decreased aspiration level for work performance quality, and increased use of delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. (Author)

  7. Anemia and performance status as prognostic markers in acute hypercapnic respiratory failure due to chronic obstructive pulmonary disease

    Directory of Open Access Journals (Sweden)

    Haja Mydin H

    2013-03-01

    Full Text Available Helmy Haja Mydin, Stephen Murphy, Howell Clague, Kishore Sridharan, Ian K Taylor, Department of Respiratory Medicine, Sunderland Royal Infirmary, Sunderland, United Kingdom. Background: In patients with acute hypercapnic respiratory failure (AHRF) during exacerbations of COPD, mortality can be high despite noninvasive ventilation (NIV). For some, AHRF is terminal and NIV is inappropriate. However, there is no definitive method of identifying patients who are unlikely to survive. The aim of this study was to identify factors associated with inpatient mortality from AHRF with respiratory acidosis due to COPD. Methods: COPD patients presenting with AHRF and who were treated with NIV were studied prospectively. The forced expiratory volume in 1 second (FEV1), World Health Organization performance status (WHO-PS), clinical observations, a composite physiological score (Early Warning Score), routine hematology and biochemistry, and arterial blood gases prior to commencing NIV were recorded. Results: In total, 65 patients were included in the study, 29 males and 36 females, with a mean age of 71 ± 10.5 years. Inpatient mortality in the group was 33.8%. Mortality at 30 days and 12 months after admission was 38.5% and 58.5%, respectively. On univariate analysis, the variables associated with inpatient death were: WHO-PS ≥ 3, long-term oxygen therapy, anemia, diastolic blood pressure < 70 mmHg, Early Warning Score ≥ 3, severe acidosis (pH < 7.20), and serum albumin < 35 g/L. On multivariate analysis, only anemia and WHO-PS ≥ 3 were significant. The presence of both predicted 68% of inpatient deaths, with a specificity of 98%. Conclusion: WHO-PS ≥ 3 and anemia are prognostic factors in AHRF with respiratory acidosis due to COPD. A combination of the two provides a simple method of identifying patients unlikely to benefit from NIV. Keywords: acute exacerbations of COPD, noninvasive ventilation, emphysema, prognostic markers

  8. The sensitivity of bit error rate (BER) performance in multi-carrier (OFDM) and single-carrier

    Science.gov (United States)

    Albdran, Saleh; Alshammari, Ahmed; Matin, Mohammad

    2012-10-01

    Recently, single-carrier and multi-carrier transmissions have attracted considerable attention in industrial systems. Theoretically, OFDM as a multi-carrier scheme has advantages over the single carrier, especially for high data rates. In this paper we show which of the two techniques outperforms the other. We study and compare the BER performance of both techniques for a given channel. The BER is measured and studied as a function of the signal-to-noise ratio (SNR). Also, the peak-to-average power ratio (PAPR) is examined and presented as a drawback of using OFDM. To make a reasonable comparison between the two techniques, we use additive white Gaussian noise (AWGN) as the communication channel.
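
    The PAPR drawback is easy to demonstrate numerically. The sketch below compares one QPSK-loaded OFDM symbol against the same constant-modulus QPSK stream sent on a single carrier; the subcarrier count is an arbitrary choice:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_sc = 256
    qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)

    ofdm = np.fft.ifft(qpsk) * np.sqrt(n_sc)      # time-domain OFDM symbol

    def papr_db(x):
        return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

    # Single-carrier QPSK is constant-modulus (0 dB); OFDM peaks several dB higher.
    print(f"OFDM PAPR ~ {papr_db(ofdm):.1f} dB, single-carrier PAPR = {papr_db(qpsk):.1f} dB")
    ```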

  9. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  10. Frequency shift due to blackbody radiation in a cesium atomic fountain and improvement of the clock performances

    International Nuclear Information System (INIS)

    Zhang, S.

    2004-07-01

    FO1 was the first caesium fountain primary frequency standard in the world. The most recent evaluation in 2002, before improvement, reached an accuracy of 1×10⁻¹⁵ when operated with optical molasses. Working as an extremely precise and stable instrument, FO1 has contributed to fundamental physics and technical measurements: a frequency comparison between Cs and Rb fountains over an interval of 5 years sets an upper limit on a possible variation of the fine-structure constant, |α̇/α|, at the 10⁻¹⁵ per year level, a resolution about 5 times better than the previous test in our laboratory. The projected accuracy of the space clock PHARAO is 1×10⁻¹⁶; we confirmed its Ramsey cavity performance by testing the phase difference between the two interaction zones in FO1. The measured temperature-dependent frequency shift of the Cs clock induced by the blackbody radiation field is given as ν(T) = 154(6)×10⁻⁶ × (T/300)⁴ [1 + ε(T/300)²] Hz, with the theoretical value ε = 0.014. The obtained accuracy represents a 3-fold improvement over the previous measurement by the PTB group. Some improvements have been carried out on FO1. The new FO1 version works directly with optical molasses loaded by a laser-slowed atomic beam. The application of the adiabatic passage method to perform the state selection allows us to determine the atom-number-dependent frequency shifts due to the cold collision and cavity pulling effects at a level of 10⁻¹⁶. Recently, the obtained frequency stability is 2.8×10⁻¹⁴ τ^(-1/2) for about 4×10⁶ detected atoms. The accuracy is currently under evaluation; the expected value is a few times 10⁻¹⁶. (author)
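
    The quoted shift formula is straightforward to evaluate; the sketch below simply plugs in the coefficient and ε from the abstract (the example temperature is arbitrary, and the sign convention is left aside):

    ```python
    def blackbody_shift_hz(T: float, eps: float = 0.014) -> float:
        """Magnitude of the blackbody radiation shift quoted above:
        nu(T) = 154e-6 * (T/300)**4 * (1 + eps*(T/300)**2) Hz."""
        x = T / 300.0
        return 154e-6 * x**4 * (1.0 + eps * x**2)

    print(f"{blackbody_shift_hz(300.0):.3e} Hz at 300 K")   # ~1.562e-04 Hz
    ```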

  11. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  12. Canine total knee replacement performed due to osteoarthritis subsequent to distal femur fracture osteosynthesis: two-year objective outcome.

    Science.gov (United States)

    Eskelinen, E V; Liska, W D; Hyytiäinen, H K; Hielm-Björkman, A

    2012-01-01

    A 27-kg German Shorthaired Pointer was referred for evaluation due to the complaint of left pelvic limb lameness and signs of pain in the left stifle joint. Radiographs revealed signs of a healed supracondylar femoral fracture that had been previously repaired at another hospital with an intramedullary pin and two cross pins. In addition, there were signs of severe osteoarthritis (OA). The OA had been managed medically with administration of carprofen and nutraceuticals for nine months without any improvement. Left total knee replacement (TKR) surgery was performed to alleviate signs of pain. The patient was assessed preoperatively and at six months, one year, and two years after surgery using radiology, force platform analysis of gait, thigh circumference measures, goniometry, and lameness evaluation. Following surgery, the dog resumed normal activity without any signs of pain and a good quality of life at 3.5 months. Force plate analysis found that peak vertical force on the TKR limb was 85.7% of the normal contralateral limb after two years. Total knee replacement was a successful treatment to manage knee OA associated with a healed distal femoral fracture and internal fixation in this dog.

  13. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  14. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance, but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
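
    Because much of the variance the paper describes comes from implementation details of the edit-distance accounting, a concrete reference implementation helps fix ideas. Below is a plain unit-cost Levenshtein distance, one possible weighting among those used in published accuracy studies:

    ```python
    def levenshtein(ref: str, ocr: str) -> int:
        """Minimum number of single-character edits (insert, delete,
        substitute) turning `ref` into `ocr`; note that the count depends
        on the chosen edit weights, which is exactly the paper's point."""
        prev = list(range(len(ocr) + 1))
        for i, rc in enumerate(ref, 1):
            cur = [i]
            for j, oc in enumerate(ocr, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (rc != oc)))   # substitution
            prev = cur
        return prev[-1]

    errors = levenshtein("performance", "perf0rmance")      # -> 1
    accuracy = 1 - errors / len("performance")
    ```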

  15. User Performance Evaluation of Four Blood Glucose Monitoring Systems Applying ISO 15197:2013 Accuracy Criteria and Calculation of Insulin Dosing Errors.

    Science.gov (United States)

    Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia

    2018-04-01

    The international standard ISO 15197:2013 requires a user performance evaluation to assess whether intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek® Performa Connect (A), Contour® plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. The number and percentage of SMBG measurements within ±15 mg/dl and ±15% of the comparison measurements at glucose concentrations below and at or above 100 mg/dl, respectively, were determined for measurements performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Ascensia Diabetes Care Deutschland GmbH.
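
    The accuracy criterion itself is simple to check in code. The sketch below applies the ISO 15197:2013 form of the test (±15 mg/dl below 100 mg/dl, ±15% at or above) to invented readings:

    ```python
    import numpy as np

    def within_iso15197(bg_meter, bg_ref) -> float:
        """Fraction of SMBG readings within ±15 mg/dl of the reference
        below 100 mg/dl, and within ±15% at or above 100 mg/dl."""
        bg_meter = np.asarray(bg_meter, dtype=float)
        bg_ref = np.asarray(bg_ref, dtype=float)
        tol = np.where(bg_ref < 100.0, 15.0, 0.15 * bg_ref)
        return float(np.mean(np.abs(bg_meter - bg_ref) <= tol))

    share = within_iso15197([92, 110, 250, 63], [88, 125, 240, 70])  # -> 1.0
    ```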

  16. Assessment on tracking error performance of Cascade P/PI, NPID and N-Cascade controller for precise positioning of xy table ballscrew drive system

    International Nuclear Information System (INIS)

    Abdullah, L; Jamaludin, Z; Rafan, N A; Jamaludin, J; Chiew, T H

    2013-01-01

    At present, positioning plants in machine tools require a high degree of accuracy and robustness in order to compensate for various disturbance forces. The objective of this paper is to assess the tracking performance of Cascade P/PI, nonlinear PID (NPID) and nonlinear cascade (N-Cascade) controllers in the presence of disturbance forces in the form of cutting forces. Cutting force characteristics at different cutting parameters, such as spindle speed, are analysed using the Fast Fourier Transform. The tracking performance of the nonlinear cascade controller in the presence of these cutting forces is compared with that of the NPID controller and the Cascade P/PI controller. The robustness of these controllers in compensating different cutting characteristics is compared based on the reduction in the amplitudes of cutting force harmonics, using the Fast Fourier Transform. It is found that the N-Cascade controller performs better than both the NPID controller and the Cascade P/PI controller. The average percentage error reduction between the N-Cascade controller and the Cascade P/PI controller is about 65%, whereas the average percentage error reduction between the cascade controller and the NPID controller is about 82%, at a spindle speed of 3000 rpm. The finalized design of the N-Cascade controller could be utilized further for machining applications such as milling. The implementation of the N-Cascade controller in machine tool applications will increase the quality of the end product and productivity in industry by saving machining time. It is suggested that the range of spindle speeds could be made wider to accommodate the needs of high speed machining
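
    The FFT-based assessment of cutting-force harmonics can be sketched as follows; the spindle speed matches the 3000 rpm case above, while the flute count, amplitudes, noise level and sampling rate are invented for illustration:

    ```python
    import numpy as np

    fs, n = 10_000, 8192                    # sampling rate (Hz), samples
    t = np.arange(n) / fs
    f_tooth = 3000 / 60 * 2                 # 3000 rpm, 2-flute cutter -> 100 Hz
    force = (40 * np.sin(2 * np.pi * f_tooth * t)
             + 12 * np.sin(2 * np.pi * 2 * f_tooth * t)
             + np.random.default_rng(0).normal(0.0, 2.0, n))

    spectrum = np.abs(np.fft.rfft(force)) * 2 / n    # amplitude spectrum (N)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    peak_hz = freqs[np.argmax(spectrum[1:]) + 1]     # dominant harmonic, ~100 Hz
    ```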

  17. ASD Is Not DLI: Individuals With Autism and Individuals With Syntactic DLI Show Similar Performance Level in Syntactic Tasks, but Different Error Patterns.

    Science.gov (United States)

    Sukenik, Nufar; Friedmann, Naama

    2018-01-01

    Do individuals with autism have a developmental syntactic impairment, DLI (formerly known as SLI)? In this study we directly compared the performance of 18 individuals with Autism Spectrum Disorder (ASD) aged 9;0-18;0 years with that of 93 individuals with Syntactic-Developmental Language Impairment (SyDLI) aged 8;8-14;6 (and with 166 typically-developing children aged 5;2-18;1). We tested them using three syntactic tests assessing the comprehension and production of syntactic structures that are known to be sensitive to syntactic impairment: elicitation of subject and object relative clauses, reading and paraphrasing of object relatives, and repetition of complex syntactic structures including Wh questions, relative clauses, topicalized sentences, sentences with verb movement, sentences with A-movement, and embedded sentences. The results were consistent across the three tasks: the overall rate of correct performance on the syntactic tasks is similar for the children with ASD and those with SyDLI. However, once we look closer, they are very different. The types of errors of the ASD group differ from those of the SyDLI group: the children with ASD provide various types of pragmatically infelicitous responses that are not evinced in the SyDLI or in the age-equivalent typically-developing groups. The two groups (ASD and SyDLI) also differ in the pattern of performance: the children with SyDLI show a syntactically-principled pattern of impairment, with selective difficulty in specific sentence types (such as sentences derived by movement of the object across the subject), and normal performance on other structures (such as simple sentences). In contrast, the ASD participants showed generalized low performance on the various sentence structures. Syntactic performance was far from consistent within the ASD group. Whereas all ASD participants had errors that can originate in pragmatic/discourse difficulties, seven of them had completely normal syntax in the structures we

  19. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service tracked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unchecked unidosis carts showed a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in carts that had been previously checked. In the unchecked carts, 70.83% of the errors arose when the carts were being set up; the rest were due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied beforehand (0.76%). The errors found in the units corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be checked, and a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate falls to 0.3%.

  20. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. Propagating the uncertainties, Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL), which can be re-written in terms of fractional errors as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
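
    A minimal numerical sketch of this error budget in Python; all of the signal levels and uncertainties below are made-up illustrative values, not measured data.

      import numpy as np

      B, dB = 1.0e4, 1.0e2           # transmitted backlighter signal and its error
      B0, dB0 = 2.5e4, 2.0e2         # unattenuated backlighter signal and its error
      rhoL, drhoL = 1.0e-3, 2.0e-5   # areal density rho*L (g/cm^2) and its error

      T = B / B0                     # transmission (0.4 for these assumed inputs)
      k = -np.log(T) / rhoL          # opacity in cm^2/g
      dk_over_k = (1.0 / abs(np.log(T))) * (dB / B + dB0 / B0) + drhoL / rhoL
      print(f"T = {T:.2f}, k = {k:.0f} cm^2/g, dk/k = {dk_over_k:.1%}")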

  1. Bit Error Rate Performance of a MIMO-CDMA System Employing Parity-Bit-Selected Spreading in Frequency Nonselective Rayleigh Fading

    Directory of Open Access Journals (Sweden)

    Claude D'Amours

    2011-01-01

    Full Text Available We analytically derive the upper bound for the bit error rate (BER) performance of a single-user multiple input multiple output code division multiple access (MIMO-CDMA) system employing parity-bit-selected spreading in slowly varying, flat Rayleigh fading. The analysis is done for spatially uncorrelated links. The analysis presented demonstrates that parity-bit-selected spreading provides an asymptotic gain of 10·log10(Nt) dB over conventional MIMO-CDMA when the receiver has perfect channel estimates. This analytical result concurs with previous works, in which the BER was determined by simulation, and provides insight into why these techniques improve on conventional MIMO-CDMA systems.

  2. Towards an Integrated Workload Control (WLC) Concept : The Performance of Due Date Setting Rules in Job Shops with Contingent Orders

    NARCIS (Netherlands)

    Thuerer, Matthias; Stevenson, Mark; Silva, Cristovao; Land, Martin

    2013-01-01

    Workload control (WLC) is a production planning and control concept developed for make-to-order companies. Its customer enquiry management methodology supports due date setting, while its order release mechanism determines when to start production. For make-to-order companies, due date setting is a

  3. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  4. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2016-01-01

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we

  5. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa; Yang, Hong-Chuan; Alouini, Mohamed-Slim

    2015-01-01

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we

  6. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≈ 15 meV/monomer for the liquid and the

  7. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  8. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
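
    As a generic illustration of such scaling (a standard below-threshold ansatz for topological codes, stated here as an assumption rather than the paper's precise result), the logical error rate in the small-p regime is commonly summarized as

      \[
      p_{\mathrm{L}} \;\approx\; A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lceil d/2 \rceil},
      \]

    where d is the code distance, p the physical error rate, and p_th the threshold. Inverting this relation for a target p_L yields the required distance d, and hence an overhead of roughly 2d² physical qubits for the toric code.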

  9. WE-AB-BRA-04: Evaluation of the Tumor Registration Error in Biopsy Procedures Performed Under Real Time PET/CT Guidance

    International Nuclear Information System (INIS)

    Fanchon, L; Apte, A; Dzyubak, O; Mageras, G; Yorke, E; Solomon, S; Kirov, A; Visvikis, D; Hatt, M

    2015-01-01

    Purpose: PET/CT guidance is used for biopsies of metabolically active lesions that are not well seen on CT alone, or to target the metabolically active tissue in tumor ablations. It has also been shown that PET/CT guided biopsies provide an opportunity to verify the location of the lesion border at the place of needle insertion. However, the error in needle placement with respect to the metabolically active region may be affected by motion between the PET/CT scan performed at the start of the procedure and the CT scan performed with the needle in place, and this error has not been previously quantified. Methods: Specimens from 31 PET/CT guided biopsies were investigated and correlated to the intraoperative PET scan under an IRB-approved, HIPAA-compliant protocol. For 4 of the cases, in which larger motion was suspected, a second PET scan was obtained with the needle in place. The CT and the PET images obtained before and after the needle insertion were used to calculate the displacement of the voxels along the needle path. CTpost was registered to CTpre using a free-form deformable registration and then fused with PETpre. The shifts between the PET image contours (42% of SUVmax) for PETpre and PETpost were obtained at the needle position. Results: For these extreme cases the displacement of the CT voxels along the needle path ranged from 2.9 to 8 mm with a mean of 5 mm. The shift of the PET image segmentation contours (42% of SUVmax) at the needle position ranged from 2.3 to 7 mm between the two scans. Conclusion: Evaluation of the mis-registration between the CT with the needle in place and the pre-biopsy PET can be obtained using deformable registration of the respective CT scans and can be used to indicate the need for a second PET in real time. This work is supported in part by a grant from Biospace Lab, S.A.

  10. Effects on automatic attention due to exposure to pictures of emotional faces while performing Chinese word judgment tasks.

    Science.gov (United States)

    Junhong, Huang; Renlai, Zhou; Senqi, Hu

    2013-01-01

    Two experiments were conducted to investigate the automatic processing of emotional facial expressions while performing low or high demand cognitive tasks under unattended conditions. In Experiment 1, 35 subjects performed low (judging the structure of Chinese words) and high (judging the tone of Chinese words) cognitive load tasks while exposed to unattended pictures of fearful, neutral, or happy faces. The results revealed that the reaction time was slower and the performance accuracy was higher while performing the low cognitive load task than while performing the high cognitive load task. Exposure to fearful faces resulted in significantly longer reaction times and lower accuracy than exposure to neutral faces on the low cognitive load task. In Experiment 2, 26 subjects performed the same word judgment tasks and their brain event-related potentials (ERPs) were measured for a period of 800 ms after the onset of the task stimulus. The amplitudes of the early component of ERP around 176 ms (P2) elicited by unattended fearful faces over frontal-central-parietal recording sites was significantly larger than those elicited by unattended neutral faces while performing the word structure judgment task. Together, the findings of the two experiments indicated that unattended fearful faces captured significantly more attention resources than unattended neutral faces on a low cognitive load task, but not on a high cognitive load task. It was concluded that fearful faces could automatically capture attention if residues of attention resources were available under the unattended condition.

  11. Pre-University Students' Errors in Integration of Rational Functions and Implications for Classroom Teaching

    Science.gov (United States)

    Yee, Ng Kin; Lam, Toh Tin

    2008-01-01

    This paper reports on students' errors in performing integration of rational functions, a topic of calculus in the pre-university mathematics classrooms. Generally the errors could be classified as those due to the students' weak algebraic concepts and their lack of understanding of the concept of integration. With the students' inability to link…
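
    As an illustration of the kind of item involved (a generic example, not one taken from the paper), integrating a rational function typically requires a partial-fraction decomposition:

      \[
      \int \frac{dx}{x^{2}-1}
      = \frac{1}{2}\int \left(\frac{1}{x-1}-\frac{1}{x+1}\right)dx
      = \frac{1}{2}\ln\left|\frac{x-1}{x+1}\right| + C .
      \]

    A typical error rooted in weak algebra is to skip the decomposition and write ln|x²-1| + C, which differentiates to 2x/(x²-1) rather than 1/(x²-1).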

  12. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and may threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced human-error prevention techniques and has applied them to the main areas of plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering both the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. The survey analysis showed that employees' understanding and utilization of the techniques were generally high, and that the level of employee training and its effect on actual work were in good condition. Employees also answered that the root causes of human error lay in the working environment, including tight schedules, manpower shortages, and excessive workload, rather than in personal negligence or lack of personal knowledge; consideration of the working environment is therefore certainly needed. Based on this survey, the best methods of preventing human error at present are personal equipment, more substantial training and education, personal mental-health checks before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhanced job-site review. However, the most important and basic factors in preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and their supervisors.

  13. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  14. Incorporating measurement error in n = 1 psychological autoregressive modeling

    Science.gov (United States)

    Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.

    2015-01-01

    Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
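
    The bias discussed above is easy to reproduce. The sketch below (with illustrative parameter values; this is not the authors' models or data) simulates a latent AR(1) process, adds white measurement noise, and shows that the naive lag-1 estimate of the autoregressive parameter is attenuated.

      import numpy as np

      rng = np.random.default_rng(0)
      phi, n = 0.6, 100_000                  # true AR(1) parameter, series length
      x = np.zeros(n)
      for t in range(1, n):                  # latent AR(1) process
          x[t] = phi * x[t - 1] + rng.normal()
      y = x + rng.normal(scale=1.0, size=n)  # observed series with measurement error

      # Naive AR(1) estimate: slope of the lag-1 regression on the noisy series.
      phi_naive = np.cov(y[1:], y[:-1])[0, 1] / np.var(y[:-1])
      print(round(phi_naive, 2))             # ~0.37, well below the true 0.6

    Here roughly 40% of the observed variance is measurement error and the estimate shrinks accordingly, which is the pattern the AR+WN and ARMA models are designed to correct.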

  15. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order buildings). A systemic model of error causation is presented and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and thereby ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  16. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  17. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  19. Test and cull of high risk Coxiella burnetii infected pregnant dairy goats is not feasible due to poor test performance.

    Science.gov (United States)

    Hogerwerf, Lenny; Koop, Gerrit; Klinkenberg, Don; Roest, Hendrik I J; Vellema, Piet; Nielen, Mirjam

    2014-05-01

    A major human Q fever epidemic occurred in The Netherlands during 2007-2009. In response, all pregnant goats from infected herds were culled before the 2010 kidding season without individual testing. The aim of this study was to assess whether high risk animals from recently infected naive herds can be identified by diagnostic testing. Samples of uterine fluid, milk and vaginal mucus from 203 euthanized pregnant goats were tested by PCR or ELISA. The results suggest that testing followed by culling of only the high risk animals is not a feasible method for protecting public health, mainly due to the low specificity of the tests and variability between herds. The risk of massive bacterial shedding during abortion or parturition can only be prevented by removal of all pregnant animals from naive recently infected herds. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  1. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  2. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes, drawn from a large number of human-error-related events, demonstrates that defects in operation/maintenance procedures, workplace factors, communication, and training practices are the primary root causes, while omissions, transpositions, and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested.

  3. Patients struggle to access effective health care due to ongoing violence, distance, costs and health service performance in Afghanistan.

    Science.gov (United States)

    Nic Carthaigh, Niamh; De Gryse, Benoit; Esmati, Abdul Sattar; Nizar, Barak; Van Overloop, Catherine; Fricke, Renzo; Bseiso, Jehan; Baker, Corinne; Decroo, Tom; Philips, Mit

    2015-05-01

    The Afghan population suffers from a long standing armed conflict. We investigated patients' experiences of their access to and use of the health services. Data were collected in four clinics from different provinces. Mixed methods were applied. The questions focused on access obstacles during the current health problem and health seeking behaviour during a previous illness episode of a household member. To access the health facilities 71.8% (545/759) of patients experienced obstacles. The combination of long distances, high costs and the conflict deprived people of life-saving healthcare. The closest public clinics were underused due to perceptions regarding their lack of availability or quality of staff, services or medicines. For one in five people, a lack of access to health care had resulted in death among family members or close friends within the last year. Violence continues to affect daily life and access to healthcare in Afghanistan. Moreover, healthcare provision is not adequately geared to meet medical and emergency needs. Impartial healthcare tailored to the context will be vital to increase access to basic and life-saving healthcare. © The Author 2014. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.

  4. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors.

    Directory of Open Access Journals (Sweden)

    Peter R Murphy

    Full Text Available Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES.

  5. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, safest, and cheapest means of correcting refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who wish to be less dependent on spectacles or contact lenses. Until recently, radial keratotomy (RK) was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new possibilities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  6. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used approach to speed measurement with incremental encoders. However, the inherent encoder optical grating error...
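
    For context, here is a minimal sketch of the standard M/T speed computation (the baseline method, not the paper's self-adaptive error-elimination algorithm; the pulse counts, encoder resolution, and clock frequency are assumed values). Over a gate synchronized to encoder edges, m1 encoder pulses and m2 reference-clock ticks are counted.

      def mt_speed_rpm(m1, m2, pulses_per_rev=2048, f_clk=1.0e6):
          # M/T method: the gate time is m2 ticks of the f_clk reference clock,
          # so speed [rev/s] = m1 / (pulses_per_rev * m2 / f_clk).
          t_gate = m2 / f_clk                            # gate time in seconds
          return 60.0 * m1 / (pulses_per_rev * t_gate)   # speed in rpm

      # e.g. 512 encoder pulses counted over 100 ms of the 1 MHz clock:
      print(mt_speed_rpm(m1=512, m2=100_000))            # -> 150.0 rpm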

  7. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  8. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
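
    The attenuation that such corrections target is easy to demonstrate by simulation. The sketch below uses assumed normal distributions (for the convenience of a closed-form check) and is not the authors' correction method, which notably does not require normality.

      import numpy as np

      rng = np.random.default_rng(1)
      n, delta, sigma_e = 100_000, 1.0, 1.0    # sample size, case shift, error SD

      controls = rng.normal(0.0, 1.0, n)       # true biomarker, controls
      cases = rng.normal(delta, 1.0, n)        # true biomarker, cases

      def auc(case_vals, control_vals):
          # Mann-Whitney estimate of AUC: P(case value > control value).
          grid = np.sort(control_vals)
          return np.searchsorted(grid, case_vals).mean() / len(control_vals)

      auc_true = auc(cases, controls)          # ~0.76 = Phi(delta / sqrt(2))
      auc_noisy = auc(cases + rng.normal(0, sigma_e, n),
                      controls + rng.normal(0, sigma_e, n))
      print(auc_true, auc_noisy)               # noisy ~0.69 = Phi(delta / 2)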

  9. Sensitivity of the APSIM/ORYZA model due to estimation errors in solar radiation

    Directory of Open Access Journals (Sweden)

    Alexandre Bryan Heinemann

    2012-01-01

    Full Text Available Crop models are ideally suited to quantify existing climatic risks. However, they require historic climate data as input. While daily temperature and rainfall data are often available, the lack of observed solar radiation (Rs) data severely limits site-specific crop modelling. The objective of this study was to estimate Rs using five air-temperature-based solar radiation models and to quantify the propagation of errors in the simulated radiation into several APSIM/ORYZA crop model seasonal outputs: yield, biomass, leaf area index (LAI), and total accumulated solar radiation (SRA) during the crop cycle. The accuracy of the five models for estimating daily solar radiation was similar, and it was not substantially different among sites. For water-limited environments (no irrigation), the crop model outputs yield, biomass, and LAI were not sensitive to the uncertainties in the radiation models studied here.
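
    A representative example of an air-temperature-based radiation model of this kind is the Hargreaves-Samani relation; it is offered here as an illustrative sketch and is not necessarily one of the five models evaluated in the paper.

      import math

      def hargreaves_rs(tmax_c, tmin_c, ra, k_rs=0.16):
          # Hargreaves-Samani estimate: Rs = k_rs * sqrt(Tmax - Tmin) * Ra,
          # where Ra is extraterrestrial radiation (MJ m^-2 day^-1) computed
          # from latitude and day of year; k_rs ~0.16 inland, ~0.19 coastal.
          return k_rs * math.sqrt(tmax_c - tmin_c) * ra

      # e.g. a 10 degC diurnal range with Ra = 35 MJ m^-2 day^-1:
      print(hargreaves_rs(32.0, 22.0, 35.0))   # ~17.7 MJ m^-2 day^-1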

  10. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc educational level (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  11. Color extended visual cryptography using error diffusion.

    Science.gov (United States)

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares due to the different color structures. Some methods for color visual cryptography are not satisfactory, producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
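
    Error diffusion itself is standard halftoning. A minimal grayscale Floyd-Steinberg sketch is given below for illustration; the paper applies diffusion per color channel together with VIP synchronization, which is not shown here.

      import numpy as np

      def floyd_steinberg(img):
          # Binarize a float image in [0, 1], pushing each pixel's quantization
          # error onto its unprocessed neighbors with the classic
          # 7/16, 3/16, 5/16, 1/16 weights.
          out = img.astype(float).copy()
          h, w = out.shape
          for y in range(h):
              for x in range(w):
                  old = out[y, x]
                  new = 1.0 if old >= 0.5 else 0.0
                  out[y, x] = new
                  err = old - new
                  if x + 1 < w:
                      out[y, x + 1] += err * 7 / 16
                  if y + 1 < h and x > 0:
                      out[y + 1, x - 1] += err * 3 / 16
                  if y + 1 < h:
                      out[y + 1, x] += err * 5 / 16
                  if y + 1 < h and x + 1 < w:
                      out[y + 1, x + 1] += err * 1 / 16
          return out

      halftone = floyd_steinberg(np.full((64, 64), 0.3))  # ~30% of pixels become 1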

  12. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic statuses, and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. Research estimated the number of people with vision impairment due to uncorrected distance refractive error at 107.8 million,1 and the number of people affected by uncorrected near refractive error at 517 million, giving a total of 624.8 million people.

  13. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  14. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
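
    For reference, the first-order propagation-of-error formula underlying these topics, for a function f of independently measured quantities x_i, is

      \[
      \sigma_f^2 \;\approx\; \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{\!2} \sigma_{x_i}^2 ,
      \]

    with cross-covariance terms added when the measured values are correlated, as in the materials-balance application discussed in the chapter.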

  15. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  16. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems.

  17. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.

  18. Impact of error fields on equilibrium configurations in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Barbato, Lucio [DIEI, Università di Cassino and Lazio Meridionale, Cassino (Italy); Formisano, Alessandro, E-mail: alessandro.formisano@unina2.it [Department of Industrial and Information Engineering, Seconda Univ. di Napoli, Aversa (Italy); Martone, Raffaele [Department of Industrial and Information Engineering, Seconda Univ. di Napoli, Aversa (Italy); Villone, Fabio [DIEI, Università di Cassino and Lazio Meridionale, Cassino (Italy)

    2015-10-15

    Highlights: • Error fields (EF) are discrepancies from the nominal magnetic field, and may alter plasma behaviour. • They are due to, e.g., coil manufacturing and assembly errors. • The EF impact on ITER equilibria is analyzed using numerical simulations. • A high-accuracy 3D field computation module and a Grad-Shafranov solver are used. • The size of the deformations allows a linearized model to be used and a sensitivity analysis to be performed. - Abstract: Discrepancies between design and actual magnetic field maps in tokamaks are unavoidable, and are associated with a number of causes, e.g. manufacturing and assembly tolerances on magnets, the presence of feeders and joints, and non-symmetric iron parts. Such error fields may drive the plasma to a loss of stability, and must be carefully controlled using suitable correction coils. Even when kept below the safety threshold, however, error fields may alter the behavior of the plasma. The present paper, using as an example the error fields induced by tolerances in the toroidal field coils, quantifies their effect on the plasma boundary shape in equilibrium configurations. In particular, a procedure able to compute the shape perturbations due to given deformations of the coils has been set up and used to carry out a thorough statistical analysis of the relationship between error fields and shape perturbations.

  19. On-Error Training (Book Excerpt).

    Science.gov (United States)

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  20. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  1. Slope Error Measurement Tool for Solar Parabolic Trough Collectors: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Stynes, J. K.; Ihas, B.

    2012-04-01

    The National Renewable Energy Laboratory (NREL) has developed an optical measurement tool for parabolic solar collectors that measures the combined errors due to absorber misalignment and reflector slope error. The combined absorber-alignment and reflector-slope errors are measured using a digital camera to photograph the reflected image of the absorber in the collector. Previous work found the reflector slope errors from the reflected image of the absorber together with an independent measurement of the absorber location; the accuracy of the reflector slope error measurement is thus dependent on the accuracy of the absorber location measurement. By measuring the combined reflector-absorber errors, the uncertainty in the absorber location measurement is eliminated. The related performance merit, the intercept factor, depends on the combined effects of the absorber alignment and reflector slope errors, so measuring the combined effect provides a simpler measurement and a more accurate input to the intercept factor estimate. The minimal equipment and setup required for this measurement technique make it ideal for field measurements.

  2. Burnout, engagement and resident physicians' self-reported errors.

    Science.gov (United States)

    Prins, J T; van der Heijden, F M M A; Hoekstra-Weebers, J E H M; Bakker, A B; van de Wiel, H B M; Jacobs, B; Gazendam-Donofrio, S M

    2009-12-01

    Burnout is a work-related syndrome that may negatively affect more than just the resident physician. On the other hand, engagement has been shown to protect employees; it may also positively affect the patient care that the residents provide. Little is known about the relationship between residents' self-reported errors and burnout and engagement. In our national study that included all residents and physicians in The Netherlands, 2115 questionnaires were returned (response rate 41.1%). The residents reported on burnout (Maslach Burnout Inventory-Health and Social Services), engagement (Utrecht Work Engagement Scale) and self-assessed patient care practices (six items, two factors: errors in action/judgment, errors due to lack of time). Ninety-four percent of the residents reported making one or more mistakes without negative consequences for the patient during their training. Seventy-one percent reported performing procedures for which they did not feel properly trained. More than half (56%) of the residents stated they had made a mistake with a negative consequence. Seventy-six percent felt they had fallen short in the quality of care they provided on at least one occasion. Men reported more errors in action/judgment than women. Significant effects of specialty and clinical setting were found on both types of errors. Residents with burnout reported significantly more errors, whereas engaged residents reported fewer errors; these findings underline the importance of preventing burnout and of keeping residents engaged in their work.

  3. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using hierarchical task analysis (HTA), and the related possible errors were assessed using ATHEANA. Results: The demining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in demining operations can be attributed to various factors such as poor weather and operating conditions (e.g., outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in demining operations, the aforementioned factors should be managed properly.

  4. Percutaneous vertebroplasty performed with an 18 G needle for the treatment of severe compression fracture of cervical vertebral body due to malignancy

    International Nuclear Information System (INIS)

    Chen Long; Ni Caifang; Wang Zhentang; Liu Yizhi; Jin Yonghai; Zhu Xiaoli; Zou Jianwei; Xiao Xiangsheng

    2010-01-01

    Objective: To investigate the clinical feasibility and efficacy of percutaneous vertebroplasty performed with an 18G needle for the treatment of severe compression fracture of the cervical vertebral body due to malignancy. Methods: During the period 2006-2010, percutaneous vertebroplasty was performed in 10 patients with severe compression fractures of the cervical vertebral body due to metastatic lesions. A total of 12 diseased vertebral bodies were detected, distributed among C4 (n = 3), C5 (n = 3), C6 (n = 4) and C7 (n = 2). Under DSA guidance, an 18G needle was advanced into the target vertebral body and polymethylmethacrylate bone cement was then injected. A one-month follow-up was conducted. Results: Technical success of both needle puncture and bone cement injection was achieved in all patients. The mean amount of bone cement injected into each diseased vertebra was 2.2 ml (range 1.5-3.2 ml). Marked pain relief was quickly obtained in all 10 patients. No major complications occurred in this series, except for asymptomatic perivertebral bone cement leakage in 4 vertebral bodies. Conclusion: Percutaneous vertebroplasty performed with an 18G needle is a safe and effective technique for the treatment of severe compression fracture of the cervical vertebral body due to malignancy. (authors)

  5. Due diligence

    International Nuclear Information System (INIS)

    Sanghera, G.S.

    1999-01-01

    The Occupational Health and Safety (OHS) Act requires that every employer ensure the health and safety of workers in the workplace. Issues regarding workplace practices and how they should reflect the standard of due diligence were discussed. Due diligence was described as the need for employers to identify hazards in the workplace and to take active steps to protect workers from potentially dangerous incidents. The paper discussed various aspects of due diligence, including policy, training, procedures, measurement and enforcement. The consequences of contravening the OHS Act were also described.

  6. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    Science.gov (United States)

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  7. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
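
    The attenuation bias that this record's method corrects can be demonstrated in a few lines. The following is a minimal simulation sketch, not the authors' joint-estimating-equation estimator: it fits a naive median regression on a noisy covariate W in place of the true covariate X, using statsmodels' QuantReg; all parameter values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch (not the paper's estimator): show the bias that covariate
# measurement error induces in naive quantile regression.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0, 1, n)            # true covariate (unobserved in practice)
w = x + rng.normal(0, 0.8, n)      # error-prone observed covariate W
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

for cov, label in [(x, "true X"), (w, "noisy W")]:
    fit = sm.QuantReg(y, sm.add_constant(cov)).fit(q=0.5)
    print(f"median-regression slope on {label}: {fit.params[1]:.3f}")

# The slope on W is attenuated toward zero, roughly by the reliability
# ratio var(X) / (var(X) + var(U)); this is the bias the paper's joint
# estimating equations are designed to correct.
```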

  8. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes and presents simulation results showing improved bit error rate performance at the cost of high system complexity, due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book then proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  9. Defining a roadmap for harmonizing quality indicators in Laboratory Medicine: a consensus statement on behalf of the IFCC Working Group "Laboratory Error and Patient Safety" and EFLM Task and Finish Group "Performance specifications for the extra-analytical phases".

    Science.gov (United States)

    Sciacovelli, Laura; Panteghini, Mauro; Lippi, Giuseppe; Sumarac, Zorica; Cadamuro, Janne; Galoro, César Alex De Olivera; Pino Castro, Isabel Garcia Del; Shcolnik, Wilson; Plebani, Mario

    2017-08-28

    Improving the quality of laboratory testing requires a deep understanding of the many vulnerable steps involved in the total examination process (TEP), along with the identification of a hierarchy of risks and challenges that need to be addressed. From this perspective, the Working Group "Laboratory Errors and Patient Safety" (WG-LEPS) of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) is focusing its activity on the implementation of an efficient tool for obtaining meaningful information on the risk of errors developing throughout the TEP, and for establishing reliable information about error frequencies and their distribution. More recently, the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) has created the Task and Finish Group "Performance specifications for the extra-analytical phases" (TFG-PSEP) for defining performance specifications for the extra-analytical phases. Both the IFCC and EFLM groups are working to provide laboratories with a system to evaluate their performance and to recognize the critical aspects where improvement actions are needed. A Consensus Conference was organized in Padova, Italy, in 2016 in order to bring together all the experts and interested parties to achieve a consensus for effective harmonization of quality indicators (QIs). A general agreement was achieved, and the main outcomes have been the release of a new version of the model of quality indicators (MQI), the approval of a criterion for establishing performance specifications, and the definition of the type of information that should be provided within the report to the clinical laboratories participating in the QIs project.

  10. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of

  11. H.264/AVC error resilience tools suitable for 3G mobile video services

    Institute of Scientific and Technical Information of China (English)

    LIU Lin; YE Xiu-zi; ZHANG San-yuan; ZHANG Yin

    2005-01-01

    The emergence of the third generation mobile system (3G) makes video transmission in wireless environments possible, and the latest 3GPP/3GPP2 standards require that 3G terminals support H.264/AVC. Due to the high packet loss rate in wireless environments, error resilience is necessary for 3G terminals. Moreover, because of hardware restrictions, 3G mobile terminals support only part of the H.264/AVC error resilience tools. This paper analyzes various error resilience tools and their functions, and presents two error resilience strategies for 3G mobile streaming video services and mobile conversational services. The performance of the proposed error resilience strategies was tested using off-line common test conditions. Experiments showed that the proposed strategies yield reasonably satisfactory results.

  12. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  13. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in, or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of human error generation and the ways to diminish its effects.

  14. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-06-28

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, an effective mathematical model for them is needed. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formula is extended to quantify FSO system performance with selection and switched-and-stay diversity.
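
    The Beckmann model itself is easy to explore numerically: it is the distribution of the radial displacement of a beam whose horizontal and vertical jitters are independent Gaussians with their own means and variances. The sketch below is a Monte Carlo illustration of that model only, not the paper's capacity derivation; all numerical values (boresight errors, jitter, beam waist) are assumptions for illustration.

```python
import numpy as np

# Monte Carlo sketch of the generalized (Beckmann) pointing-error model:
# r = sqrt(x^2 + y^2), with x and y independent Gaussians.
rng = np.random.default_rng(1)
mu_x, mu_y = 0.1, 0.05        # assumed boresight errors (m)
sig_x, sig_y = 0.3, 0.2       # assumed jitter standard deviations (m)
n = 1_000_000

x = rng.normal(mu_x, sig_x, n)
y = rng.normal(mu_y, sig_y, n)
r = np.hypot(x, y)            # Beckmann-distributed radial displacement

# For a Gaussian beam, collected power roughly scales as exp(-2 r^2 / w^2).
w_z = 1.0                     # assumed beam waist at the receiver (m)
h_p = np.exp(-2 * r**2 / w_z**2)
print(f"mean pointing-loss factor: {h_p.mean():.4f}")
```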

  15. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  16. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of an MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as random leaf-positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ~ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 < σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where ALPO is the average leaf-pair opening.
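
    The RLP error model described above can be sampled directly with a truncated normal distribution. The following is a hedged numerical sketch of that sampling step only, not the authors' full analytic treatment; the values of σ, Δx0 and ALPO are illustrative assumptions rather than data from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

# Sketch of the RLP error model: leaf-positional errors follow a normal
# distribution with standard deviation sigma, truncated at +/- dx0 (~TOL).
sigma, dx0 = 0.5, 1.0          # mm, assumed values
alpo = 20.0                    # mm, assumed average leaf-pair opening

a, b = -dx0 / sigma, dx0 / sigma   # truncation bounds in standard units
errors = truncnorm.rvs(a, b, loc=0.0, scale=sigma, size=1_000_000,
                       random_state=2)

# The abstract states the average fluence error scales with the ratio of
# the positional error scale to ALPO.
print(f"mean |error| / ALPO = {np.mean(np.abs(errors)) / alpo:.4%}")
```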

  17. Investigation the gas film in micro scale induced error on the performance of the aerostatic spindle in ultra-precision machining

    Science.gov (United States)

    Chen, Dongju; Huo, Chen; Cui, Xianxian; Pan, Ri; Fan, Jinwei; An, Chenhui

    2018-05-01

    The objective of this work is to study the influence of the error induced by the micro-scale gas film on the static and dynamic behavior of a shaft supported by aerostatic bearings. Static and dynamic models of the aerostatic bearing are built from the stiffness and damping calculated at the micro scale. The static simulation shows that the deformation of the aerostatic spindle system is decreased at the micro scale. For the dynamic behavior, both the stiffness and damping in the axial and radial directions are increased at the micro scale. Experiments on the stiffness and rotation error of the spindle show that the shaft deflection predicted with the micro-scale parameters is very close to the measured deviation of the spindle system. The frequency content obtained in the transient analysis is similar to that of the actual test, and both are higher than the results from the traditional model that does not consider the micro-scale factor. It can therefore be concluded that values accounting for the micro-scale factor are closer to the actual working behavior of the aerostatic spindle system. These results can provide a theoretical basis for the design and machining processes of machine tools.

  18. Mean-Square Error Due to Gradiometer Field Measuring Devices

    Science.gov (United States)

    1991-06-01

    Correcting the measurements would require convolving the gradiometer data with the inverse transform of 1/T(α, β), applying an appropriate correction in the transform domain. This is not possible because the inverse does not exist and, because 1/T(α, β) is a high-pass function, its use in an inverse transform technique is impractical.

  19. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    OpenAIRE

    Wild, Oliver; Prather, Michael J

    2006-01-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quant...

  20. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects, including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often, drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse or dilution error), incorrect administration route, underdosing and omission are common causes of medication errors that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  1. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  2. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Containment Performance Evaluation of a Sodium Fire Event Due to Air Ingress into the Cover Gas Region of the Reactor Vessel in the PGSFR

    International Nuclear Information System (INIS)

    Ahn, Sang June; Chang, Won-Pyo; Kang, Seok Hun; Choi, Chi-Woong; Yoo, Jin; Lee, Kwi Lim; Jeong, Jae-Ho; Lee, Seung Won; Jeong, Taekyeong; Ha, Kwi-Seok

    2015-01-01

    Unlike the coolant of a light water reactor, sodium as a reactor coolant reacts violently with oxygen in the containment atmosphere. The heat generated by this combustion reaction increases the temperature and pressure of the containment atmosphere, threatening the structural integrity of the containment building, which is the final radiological defense barrier. A sodium fire event in the containment due to air ingress into the cover gas region of the reactor vessel is classified as one of the design basis events in the PGSFR. This event originates from a leak or crack on the reactor upper closure head surface and is accompanied by a release of radiological fission products into the containment. In this paper, an evaluation of the sodium fire and its radiological influence due to air ingress into the cover gas region of the reactor vessel is described. The CONTAIN-LMR, MACCS-II and ORIGEN-II codes are used to evaluate this event, and the containment performance and radiological influence of the sodium pool fire event are assessed. In the thermal-hydraulic analysis, the single-cell containment model yields the most conservative result. In this event, the maximum pressure and temperature in the containment are calculated to be 0.185 MPa and 280.0 °C, respectively. The radiological doses at the EAB and LPZ are below the acceptance criteria specified in 10CFR100.

  5. Containment Performance Evaluation of a Sodium Fire Event Due to Air Ingress into the Cover Gas Region of the Reactor Vessel in the PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang June; Chang, Won-Pyo; Kang, Seok Hun; Choi, Chi-Woong; Yoo, Jin; Lee, Kwi Lim; Jeong, Jae-Ho; Lee, Seung Won; Jeong, Taekyeong; Ha, Kwi-Seok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Unlike the coolant of a light water reactor, sodium as a reactor coolant reacts violently with oxygen in the containment atmosphere. The heat generated by this combustion reaction increases the temperature and pressure of the containment atmosphere, threatening the structural integrity of the containment building, which is the final radiological defense barrier. A sodium fire event in the containment due to air ingress into the cover gas region of the reactor vessel is classified as one of the design basis events in the PGSFR. This event originates from a leak or crack on the reactor upper closure head surface and is accompanied by a release of radiological fission products into the containment. In this paper, an evaluation of the sodium fire and its radiological influence due to air ingress into the cover gas region of the reactor vessel is described. The CONTAIN-LMR, MACCS-II and ORIGEN-II codes are used to evaluate this event, and the containment performance and radiological influence of the sodium pool fire event are assessed. In the thermal-hydraulic analysis, the single-cell containment model yields the most conservative result. In this event, the maximum pressure and temperature in the containment are calculated to be 0.185 MPa and 280.0 °C, respectively. The radiological doses at the EAB and LPZ are below the acceptance criteria specified in 10CFR100.

  6. Errors on interrupter tasks presented during spatial and verbal working memory performance are linearly linked to large-scale functional network connectivity in high temporal resolution resting state fMRI.

    Science.gov (United States)

    Magnuson, Matthew Evan; Thompson, Garth John; Schwarb, Hillary; Pan, Wen-Ju; McKinley, Andy; Schumacher, Eric H; Keilholz, Shella Dawn

    2015-12-01

    The brain is organized into networks composed of spatially separated anatomical regions exhibiting coherent functional activity over time. Two of these networks (the default mode network, DMN, and the task positive network, TPN) have been implicated in the performance of a number of cognitive tasks. To directly examine the stable relationship between network connectivity and behavioral performance, high temporal resolution functional magnetic resonance imaging (fMRI) data were collected during the resting state, and behavioral data were collected from 15 subjects on different days, exploring verbal working memory, spatial working memory, and fluid intelligence. Sustained attention performance was also evaluated in a task interleaved between resting state scans. Functional connectivity within and between the DMN and TPN was related to performance on these tasks. Decreased TPN resting state connectivity was found to significantly correlate with fewer errors on an interrupter task presented during a spatial working memory paradigm and decreased DMN/TPN anti-correlation was significantly correlated with fewer errors on an interrupter task presented during a verbal working memory paradigm. A trend for increased DMN resting state connectivity to correlate to measures of fluid intelligence was also observed. These results provide additional evidence of the relationship between resting state networks and behavioral performance, and show that such results can be observed with high temporal resolution fMRI. Because cognitive scores and functional connectivity were collected on nonconsecutive days, these results highlight the stability of functional connectivity/cognitive performance coupling.

  7. Combining principles of Cognitive Load Theory and diagnostic error analysis for designing job aids: Effects on motivation and diagnostic performance in a process control task.

    Science.gov (United States)

    Kluge, Annette; Grauel, Britta; Burkolter, Dina

    2013-03-01

    Two studies are presented in which the design of a procedural aid and the impact of an additional decision aid for process control were assessed. In Study 1, a procedural aid was developed that avoids imposing unnecessary extraneous cognitive load on novices when controlling a complex technical system. This newly designed procedural aid positively affected germane load, attention, satisfaction, motivation, knowledge acquisition and diagnostic speed for novel faults. In Study 2, the effect of a decision aid for use before the procedural aid was investigated, which was developed based on an analysis of diagnostic errors committed in Study 1. Results showed that novices were able to diagnose both novel faults and practised faults, and were even faster at diagnosing novel faults. This research contributes to the question of how to optimally support novices in dealing with technical faults in process control. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Use of Balance Calibration Certificate to Calculate the Errors of Indication and Measurement Uncertainty in Mass Determinations Performed in Medical Laboratories

    Directory of Open Access Journals (Sweden)

    Adriana VÂLCU

    2011-09-01

    Full Text Available Based on the reference document, the article proposes a way to calculate the errors of indication and the associated measurement uncertainties using the general information provided by the calibration certificate of a balance (a non-automatic weighing instrument, NAWI) used in the medical field. The paper may also be considered a useful guideline for: operators working in laboratories accredited in medical (or other) fields where weighing operations are part of their testing activities; test houses, laboratories, or manufacturers using calibrated non-automatic weighing instruments for measurements relevant to the quality of production subject to QM requirements (e.g. the ISO 9000 series, ISO 10012, ISO/IEC 17025); bodies accrediting laboratories; and laboratories accredited for the calibration of NAWI. The article refers only to electronic weighing instruments with a maximum capacity of up to 30 kg. An example calculation is presented, starting from the results provided by a calibration certificate.
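
    A typical calculation of this kind combines the certificate uncertainty with resolution and repeatability contributions in quadrature, GUM-style. The sketch below is a generic illustration under assumed values; the component names and magnitudes are not taken from the article.

```python
import math

# Illustrative (assumed) uncertainty budget for the error of indication
# E = I - m_ref of a calibrated balance, combined in quadrature.
u_cal = 0.10                    # mg, from the calibration certificate
u_res = 0.01 / math.sqrt(12)    # mg, resolution (rectangular distribution)
u_rep = 0.05                    # mg, repeatability of repeated readings

u_c = math.sqrt(u_cal**2 + u_res**2 + u_rep**2)  # combined standard unc.
U = 2 * u_c                                      # expanded unc., k = 2

indication, reference = 100.23, 100.10           # mg, assumed readings
error_of_indication = indication - reference
print(f"E = {error_of_indication:.3f} mg, U(k=2) = {U:.3f} mg")
```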

  9. Errors and untimely radiodiagnosis of occupational diseases

    International Nuclear Information System (INIS)

    Sokolik, L.I.; Shkondin, A.N.; Sergienko, N.S.; Doroshenko, A.N.; Shumakov, A.V.

    1987-01-01

    Most errors in the diagnosis of occupational diseases occur due to hyperdiagnosis (37%) and because data from dynamic clinico-roentgenological examinations were not considered (23%). Defects in the organization of prophylactic fluorography result in untimely diagnosis of dust-induced occupational diseases. Errors also occurred because working conditions were not always taken into account and because atypical development and course were not always analyzed.

  10. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.

    2014-04-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
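
    The effect described above can be probed with a toy experiment, shown below. This is not the paper's analysis; it simply injects eps-sized perturbations into the stages of classical RK4 on the linear test problem y' = λy and observes how the final solution drifts, with all parameter values assumed for illustration.

```python
import numpy as np

# Toy probe of internal error propagation: perturb each RK4 stage by
# eps-sized noise and compare against the unperturbed solution.
def rk4_step(f, t, y, h, eps, rng):
    k1 = f(t, y) + eps * rng.standard_normal()
    k2 = f(t + h/2, y + h/2 * k1) + eps * rng.standard_normal()
    k3 = f(t + h/2, y + h/2 * k2) + eps * rng.standard_normal()
    k4 = f(t + h, y + h * k3) + eps * rng.standard_normal()
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

lam = -4.0
f = lambda t, y: lam * y
h, n = 0.1, 100
rng = np.random.default_rng(3)

for eps in (0.0, 1e-10, 1e-8):
    y = 1.0
    for i in range(n):
        y = rk4_step(f, i * h, y, h, eps, rng)
    print(f"eps={eps:.0e}  y(T)={y:.12e}")
# The drift of y(T) with eps indicates how stage errors are amplified;
# the paper shows this amplification depends on implementation details.
```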

  11. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  12. Task failure during exercise to exhaustion in normoxia and hypoxia is due to reduced muscle activation caused by central mechanisms while muscle metaboreflex does not limit performance

    Directory of Open Access Journals (Sweden)

    Rafael eTorres-Peralta

    2016-01-01

    Full Text Available To determine whether task failure during incremental exercise to exhaustion (IE) is principally due to reduced neural drive and increased metaboreflex activation, eleven men (22±2 years) performed a 10 s control isokinetic sprint (IS; 80 rpm) after a short warm-up. This was immediately followed by an IE in normoxia (Nx, PIO2: 143 mmHg) and hypoxia (Hyp, PIO2: 73 mmHg) in random order, separated by a 120 min resting period. At exhaustion, the circulation of both legs was occluded instantaneously (300 mmHg) for 10 or 60 s to impede recovery and increase metaboreflex activation. This was immediately followed by an IS with open circulation. Electromyographic recordings were obtained from the vastus medialis and lateralis. Muscle biopsies and blood gases were obtained in separate experiments. During the last 10 s of the IE, pulmonary ventilation, VO2, power output and muscle activation were lower in hypoxia than in normoxia, while pedaling rate was similar. Compared to the control sprint, performance (IS-Wpeak) was reduced to a greater extent after the IE-Nx (11% lower, P<0.05) than after the IE-Hyp. The root mean square (EMGRMS) was reduced by 38 and 27% during the IS performed after IE-Nx and IE-Hyp, respectively (Nx vs. Hyp: P<0.05). Post-ischemia IS-EMGRMS values were higher than during the last 10 s of the IE. Sprint exercise mean (IS-MPF) and median (IS-MdPF) power frequencies, and burst duration, were more reduced after IE-Nx than after IE-Hyp (P<0.05). Despite increased muscle lactate accumulation, acidification, and metaboreflex activation from 10 to 60 s of ischemia, IS-Wmean (+23%) and burst duration (+10%) increased, while IS-EMGRMS decreased (-24%, P<0.05), with IS-MPF and IS-MdPF remaining unchanged. In conclusion, close to task failure, muscle activation is lower in hypoxia than in normoxia. Task failure is predominantly caused by central mechanisms, which recover to a great extent within one minute, even when the legs remain ischemic. There is dissociation between the recovery of

  13. Failures without errors: quantification of context in HRA

    International Nuclear Information System (INIS)

    Fujita, Yushi; Hollnagel, Erik

    2004-01-01

    PSA-cum-human reliability analysis (HRA) has traditionally used individual human actions, hence individual 'human errors', as a meaningful unit of analysis. This is inconsistent with the current understanding of accidents, which points out that the notion of 'human error' is ill defined and that adverse events are more often due to working conditions than to people. Several HRA approaches, such as ATHEANA and CREAM, have recognised this conflict and proposed ways to deal with it. This paper describes an improvement of the basic screening method in CREAM, whereby a rating of the performance conditions can be used to calculate a Mean Failure Rate directly, without invoking the notion of human error.

  14. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
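
    The TER/LER contrast described above comes down to a one-line difference in the weight-update rule. The sketch below is a minimal illustration under assumed parameter values (learning rate, trial count), not a reimplementation of the authors' model comparison: a two-cue compound AB is paired with an outcome λ = 1 under each rule.

```python
import numpy as np

# Contrast between total error reduction (TER, Rescorla-Wagner-style) and
# local error reduction (LER) updates for a compound AB -> outcome lambda.
alpha, lam, trials = 0.1, 1.0, 50
w_ter = np.zeros(2)   # weights for cues A and B under TER
w_ler = np.zeros(2)   # weights for cues A and B under LER

for _ in range(trials):
    # TER: each cue learns from the discrepancy with the SUMMED prediction.
    w_ter += alpha * (lam - w_ter.sum())
    # LER: each cue learns from the discrepancy with ITS OWN prediction.
    w_ler += alpha * (lam - w_ler)

print("TER weights:", w_ter)   # converge to lam/2 each (cues share credit)
print("LER weights:", w_ler)   # converge to lam each (no cue competition)
```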

  15. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lean to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  16. Reader error during CT colonography: causes and implications for training

    International Nuclear Information System (INIS)

    Slater, Andrew; Tam, Emily; Gartner, Louise; Scarth, Julia; Peiris, Chand; Gupta, Arun; Marshall, Michele; Burling, David; Taylor, Stuart A.; Halligan, Steve

    2006-01-01

    This study investigated the variability in baseline computed tomography colonography (CTC) performance using untrained readers by documenting sources of error to guide future training requirements. Twenty CTC endoscopically validated data sets containing 32 polyps were consensus read by three unblinded radiologists experienced in CTC, creating a reference standard. Six readers without prior CTC training [four residents and two board-certified subspecialty gastrointestinal (GI) radiologists] read the 20 cases. Readers drew a region of interest (ROI) around every area they considered a potential colonic lesion, even if subsequently dismissed, before creating a final report. Using this final report, reader ROIs were classified as true positive detections, true negatives correctly dismissed, true detections incorrectly dismissed (i.e., classification errors), or perceptual errors. Detection of polyps 1-5 mm, 6-9 mm, and ≥10 mm ranged from 7.1% to 28.6%, 16.7% to 41.7%, and 16.7% to 83.3%, respectively. There was no significant difference in polyp detection or false positives between the GI radiologists and the residents (p=0.67, p=0.4, respectively). Most missed polyps were due to failure of detection rather than characterization (range 82-95%). Untrained reader performance is variable but generally poor. Most missed polyps are due to perceptual error rather than characterization error, suggesting that basic training should focus heavily on lesion detection. (orig.)

  17. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, in the case of kinematic probes, the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield significant benefits in this case.
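
    The decomposition the record describes can be sketched numerically: for each probing direction, the mean triggering error is the systematic (compensable) part, and the spread of repeated probes is the random part. The code below uses a synthetic lobing pattern as an assumed stand-in for measured pre-travel data, not the authors' measurements.

```python
import numpy as np

# Sketch: separate systematic pre-travel variation from random
# repeatability in probe triggering errors (synthetic, assumed data).
rng = np.random.default_rng(4)
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pretravel = 2.0 + 1.5 * np.cos(2 * angles)   # um, assumed lobing pattern
reps = pretravel[:, None] + rng.normal(0, 0.3, (12, 25))  # 25 probes/dir

systematic = reps.mean(axis=1)           # direction-dependent component
random_sd = reps.std(axis=1, ddof=1)     # repeatability per direction

print(f"systematic span (compensable):   {np.ptp(systematic):.2f} um")
print(f"mean random sd (not compensable): {random_sd.mean():.2f} um")
```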

  18. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  19. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  20. Analytical expression for the bit error rate of cascaded all-optical regenerators

    DEFF Research Database (Denmark)

    Mørk, Jesper; Öhman, Filip; Bischoff, S.

    2003-01-01

    We derive an approximate analytical expression for the bit error rate of cascaded fiber links containing all-optical 2R-regenerators. A general analysis of the interplay between noise due to amplification and the degree of reshaping (nonlinearity) of the regenerator is performed.
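
    For orientation, the textbook Gaussian-noise relation between the Q factor and the bit error rate is shown below. This is a generic relation, not the paper's derived expression for cascaded 2R regenerators; the Q values are arbitrary illustrative inputs.

```python
from math import erfc, sqrt

# Generic Gaussian-noise BER for an on-off-keyed link: BER = 0.5*erfc(Q/sqrt(2)).
def ber_from_q(q: float) -> float:
    return 0.5 * erfc(q / sqrt(2))

for q in (3, 4, 5, 6, 7):
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
# Q = 6 gives ~1e-9, the classic benchmark. Reshaping in a 2R regenerator
# changes how noise accumulates between spans and hence the effective Q.
```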

  1. A Novel Approach of Understanding and Incorporating Error of Chemical Transport Models into a Geostatistical Framework

    Science.gov (United States)

    Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.

    2015-12-01

    The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, several areas of the country have sparse monitoring, both spatially and temporally. One means of filling these monitoring gaps is to use PM2.5 estimates from chemical transport models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ provides complete spatial coverage but is subject to systematic and random error due to model uncertainty; because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but evaluation is currently restricted to locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains: error changes regionally and temporally, and, because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to an error quantification for each CMAQ grid cell and a better qualification of the areas and time periods of error. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. The corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Cross validation shows that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
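
    A regionalized, non-linear bias correction in the spirit of this abstract can be sketched as follows. This is a toy surrogate, not the authors' implementation: per region, a low-order polynomial of observed error versus modeled concentration is fit at monitor locations and then applied to every grid value in that region; the synthetic data and the function name fit_region_correction are assumptions for illustration.

```python
import numpy as np

# Sketch of a per-region, non-linear (concentration-dependent) correction.
rng = np.random.default_rng(5)

def fit_region_correction(modeled_at_monitors, observed, degree=2):
    """Return a callable mapping modeled PM2.5 to corrected PM2.5."""
    coeffs = np.polyfit(modeled_at_monitors,
                        observed - modeled_at_monitors, degree)
    return lambda m: m + np.polyval(coeffs, m)

# Synthetic example: the model over-predicts increasingly at high PM2.5.
modeled = rng.uniform(2, 40, 200)
observed = modeled - 0.01 * modeled**2 + rng.normal(0, 1, 200)

correct = fit_region_correction(modeled, observed)
grid = rng.uniform(2, 40, 10)
print(np.round(correct(grid) - grid, 2))   # non-linear adjustments
```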

  2. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice, and other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors for which a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type of error and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found, the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found, the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate
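
    The two statistics named in this record are available in standard Python libraries. The sketch below runs them on made-up counts and codings (not data from the study) to show the shape of the analysis: Fisher's exact test on a 2x2 table of error type by nurse experience, and Cohen's kappa for agreement between two categorical codings.

```python
from scipy.stats import fisher_exact
from sklearn.metrics import cohen_kappa_score

# 2x2 table (made-up counts): error type present/absent by experience.
table = [[30, 170],   # less experienced nurses
         [12, 188]]   # more experienced nurses
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# Cohen's kappa for agreement between two binary codings of the same cases.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```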

  3. Prevalence of Refractive errors among Primary School Pupils in ...

    African Journals Online (AJOL)

    Administrator

    Effective management of blindness due to refractive errors is readily available in developed countries. ... Key words: Refractive errors, Children, Prevalence, Kenya.

  4. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.), as typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
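
    The diffraction-only picture can be made quantitative with the Airy encircled-energy relation: for a large sphere of size parameter x, the fraction of the diffracted lobe inside angle θ is 1 − J0²(u) − J1²(u) with u = x·sin θ. The sketch below applies that relation under assumed values (wavelength, diameters, a 7° truncation angle); it is an illustration of the diffraction argument, not the paper's full calculation.

```python
import numpy as np
from scipy.special import j0, j1

# Fraction of diffracted light inside the truncation angle for spheres of
# size parameter x = pi * D / wavelength (Airy encircled-energy relation).
wavelength = 0.55e-6          # m (green light, assumed)
theta_t = np.deg2rad(7.0)     # typical nephelometer truncation angle

for diameter_um in (1, 5, 10, 20):
    x = np.pi * diameter_um * 1e-6 / wavelength
    u = x * np.sin(theta_t)
    missed = 1 - j0(u)**2 - j1(u)**2   # light below theta_t is not seen
    print(f"D = {diameter_um:2d} um: fraction of diffracted light "
          f"inside 7 deg = {missed:.2f}")
# The missed fraction grows rapidly with particle size, which is why
# forward truncation errors become important for large particles.
```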

  5. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors arising from different causes may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare, and they have preventable causes, including a lack of suitable and sufficient training and practice for medical students and failures in the medical curriculum. Conclusion: Some important causes are discussed here because their consequences can be tremendous, and we found that most of them are easily preventable. If practitioners are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  6. Optical performance of the SO/PHI full disk telescope due to temperature gradients effect on the heat rejection entrance window

    Science.gov (United States)

    Garranzo, D.; Núñez, A.; Zuluaga-Ramírez, P.; Barandiarán, J.; Fernández-Medina, A.; Belenguer, T.; Álvarez-Herrero, A.

    2017-11-01

    The Polarimetric Helioseismic Imager for Solar Orbiter (SO/PHI) is an instrument on board the Solar Orbiter mission. The Full Disk Telescope (FDT) will be capable of providing diffraction-limited images of the solar disk in all orbital phases. The Heat Rejection Entrance Window (HREW) is the first optical element of the instrument; its function is to protect the instrument by filtering most of the solar spectrum radiation. The HREW consists of two plane-parallel plates made from Suprasil, and each surface has a coating with a different function: a UV shield coating, a low-pass band filter coating, a high-pass band filter coating and an IR shield coating, respectively. The temperature gradient on the HREW during the mission distorts the transmitted wavefront, mainly due to the dependence of the refractive index on temperature (the thermo-optic effect). The purpose of this work is to determine the capability of the PHI/FDT refocusing system to compensate for this distortion. A thermal gradient profile has been considered for each surface of the plates, and a thermo-elastic analysis has been carried out by finite element analysis to determine the deformation of the optical elements. The optical path difference (OPD) between the incident and transmitted wavefronts has been calculated as a function of the ray tracing and of the thermo-optic effect on the optical properties of Suprasil (at the working wavelength of PHI) by means of mathematical algorithms based on the 3D Snell law. The resulting wavefronts have been introduced into the optical design of the FDT to evaluate the degradation of image performance at the scientific focal plane and to estimate the capability of the PHI refocusing system to maintain diffraction-limited image quality. The analysis has been carried out for two different situations: thermal gradients due to the on-axis attitude of the instrument, and thermal gradients due to a 1° off-pointing attitude.

  7. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for future satellite communications and space observations since they are lightweight, low-cost, and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations, and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation of, and the interaction between, error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (root mean square) of the shape error is a random quantity with an exponential probability distribution and exhibits large dispersion; with increasing F/D and D, both the mean value and the standard deviation of the shape errors increase; within the studied range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; and errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
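
    A minimal sketch of the sampling-plus-regression sensitivity workflow described above, using Latin hypercube sampling and standardized regression coefficients; the surrogate RMS model and the parameter bounds are invented placeholders standing in for the paper's manufacture simulations.

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(0)

        # Four error sources: thickness, elastic modulus, boundary deviation,
        # pressure variation (bounds are illustrative placeholders).
        l_bounds = [0.9, 0.9, 0.0, -0.05]
        u_bounds = [1.1, 1.1, 2.0,  0.05]

        sampler = qmc.LatinHypercube(d=4, seed=0)
        X = qmc.scale(sampler.random(n=500), l_bounds, u_bounds)

        def rms_surrogate(x):
            """Placeholder for the FE manufacture simulation: returns a
            shape-error RMS for one sample of the four error sources."""
            t, E, b, p = x
            return (0.05*abs(t - 1) + 0.05*abs(E - 1) + 1.0*b
                    + 0.3*abs(p) + 0.02*rng.random())

        y = np.array([rms_surrogate(x) for x in X])

        # Standardized regression coefficients rank the error sources.
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (y - y.mean()) / y.std()
        src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        for name, c in zip(["thickness", "modulus", "boundary", "pressure"], src):
            print(f"{name:9s} SRC = {c:+.2f}")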

  9. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D [Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image of the RM was taken using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines using the in-house software. This test was carried out on four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but a significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased in the positive direction with gantry angle increments in the clockwise direction, reached a maximum value at a gantry angle of 90°, and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed, but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.
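
    The error pattern reported above (near zero at 0°, maximum near 90°, sign reversal with rotation direction) is what a gravity-induced sag model would predict; a minimal sketch fitting such a sinusoidal model to mean leaf errors follows. The angle/error arrays are illustrative placeholders, not the study's data.

        import numpy as np

        # Gantry angles [deg] and mean leaf position errors [mm] (placeholders)
        theta = np.radians([0, 45, 90, 135, 180, 225, 270, 315])
        err = np.array([0.05, 0.5, 0.8, 0.45, 0.0, -0.5, -0.75, -0.4])

        # Fit err(theta) ~ A*sin(theta) + B*cos(theta) + C by least squares
        G = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
        (A, B, C), *_ = np.linalg.lstsq(G, err, rcond=None)
        print(f"gravity-sag amplitude A = {A:.2f} mm, offset C = {C:.2f} mm")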

  10. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    Science.gov (United States)

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved.

  11. GPS/DR Error Estimation for Autonomous Vehicle Localization

    Directory of Open Access Journals (Sweden)

    Byung-Hyun Lee

    2015-08-01

    Full Text Available Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that sub-meter positioning accuracy was achieved.
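
    A minimal sketch of the error-estimation idea shared by the two records above: a Kalman filter whose state is the dead-reckoning position error, updated whenever a lane-model or stop-line detection provides an absolute reference. The one-dimensional lateral-error model and all noise values are invented for illustration and are far simpler than the paper's method.

        import numpy as np

        # State: lateral DR position error [m]; simple random-walk model.
        x, P = 0.0, 1.0          # error estimate and its variance
        Q, R = 0.01, 0.25        # process / measurement noise (illustrative)

        def kf_step(x, P, z=None):
            """One predict(+update) step. z is the lateral offset implied by
            a curved-lane or stop-line detection, when available."""
            P = P + Q                        # predict: the error drifts
            if z is not None:                # update: absolute reference seen
                K = P / (P + R)
                x = x + K * (z - x)
                P = (1.0 - K) * P
            return x, P

        for z in [None, None, 0.8, None, 0.7]:   # sparse lane detections
            x, P = kf_step(x, P, z)
        print(f"estimated lateral error: {x:.2f} m (variance {P:.3f})")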

  12. Handling of uncertainty due to interference fringe in FT-NIR transmittance spectroscopy - Performance comparison of interference elimination techniques using glucose-water system

    Science.gov (United States)

    Beganović, Anel; Beć, Krzysztof B.; Henn, Raphael; Huck, Christian W.

    2018-05-01

    The applicability of two elimination techniques for interference fringes occurring in measurements with cells of short pathlength using Fourier transform near-infrared (FT-NIR) spectroscopy was evaluated. Due to the growing interest in vibrational spectroscopy of aqueous biological fluids (e.g., glucose in blood), aqueous solutions of D-(+)-glucose were prepared and split into a calibration set and an independent validation set. All samples were measured with two FT-NIR spectrometers at various spectral resolutions. Moving average smoothing (MAS) and a fast Fourier transform filter (FFT filter) were applied to the interference-affected FT-NIR spectra in order to eliminate the interference pattern. After data pre-treatment, partial least squares regression (PLSR) models using different NIR regions were constructed from untreated (interference-affected) spectra and from spectra treated with MAS and the FFT filter. The prediction of the independent validation set revealed the performance of the utilized interference elimination techniques, as well as of the different NIR regions. The results showed that the combination band of water at approx. 5200 cm-1 is of great importance, since its performance was superior to that of the so-called first overtone of water at approx. 6800 cm-1. Furthermore, this work demonstrated that MAS and the FFT filter are fast and easy-to-use techniques for the elimination of interference fringes in FT-NIR transmittance spectroscopy.
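
    A minimal sketch of the FFT-filter idea: an interference fringe appears as a narrow peak in the Fourier domain of the spectrum, so zeroing that band and transforming back removes the ripple. The synthetic spectrum, band position, and fringe period are invented for illustration.

        import numpy as np

        wn = np.linspace(4000, 9000, 2048)              # wavenumber axis [cm-1]
        band = np.exp(-0.5*((wn - 5200)/150)**2)        # a broad absorption band
        fringe = 0.05*np.sin(2*np.pi*wn/35.0)           # etalon ripple, period 35 cm-1
        spec = band + fringe

        F = np.fft.rfft(spec)
        freq = np.fft.rfftfreq(wn.size, d=wn[1]-wn[0])  # cycles per cm-1
        f0 = 1.0/35.0                                   # fringe frequency
        F[np.abs(freq - f0) < 0.005] = 0.0              # notch out the fringe band
        cleaned = np.fft.irfft(F, n=wn.size)
        print(f"residual ripple RMS: {np.std(cleaned - band):.4f}")

    The notch is narrow enough to leave the broad absorption band untouched, which is the reason the filter can remove the fringe without distorting the analyte signal.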

  13. The Economic Impact of Loss of Performance Due to Absenteeism and Presenteeism Caused by Depressive Symptoms and Comorbid Health Conditions among Japanese Workers

    Science.gov (United States)

    WADA, Koji; ARAKIDA, Mikako; WATANABE, Rika; NEGISHI, Motomi; SATO, Jun; TSUTSUMI, Akizumi

    2013-01-01

    We aimed to determine the economic impact of absenteeism and presenteeism from five conditions potentially comorbid with depressive symptoms—back or neck disorders, depression, anxiety, or emotional disorders, chronic headaches, stomach or bowel disorders, and insomnia—among Japanese workers aged 18–59 yr. Participants from 19 workplaces anonymously completed Stanford Presenteeism Scale questionnaires. Participants identified one primary health condition and determined the resultant performance loss (0–100%) over the previous 4-wk period. We estimated the wage loss by gender, using 10-yr age bands. A total of 6,777 participants undertook the study. Of these, we extracted the data for those in the 18–59 yr age band who chose targeted primary health conditions (males, 2,535; females, 2,465). The most frequently identified primary health condition was back or neck disorders. We found that wage loss due to presenteeism and absenteeism per 100 workers across all 10-yr age bands was high for back or neck disorders. Wage loss per person was relatively high among those identifying depression, anxiety, or emotional disorders. These findings offer insight into developing strategies for workplace interventions on increasing work performance. PMID:23892900

  14. The economic impact of loss of performance due to absenteeism and presenteeism caused by depressive symptoms and comorbid health conditions among Japanese workers.

    Science.gov (United States)

    Wada, Koji; Arakida, Mikako; Watanabe, Rika; Negishi, Motomi; Sato, Jun; Tsutsumi, Akizumi

    2013-01-01

    We aimed to determine the economic impact of absenteeism and presenteeism from five conditions potentially comorbid with depressive symptoms-back or neck disorders, depression, anxiety, or emotional disorders, chronic headaches, stomach or bowel disorders, and insomnia-among Japanese workers aged 18-59 yr. Participants from 19 workplaces anonymously completed Stanford Presenteeism Scale questionnaires. Participants identified one primary health condition and determined the resultant performance loss (0-100%) over the previous 4-wk period. We estimated the wage loss by gender, using 10-yr age bands. A total of 6,777 participants undertook the study. Of these, we extracted the data for those in the 18-59 yr age band who chose targeted primary health conditions (males, 2,535; females, 2,465). The most frequently identified primary health condition was back or neck disorders. We found that wage loss due to presenteeism and absenteeism per 100 workers across all 10-yr age bands was high for back or neck disorders. Wage loss per person was relatively high among those identifying depression, anxiety, or emotional disorders. These findings offer insight into developing strategies for workplace interventions on increasing work performance.

  15. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  16. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    …backlash, manufacturing and assembly errors, and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within the constant-orientation workspace, and the correctness of the developed model is validated experimentally. Additionally, using screw theory… dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error…

  17. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need effective mathematical models for them. Scintillation is typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
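
    A Monte Carlo sketch of the quantity being derived above: the ergodic capacity of an FSO link whose channel gain is the product of a Gamma-Gamma turbulence term and a Beckmann-type pointing-loss term. All parameter values are illustrative, and the paper's closed-form asymptotics are replaced here by brute-force averaging.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000
        alpha, beta = 4.2, 1.4          # Gamma-Gamma turbulence (illustrative)
        h_a = rng.gamma(alpha, 1/alpha, N) * rng.gamma(beta, 1/beta, N)

        # Beckmann pointing error: Gaussian jitter with unequal means/variances
        mx, my, sx, sy = 0.1, 0.2, 0.3, 0.4      # [beam-width units]
        r = np.hypot(rng.normal(mx, sx, N), rng.normal(my, sy, N))
        A0, w_eq = 0.8, 2.5                      # collection fraction, eq. beamwidth
        h_p = A0 * np.exp(-2 * r**2 / w_eq**2)

        snr0 = 10**(20/10)                       # 20 dB average electrical SNR
        h = h_a * h_p
        C = np.mean(np.log2(1 + snr0 * h**2))    # IM/DD-style capacity average
        print(f"ergodic capacity ~ {C:.2f} bit/s/Hz")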

  18. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors through training, procedures, and equipment redesign. The second module uses the SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  19. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and on L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With reference to the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is even exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
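
    A minimal sketch of the L1-optimization step named above: recovering a sparse error vector e from a syndrome s = He by basis pursuit, rewritten as a linear program (e = u - v with u, v >= 0). The parity-check matrix and sparsity level are invented for illustration; the paper's trap matrix, secret channel, and trust mechanism are not modeled here.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        m, n, k = 40, 100, 6                 # syndrome length, links, # corrupted
        H = rng.standard_normal((m, n))      # stand-in parity-check matrix
        e_true = np.zeros(n)
        e_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        s = H @ e_true                       # observed syndrome

        # min sum(u + v)  s.t.  H(u - v) = s,  u, v >= 0   (= min ||e||_1, He = s)
        c = np.ones(2*n)
        A_eq = np.hstack([H, -H])
        res = linprog(c, A_eq=A_eq, b_eq=s, bounds=[(0, None)]*(2*n))
        e_hat = res.x[:n] - res.x[n:]
        print(f"recovery error: {np.linalg.norm(e_hat - e_true):.2e}")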

  1. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error contributes significantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event focuses on the erroneous action and its unfavorable result. From the time limitation on performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action; a human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented, and human performance influence factors, including personal, organizational, and environmental factors, are also listed.

  2. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error contributes significantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event focuses on the erroneous action and its unfavorable result. From the time limitation on performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action; a human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented, and human performance influence factors, including personal, organizational, and environmental factors, are also listed.

  3. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers the execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate result data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user are also noted in the work.
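
    A minimal sketch of per-subtask error-handling rules of the kind discussed above: each step in a sequential workflow pattern carries a policy (abort the workflow, skip the step, or retry) that the runner applies when the tool fails. The policy names and the runner are invented for illustration; they are not the paper's formal rules.

        import enum

        class OnError(enum.Enum):
            ABORT = "abort"    # stop the whole workflow
            SKIP = "skip"      # drop this step's output, continue
            RETRY = "retry"    # re-run the tool a limited number of times

        def run_workflow(steps, retries=2):
            """steps: list of (callable, OnError). Returns collected outputs."""
            results = []
            for tool, policy in steps:
                for attempt in range(retries + 1):
                    try:
                        results.append(tool())
                        break
                    except Exception as exc:
                        if policy is OnError.RETRY and attempt < retries:
                            continue                  # try the tool again
                        if policy is OnError.SKIP:
                            break                     # missing data tolerated
                        # ABORT, or RETRY with attempts exhausted
                        raise RuntimeError(f"workflow aborted: {exc}")
            return results

        # Example: the middle tool fails; SKIP keeps the workflow alive.
        steps = [(lambda: "mesh", OnError.ABORT),
                 (lambda: 1 / 0, OnError.SKIP),
                 (lambda: "report", OnError.ABORT)]
        print(run_workflow(steps))    # -> ['mesh', 'report']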

  4. Re-Normalization Method of Doppler Lidar Signal for Error Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    In this paper, we present a re-normalization method for removing fluctuations of Doppler signals caused by various noise sources, mainly the frequency-locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency-locking system is not perfect, the Doppler signal has some error due to the frequency-locking error. The re-normalization of the Doppler signals was performed to reduce this error, using an additional laser beam through an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10^-3.

  5. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  6. The effect of subject measurement error on joint kinematics in the conventional gait model: Insights from the open-source pyCGM tool using high performance computing methods.

    Science.gov (United States)

    Schwartz, Mathew; Dixon, Philippe C

    2018-01-01

    The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation of the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross-platform, and high-performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high-performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower-body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided
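
    A minimal sketch of the systematic subject-measurement sweep that a parallel CGM implementation enables: perturb one subject measurement over a grid and map the resulting joint angles with a process pool. The function compute_angles and the measurement dictionary are hypothetical stand-ins, not the actual pyCGM API.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def compute_angles(subject):
            """Hypothetical stand-in for a CGM solve: returns an 'ankle angle'
            that depends weakly on leg length, mimicking a model evaluation."""
            return 12.0 + 0.002 * (subject["leg_length"] - 900.0)

        def sweep_leg_length(base, deltas):
            subjects = [{**base, "leg_length": base["leg_length"] + d}
                        for d in deltas]
            with ProcessPoolExecutor() as pool:
                return list(pool.map(compute_angles, subjects))

        if __name__ == "__main__":
            base = {"leg_length": 900.0, "knee_width": 100.0}   # [mm]
            deltas = np.linspace(-50, 50, 11)                   # measurement error
            angles = sweep_leg_length(base, deltas)
            sens = np.polyfit(deltas, angles, 1)[0]
            print(f"ankle-angle sensitivity: {sens:.4f} deg per mm of leg length")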

  7. Structural damage detection robust against time synchronization errors

    International Nuclear Information System (INIS)

    Yan, Guirong; Dyke, Shirley J

    2010-01-01

    Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures, which are likely to occur during long-term deployment on civil infrastructure. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing the distortion in the identified mode shapes. The corrected mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a laboratory setting.
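
    A minimal sketch of the distortion mechanism examined above: a time offset tau at one sensor multiplies its identified mode-shape component by a phase factor exp(j*omega*tau), so one simple mitigation, assumed here for illustration and not necessarily the paper's strategy, is to keep each component's magnitude with the sign of its projection onto the dominant phase direction. The mode shape and offsets are synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        phi_true = np.array([0.3, -0.7, 1.0, -0.4, 0.6])   # true mode shape
        omega = 2 * np.pi * 4.0                             # mode at 4 Hz
        tau = rng.uniform(-0.01, 0.01, phi_true.size)       # per-sensor offsets [s]
        phi_id = phi_true * np.exp(1j * omega * tau)        # distorted identification

        # Align to the phase of the largest component, keep magnitude and sign.
        ref = np.angle(phi_id[np.argmax(np.abs(phi_id))])
        rotated = phi_id * np.exp(-1j * ref)
        phi_fix = np.abs(phi_id) * np.sign(rotated.real)

        print(np.round(phi_fix / np.linalg.norm(phi_fix), 3))   # corrected
        print(np.round(phi_true / np.linalg.norm(phi_true), 3)) # reference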

  8. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients (aged <18 years). Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  9. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  10. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  11. The effects of error augmentation on learning to walk on a narrow balance beam.

    Science.gov (United States)

    Domingo, Antoinette; Ferris, Daniel P

    2010-10-01

    Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.

  12. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  13. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated.

  14. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  15. Extraordinary improvement of gas-sensing performances in SnO2 nanofibers due to creation of local p-n heterojunctions by loading reduced graphene oxide nanosheets.

    Science.gov (United States)

    Lee, Jae-Hyoung; Katoch, Akash; Choi, Sun-Woo; Kim, Jae-Hun; Kim, Hyoun Woo; Kim, Sang Sub

    2015-02-11

    We propose a novel approach to improve the gas-sensing properties of n-type nanofibers (NFs) that involves creation of local p-n heterojunctions with p-type reduced graphene oxide (RGO) nanosheets (NSs). This work investigates the sensing behaviors of n-SnO2 NFs loaded with p-RGO NSs as a model system. n-SnO2 NFs demonstrated greatly improved gas-sensing performances when loaded with an optimized amount of p-RGO NSs. Loading an optimized amount of RGOs resulted in a 20-fold higher sensor response than that of pristine SnO2 NFs. The sensing mechanism of monolithic SnO2 NFs is based on the joint effects of modulation of the potential barrier at nanograin boundaries and radial modulation of the electron-depletion layer. In addition to the sensing mechanisms described above, enhanced sensing was obtained for p-RGO NS-loaded SnO2 NFs due to creation of local p-n heterojunctions, which not only provided a potential barrier, but also functioned as a local electron absorption reservoir. These mechanisms markedly increased the resistance of SnO2 NFs, and were the origin of intensified resistance modulation during interaction of analyte gases with preadsorbed oxygen species or with the surfaces and grain boundaries of NFs. The approach used in this work can be used to fabricate sensitive gas sensors based on n-type NFs.

  16. Comparison of sensitivity to artificial spectral errors and multivariate LOD in NIR spectroscopy - Determining the performance of miniaturizations on melamine in milk powder.

    Science.gov (United States)

    Henn, Raphael; Kirchler, Christian G; Grossgut, Maria-Elisabeth; Huck, Christian W

    2017-05-01

    This study compared three commercially available spectrometers, two of them miniaturized, in terms of their ability to predict melamine in milk powder (infant formula). All spectra were split into calibration and validation sets using the Kennard-Stone and Duplex algorithms in comparison. For each instrument the three best-performing PLSR models were constructed using SNV and Savitzky-Golay derivatives. The best RMSEP values were 0.28 g/100 g, 0.33 g/100 g, and 0.27 g/100 g for the NIRFlex N-500, the microPHAZIR, and the microNIR2200, respectively. Furthermore, the multivariate LOD interval [LOD_min, LOD_max] was calculated for all the PLSR models, unveiling significant differences among the spectrometers, with values of 0.20 g/100 g - 0.27 g/100 g, 0.28 g/100 g - 0.54 g/100 g, and 0.44 g/100 g - 1.01 g/100 g for the NIRFlex N-500, the microPHAZIR, and the microNIR2200, respectively. To assess the robustness of all models, white noise, baseline shift, multiplicative effects, spectral shrink and stretch, stray light, and spectral shift were artificially introduced. Monitoring the RMSEP as a function of the perturbation gave an indication of the robustness of the models and helped to compare the performance of the spectrometers. Without the additional information from the LOD calculations, one could falsely assume that all the spectrometers perform equally well, which is not the case when the multivariate evaluation and the robustness data are considered. Copyright © 2017 Elsevier B.V. All rights reserved.
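
    A minimal sketch of the robustness test described above: perturb validation spectra with increasing white noise and monitor the RMSEP of a fitted PLSR model. The synthetic spectra and the two-component model are invented placeholders for the melamine data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        wn = np.linspace(4000, 9000, 500)
        conc = rng.uniform(0, 2, 60)                      # analyte level [g/100 g]
        peak = np.exp(-0.5*((wn - 6800)/80)**2)           # analyte band (synthetic)
        X = conc[:, None]*peak + 0.01*rng.standard_normal((60, wn.size))

        pls = PLSRegression(n_components=2).fit(X[:40], conc[:40])

        def rmsep(Xv, yv):
            return float(np.sqrt(np.mean((pls.predict(Xv).ravel() - yv)**2)))

        for noise in [0.0, 0.01, 0.05, 0.1]:              # white-noise levels
            Xp = X[40:] + noise*rng.standard_normal(X[40:].shape)
            print(f"noise {noise:4.2f} -> RMSEP {rmsep(Xp, conc[40:]):.3f}")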

  17. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause events or incidents large and small. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and both the training level of employees and the training's effect on actual work were good. Also, employees answered that the root causes of human error lay in the working environment, including tight schedules, manpower shortage, and excessive workload, rather than in personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are personal equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and their bosses.

  18. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and their underlying factors during chemotherapy preparation and administration, based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators, and medication errors (MAEs) were defined as preventable errors during prescription of medication, ordering, preparation, or administration. The survey consisted of 4 parts: demographic features of the nurses; workload of the chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MAEs. The survey was conducted by face-to-face interview, and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient staff numbers (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, with the most common errors involving prescribing and ordering. Further studies must address strategies to minimize medication error in patients receiving chemotherapy and determine sufficient protective measures

  19. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with the TCKF. The first-stage filter estimates the course angle error of the pedestrian, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses the magnetic sensors and a position-trace-based course angle. To prevent magnetic disturbances from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the error characteristics of the heading angle are highly dependent on those of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second-stage filter. In the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly from the course-angle error measurements, the estimation accuracy of the heading and the yaw gyro bias can be enhanced compared with the ZUPT-only case, which in turn enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and for other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT-based PDR algorithms.
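
    A minimal sketch of the cascade's first stage as described above: a one-state Kalman filter tracking the course-angle error, driven by the difference between the magnetic course and the position-trace course. The noise values and the random-walk error model are illustrative assumptions, not the paper's tuning.

        import numpy as np

        rng = np.random.default_rng(2)
        n, Q, R = 200, 0.01**2, 2.0**2          # steps, process/meas. noise [deg^2]
        true_err = np.cumsum(rng.normal(0, 0.01, n)) + 5.0   # drifting course error

        x, P, est = 0.0, 10.0**2, []
        for k in range(n):
            P += Q                                   # predict: error random walk
            z = true_err[k] + rng.normal(0, 2.0)     # magnetic - trace course [deg]
            K = P / (P + R)                          # Kalman gain
            x += K * (z - x)
            P *= (1 - K)
            est.append(x)

        print(f"final course-error estimate: {est[-1]:.2f} deg "
              f"(true {true_err[-1]:.2f} deg)")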

  20. Help prevent hospital errors

    Science.gov (United States)

    If you are having surgery, help keep yourself safe: go to a hospital you ...

  1. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  2. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  3. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  4. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some big, errors were discovered in the bills of 42 businesses

  5. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  6. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system is today a topic of considerable interest, aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and of the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communication, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  7. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  8. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  9. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
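
    For orientation, a standard result from kinetic proofreading theory (Hopfield, 1974) illustrates the error-versus-dissipation trade-off discussed above; this is textbook background, not the paper's universal expression relating error to entropy production.

        \eta_0 \;=\; e^{-\Delta\varepsilon / k_B T},
        \qquad
        \eta_{\mathrm{proof}} \;\approx\; \eta_0^{2} \;=\; e^{-2\Delta\varepsilon / k_B T}

    Here Δε is the binding free-energy difference between correct and incorrect substrates; the squared error of the proofreading scheme is bought at the cost of extra chemical driving of the proofreading reaction, i.e., additional dissipation, which is the trade-off the paper quantifies in general form.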

  10. Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors

    Science.gov (United States)

    Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean

    2016-01-01

    A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?

  11. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.

  12. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high burst-error-correction capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
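
    The mechanism behind the burst-correction capability is easiest to see in a toy block interleaver: codewords are written as rows and transmitted column by column, so a channel burst lands as isolated errors scattered across the subcodes. A minimal Python sketch of that idea (illustrative only; it is not the authors' syndrome-correlation decoder):

        def interleave(codewords):
            """Write codewords as rows, transmit column by column."""
            depth, n = len(codewords), len(codewords[0])
            return [codewords[i][j] for j in range(n) for i in range(depth)]

        def deinterleave(stream, depth):
            """Invert the interleaver, recovering the subcode codewords."""
            return [stream[i::depth] for i in range(depth)]

        codewords = [list(range(k, k + 8)) for k in (0, 100, 200, 300)]
        tx = interleave(codewords)
        for pos in range(5, 9):      # a burst of 4 adjacent channel errors
            tx[pos] = -1
        rx = deinterleave(tx, depth=4)
        print([row.count(-1) for row in rx])   # -> [1, 1, 1, 1]

    Each subcode sees at most one error, which a single-error-correcting subcode can fix; the paper's syndrome-correlation method exploits the same structure probabilistically.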

  13. Application of individually performed acrylic cement spacers containing 5% of antibiotic in two-stage revision of hip and knee prosthesis due to infection.

    Science.gov (United States)

    Babiak, Ireneusz

    2012-07-03

    Deep infection of a joint endoprosthesis constitutes a threat to the stability of the implant and joint function. It requires a comprehensive and interdisciplinary approach, involving the joint revision and removal of the bacterial biofilm from all tissues, the endoprosthesis must be often removed and bone stock infection treated. The paper presents the author's experience with the use of acrylic cement spacers, custom-made during the surgery and containing low dose of an antibiotic supplemented with 5% of a selected, targeted antibiotic for the infection of hip and knee endoprostheses. 33 two-stage revisions of knee and hip joints with the use of a spacer were performed. They involved 24 knee joints and 9 hip joints. The infections were mostly caused by staphylococci MRSA (18) and MSSA (8), and in some cases Enterococci (4), Salmonella (1), Pseudomonas (1) and Acinetobacter (1). The infection was successfully treated in 31 out of 33 cases (93.93%), including 8 patients with the hip infection and 23 patients with the knee infection. The endoprosthesis was reimplanted in 30 cases: for 7 hips and 23 knees, in 3 remaining cases the endoprosthesis was not reimplanted. Mechanical complications due to the spacer occurred in 4 cases: 3 dislocations and 1 fracture (hip spacer). The patients with hip spacers were ambulatory with a partial weight bearing of the operated extremity and those with knee spacers were also ambulatory with a partial weight bearing, but the extremity was initially protected by an orthosis. The spacer enables to maintain a limb function, and making it by hand allows the addition of the specific bacteria targeted antibiotic thus increasing the likelihood of the effective antibacterial treatment.

  14. A Model to Assess the Risk of Ice Accretion Due to Ice Crystal Ingestion in a Turbofan Engine and its Effects on Performance

    Science.gov (United States)

    Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.

    2013-01-01

    The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that were attributed to ice crystal ingestion, partial melting, and ice accretion on the compression system components. The result was one or more of the following anomalies: degraded engine performance, engine roll back, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature must be near freezing or below, and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine, including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. The computational tool can be used to assess specific turbine engines for their susceptibility to
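
    The two accretion criteria reduce to a simple per-blade-row check once the flow analysis has produced local conditions. A hedged sketch (function and threshold names are ours; the NASA tool derives these quantities from the coupled cycle and compressor-flow codes):

        def accretion_risk(wet_bulb_K, melt_ratio, margin_K=2.0, min_melt=0.10):
            """Flag a blade row as at risk of ice accretion: the local wet-bulb
            temperature is near freezing or below AND the melt ratio exceeds ~10%."""
            cold_enough = wet_bulb_K <= 273.15 + margin_K
            wet_enough = melt_ratio > min_melt
            return cold_enough and wet_enough

        # Scan stations along the compressor (values invented for illustration).
        for name, twb, mr in [("IGV", 268.0, 0.02), ("R1", 271.5, 0.12), ("S1", 279.0, 0.25)]:
            print(name, accretion_risk(twb, mr))   # only R1 -> True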

  15. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  16. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  17. WACC: Definition, misconceptions and errors

    OpenAIRE

    Fernandez, Pablo

    2011-01-01

    The WACC is just the rate at which the Free Cash Flows must be discounted to obtain the same result as in the valuation using Equity Cash Flows discounted at the required return to equity (Ke). The WACC is neither a cost nor a required return: it is a weighted average of a cost and a required return. To refer to the WACC as the "cost of capital" may be misleading because it is not a cost. The paper includes 7 errors due to not remembering the definition of WACC and shows the relationship betwe...
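
    For orientation, the definition the listed errors revolve around is the standard one (textbook formula, not quoted from the paper): with E and D the market values of equity and debt, Ke the required return to equity, Kd the cost of debt, and T the tax rate,

        \mathrm{WACC} = \frac{E\,K_e + D\,K_d\,(1 - T)}{E + D}

    Discounting the free cash flows at this rate reproduces, by construction, the value obtained by discounting the equity cash flows at Ke, which is why the WACC is a weighted average rather than a cost.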

  18. Soft error modeling and analysis of the Neutron Intercepting Silicon Chip (NISC)

    International Nuclear Information System (INIS)

    Celik, Cihangir; Unlue, Kenan; Narayanan, Vijaykrishnan; Irwin, Mary J.

    2011-01-01

    Soft errors are transient errors caused by excess charge carriers induced primarily by external radiation in semiconductor devices. The soft error phenomenon could be used to detect thermal neutrons with a neutron monitoring/detection system by enhancing soft error occurrences in the memory devices. This way, one can convert all semiconductor memory devices into neutron detection systems. Such a device is being developed at The Pennsylvania State University and named the Neutron Intercepting Silicon Chip (NISC). The NISC is envisioned as a miniature, power-efficient, active/passive neutron sensor/detector system. NISC aims to achieve this goal by introducing ¹⁰B-enriched Borophosphosilicate Glass (BPSG) insulation layers in the semiconductor memories. In order to model and analyze the NISC, an analysis tool using Geant4 as the transport and tracking engine was developed for the simulation of the charged particle interactions in the semiconductor memory model, named the NISC Soft Error Analysis Tool (NISCSAT). A simple model with a ¹⁰B-enriched layer on top of the lumped silicon region was developed to represent the semiconductor memory node. Soft error probability calculations were performed via the NISCSAT with both single node and array configurations to investigate device scaling by using different node dimensions in the model. Mono-energetic, mono-directional thermal and fast neutrons are used as the neutron sources. The soft error contribution due to the BPSG layer is also investigated with different ¹⁰B contents and the results are presented in this paper.
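
    The detection principle can be checked with a back-of-the-envelope calculation (our numbers, not NISCSAT output): the dominant ¹⁰B(n,α)⁷Li capture channel yields a 1.47 MeV alpha and a 0.84 MeV ⁷Li ion, and creating one electron-hole pair in silicon costs about 3.6 eV, so a full-energy deposit far exceeds the critical charge of a scaled memory node.

        E_EH_PAIR_EV = 3.6      # approx. energy per e-h pair in Si
        Q_E = 1.602e-19         # elementary charge, C

        def deposited_charge_fC(energy_MeV):
            """Charge generated if the full ion energy is deposited in silicon."""
            return energy_MeV * 1e6 / E_EH_PAIR_EV * Q_E * 1e15

        print(deposited_charge_fC(1.47))   # alpha: ~65 fC
        print(deposited_charge_fC(0.84))   # 7Li:   ~37 fC
        # A critical charge of a few fC (deep-submicron SRAM) is easily
        # exceeded, so a capture near a node reads out as a soft error.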

  19. Alpha-particle-induced soft errors in high speed bipolar RAM

    International Nuclear Information System (INIS)

    Mitsusada, Kazumichi; Kato, Yukio; Yamaguchi, Kunihiko; Inadachi, Masaaki

    1980-01-01

    As bipolar RAM (Random Access Memory) has been improved into a fast-acting and highly integrated device, problems negligible in the past have become ones that cannot be ignored. The problem of α-particles emitted from radioactive substances in semiconductor package materials, which cause soft errors, deserves particular attention. The authors experimentally produced a special 1 kbit bipolar RAM to investigate its soft errors. The package used was the standard 16-pin dual in-line type, with which a practical system mounting test and an α-particle irradiation test were performed. The results showed the occurrence of soft errors at an average rate of about 1 bit/700 device-hours. It is concluded that the cause was the α-particles emitted from the package materials; at the same time, it was found that the soft error rate could be greatly reduced by shielding against α-particles. The error rate significantly increased with the decrease of the stand-by current of memory cells and with the accumulated charge determined by the time constant. The mechanism of soft error was also investigated, for which an approximate model to estimate the error rate by means of the effective noise charge due to α-particles and the amount of reversible charges of memory cells is shown and compared with the experimental results. (Wakatsuki, Y.)
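
    The approximate model described at the end lends itself to a small Monte Carlo sketch (the exponential collected-charge distribution and all numbers here are our assumptions, not the authors' measured spectrum):

        import random

        def soft_error_rate(q_cell_fC, hits_per_hour, mean_fC=30.0, n=100_000, seed=1):
            """Fraction of alpha hits whose effective noise charge exceeds the
            cell's reversible charge, scaled by the hit rate (upsets/device-hour)."""
            rng = random.Random(seed)
            upsets = sum(rng.expovariate(1.0 / mean_fC) > q_cell_fC for _ in range(n))
            return hits_per_hour * upsets / n

        # More stored charge -> exponentially fewer upsets, consistent with the
        # observed rise in error rate as stand-by current (cell charge) drops.
        for q in (30.0, 60.0, 120.0):
            print(q, soft_error_rate(q, hits_per_hour=0.01))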

  20. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also have an impact on the severity of a disturbance, e.g. if they disable safety-related equipment. Especially common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports covering two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states, inter alia as regards the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected; and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  1. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework of learning-based super-resolution is proposed by employing the process of learning from estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most estimation errors are small; the uncertainty means that the locations of pixels with large estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
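
    The shared-information step can be illustrated with a generic low-rank-plus-sparse split: stack the estimates from different algorithms (or training samples) as columns of X, then alternate singular-value thresholding, which keeps the component common to all estimates, with soft-thresholding of the residual, which isolates the sparse errors. This is a robust-PCA-style heuristic sketch, not the authors' exact algorithm:

        import numpy as np

        def lowrank_sparse_split(X, tau=1.0, lam=0.1, iters=50):
            """Heuristic X ~ L + S: L low-rank (shared structure), S sparse (errors)."""
            S = np.zeros_like(X)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
                L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt     # SV thresholding
                R = X - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # soft threshold
            return L, S

        rng = np.random.default_rng(0)
        truth = np.outer(rng.normal(size=100), rng.normal(size=5))   # shared part
        X = truth.copy()
        X[rng.integers(0, 100, 10), rng.integers(0, 5, 10)] += 10.0  # sparse errors
        L, S = lowrank_sparse_split(X)
        print(np.linalg.norm(L - truth) / np.linalg.norm(truth))    # should be small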

  2. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third-generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs
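
    The systematic/random split used on the measured field data is, at its core, a trend-plus-residual decomposition along the pole index: a smooth fit captures the systematic component (e.g., a gap taper), and the scatter about it is the random component. A minimal sketch with synthetic numbers standing in for Hall-probe data:

        import numpy as np

        rng = np.random.default_rng(42)
        poles = np.arange(80)
        field_err = 0.002 * np.sin(np.pi * poles / 79) + rng.normal(0, 5e-4, 80)

        trend = np.polyval(np.polyfit(poles, field_err, deg=3), poles)  # systematic
        residual = field_err - trend                                    # random
        print(f"random rms: {residual.std(ddof=1):.1e}")  # ~5e-4, the injected scatter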

  3. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN precision in detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and FA (false alarm) estimation biases, while the continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy named "reliability of PERSIANN estimations" is introduced, while the changing behavior of existing categorical/statistical measures and error components is also seasonally analyzed over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin as a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias arising from rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN error characteristics vary by season due to the conditions and rainfall patterns of that season, which shows the necessity of a seasonally different approach for the calibration of
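
    The residual decomposition follows the standard satellite-vs-gauge scheme: total bias splits into a hit bias (both wet), a miss bias (gauge wet, satellite dry) and a false-alarm bias (satellite wet, gauge dry). A sketch of the bookkeeping (our generic implementation with an assumed 0.1 mm wet threshold, not the authors' code):

        import numpy as np

        def bias_components(sat, gauge, wet=0.1):
            """Split total bias into hit, miss and false-alarm (FA) parts.
            Sub-threshold amounts are treated as zero, so hit + miss + fa == total."""
            sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
            hit = (sat >= wet) & (gauge >= wet)
            miss = (sat < wet) & (gauge >= wet)
            fa = (sat >= wet) & (gauge < wet)
            hit_bias = (sat[hit] - gauge[hit]).sum()
            miss_bias = -gauge[miss].sum()   # rain present but not detected
            fa_bias = sat[fa].sum()          # rain reported but not observed
            return hit_bias, miss_bias, fa_bias

        h, m, f = bias_components([0.0, 5.0, 0.0, 2.0, 12.0],
                                  [0.0, 3.0, 4.0, 0.0, 15.0])
        print(h, m, f, h + m + f)   # -1.0 -4.0 2.0 -3.0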

  4. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  5. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  6. Effects of Atenolol on Growth Performance, Mortality Due to Ascites, Antioxidant Status and Some Blood Parameters in Broilers under Induced Ascites

    Directory of Open Access Journals (Sweden)

    Mokhtar Fathi

    2016-11-01

    Full Text Available Introduction: Broiler chickens are intensively selected for productive traits. The management of these highly productive animals must be optimal to allow their full genetic potential to be expressed. If this is not done, inefficient production and several metabolic diseases such as ascites become apparent. Investigations in mammals indicated that β-adrenoreceptor characteristics are differentially regulated by chronic hypoxia and play an important role in the cardiovascular system. The density of β-adrenergic receptors was higher in cardiac cells of ascites-sensitive birds compared with ascites-resistant ones. Moreover, the characteristics of β-adrenoreceptors are different in cardiac cells of birds with right ventricular hypertrophy and heart failure compared with healthy birds. Treatment with the selective β1-adrenoceptor blocker, atenolol, abolished right ventricular hypertrophy in response to hypoxia compared with normoxic conditions in rats. Materials and Methods: This study investigated the comparative effects of different levels of atenolol on growth performance, mortality due to ascites, antioxidant status and blood parameters in broilers under induced ascites. Six hundred one-day-old male broilers (Ross 308) were used in a completely randomized experimental design with four treatments (positive control, negative control, and two levels of 30 and 60 ppm atenolol) with five replicates of thirty birds each. Birds in the positive control were reared at natural temperature without atenolol; the other groups were reared at cold temperature with 0, 30 and 60 ppm atenolol. The average daily feed intake (ADFI), average daily weight gain (ADWG) and feed conversion ratio (FCR) for each group of birds were calculated, and mortality was weighed daily, recorded and used to correct the FCR. Observations were made daily to record the incidence of ascites and mortality. Diagnosis of ascites generally depends on observation of the following symptoms: (1) right

  7. Investigations on human error hazards in recent unintended trip events of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Jang, Tong Il; Lee, Yong Hee; Shin, Kwang Hyeon [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    According to the Operational Performance Information System (OPIS), which has been operated by KINS (Korea Institute of Nuclear Safety) to improve public understanding, unintended trip events caused mainly by human errors accounted for 38 cases (18.7%) from 2000 to 2011. Although the Nuclear Power Plant (NPP) industry in Korea has been making efforts to reduce the human errors which have largely contributed to trip events, the human error rate might keep increasing. Interestingly, digital-based I and C systems are one of the factors reducing unintended reactor trips. Human errors, however, have occurred due to the digital-based I and C systems because those systems require new or changed behaviors from the NPP operators. Therefore, it is necessary that investigations of human errors consider a new methodology to find not only tangible behaviors but also intangible behaviors such as organizational behaviors. In this study we investigated human errors to find latent factors, such as decisions and conditions, in all of the unintended reactor trip events during the last dozen years. To find them, we applied the HFACS (Human Factors Analysis and Classification System), a commonly utilized tool for investigating human contributions to aviation accidents under a widespread evaluation scheme. The objective of this study is to find latent factors behind human errors in nuclear reactor trip events. Therefore, a method to investigate unintended trip events by human errors and the results will be discussed in more detail.

  8. Investigations on human error hazards in recent unintended trip events of Korean nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Jang, Tong Il; Lee, Yong Hee; Shin, Kwang Hyeon

    2012-01-01

    According to the Operational Performance Information System (OPIS), which has been operated by KINS (Korea Institute of Nuclear Safety) to improve public understanding, unintended trip events caused mainly by human errors accounted for 38 cases (18.7%) from 2000 to 2011. Although the Nuclear Power Plant (NPP) industry in Korea has been making efforts to reduce the human errors which have largely contributed to trip events, the human error rate might keep increasing. Interestingly, digital-based I and C systems are one of the factors reducing unintended reactor trips. Human errors, however, have occurred due to the digital-based I and C systems because those systems require new or changed behaviors from the NPP operators. Therefore, it is necessary that investigations of human errors consider a new methodology to find not only tangible behaviors but also intangible behaviors such as organizational behaviors. In this study we investigated human errors to find latent factors, such as decisions and conditions, in all of the unintended reactor trip events during the last dozen years. To find them, we applied the HFACS (Human Factors Analysis and Classification System), a commonly utilized tool for investigating human contributions to aviation accidents under a widespread evaluation scheme. The objective of this study is to find latent factors behind human errors in nuclear reactor trip events. Therefore, a method to investigate unintended trip events by human errors and the results will be discussed in more detail.

  9. Teamwork and Clinical Error Reporting among Nurses in Korean Hospitals

    Directory of Open Access Journals (Sweden)

    Jee-In Hwang, PhD

    2015-03-01

    Conclusions: Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety.

  10. The decline and fall of Type II error rates

    Science.gov (United States)

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
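
    For a one-sided z-test the statement can be made concrete (standard normal-theory power algebra, not taken from the paper): with effect size δ, error standard deviation σ and level α,

        \beta(n) = \Phi\left(z_{1-\alpha} - \frac{\delta\sqrt{n}}{\sigma}\right)
                 \le \exp\left[-\frac{1}{2}\left(\frac{\delta\sqrt{n}}{\sigma} - z_{1-\alpha}\right)^{2}\right]
                 \quad \text{for } \delta\sqrt{n}/\sigma > z_{1-\alpha},

    so the Type II error probability ultimately decays like exp(-n δ²/2σ²), and modest increases in sample size buy large gains in power.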

  11. Refractive errors among secondary school students in Isuikwuato

    African Journals Online (AJOL)

    Eyamba

    STUDENTS IN ISUIKWUATO LOCAL GOVERNMENT AREA OF ... the prevalence and types of refractive errors among secondary school students ... KEYWORDS: Refractive error, Secondary School students, ametropia, ... interviews of the teachers as regards the general performance of those students with obvious visual.

  12. Intrinsic errors in transporting a single-spin qubit through a double quantum dot

    Science.gov (United States)

    Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.

    2017-07-01

    Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunneling, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
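
    For the finite-ramping-time channel specifically, the textbook Landau-Zener formula gives a feel for the scaling (a standard estimate quoted for orientation, not the paper's derivation): sweeping the dot detuning ε through an anticrossing with tunnel coupling Δ leaves the system in the unwanted branch with probability

        P_{\mathrm{LZ}} = \exp\left(-\frac{2\pi\Delta^{2}}{\hbar\,|\dot{\varepsilon}|}\right),

    so slowing the ramp suppresses this leakage exponentially, at the price of longer transport times during which other error channels can act.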

  13. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When an MDT is derived, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
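
    The propagation itself is the standard linear law: any derived quantity (geoid height, filtered MDT, geostrophic velocity component) is, after linearization, y = A x in the spherical-harmonic coefficient vector x, with the spectral filter simply composed into A, so the gravity-field VCM maps as

        \mathbf{y} = \mathbf{A}\mathbf{x}, \qquad \boldsymbol{\Sigma}_{\mathbf{y}} = \mathbf{A}\,\boldsymbol{\Sigma}_{\mathbf{x}}\,\mathbf{A}^{\mathsf{T}}.

    Dropping the off-diagonal terms of Σx therefore changes not only the size of the propagated velocity errors but also their spatial correlation structure, which is what the rigorous treatment is meant to preserve.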

  14. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  15. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
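
    The scheme is easiest to see in miniature. In this hedged sketch (the names TEST_MASK, INJECT_* and the bit layout are invented for illustration; the record describes the general method, not this code), application code consults a mask that is zero in normal operation, and the test harness sets bits in it to force specific error paths:

        TEST_MASK = 0            # 0 in normal operation
        INJECT_CRC_ERROR = 0x01  # hypothetical error-injection bits
        INJECT_TIMEOUT = 0x02

        class SensorTimeout(Exception):
            pass

        def read_sensor():
            """Application under test: each masked bit forces one error path."""
            if TEST_MASK & INJECT_TIMEOUT:
                raise SensorTimeout("injected timeout")
            if TEST_MASK & INJECT_CRC_ERROR:
                return None      # CRC failure path exercised by the test
            return 42            # normal reading

        TEST_MASK = INJECT_CRC_ERROR   # harness flips a bit...
        assert read_sensor() is None   # ...and checks the error response
        TEST_MASK = 0
        assert read_sensor() == 42     # normal operation unaffected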

  16. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions producing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error report systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining for technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. Numerical study of the enhancement of combustion performance in a scramjet combustor due to injection of electric-discharge-activated oxygen molecules

    International Nuclear Information System (INIS)

    Starik, A M; Bezgin, L V; Kopchenov, V I; Loukhovitski, B I; Sharipov, A S; Titova, N S

    2013-01-01

    A comprehensive analysis of the efficiency of an approach based on the injection of a thin oxygen stream, subjected to a tailored electric discharge, into a supersonic H$_2$–air flow to enhance the combustion performance in the mixing layer and in the scramjet combustor is conducted. It is shown that for such an approach there exist optimal values of the reduced electric field $E/N$ and the transversal dimension $d$ of the injected oxygen stream, which provide the minimal length of the induction zone in the mixing layer. The optimal values of $E/N$ and $d$ depend on the air flow parameters and the specific energy put into the oxygen. The injection of a thin oxygen stream ($d = 1$ mm) subjected to an electric discharge with $E/N = 50$–$100$ Td, which produces mostly singlet oxygen $\mathrm{O_2(a^1\Delta_g)}$ and $\mathrm{O_2(b^1\Sigma_g^+)}$ molecules and atomic oxygen, allows one to arrange stable combustion in a scramjet duct at an extremely low air temperature $T_{\mathrm{air}} = 900$ K and pressure $P_{\mathrm{air}} = 0.3$ bar, even at a small specific energy put into the oxygen, $E_s = 0.2$ J ncm$^{-3}$, and to provide rather high combustion completeness, $\eta = 0.73$. The energy released during combustion is, in this case, much higher (a hundred times) than the energy supplied to the oxygen stream in the electric discharge. This approach also makes it possible to ensure rather high combustion completeness in a scramjet combustor of reduced length. The main reason for the combustion enhancement of the H$_2$–air mixture in the scramjet duct is the intensification of chain-branching reactions due to the injection of a small amount of cold non-equilibrium oxygen plasma comprising highly reactive species, $\mathrm{O_2(a^1\Delta_g)}$ and $\mathrm{O_2(b^1\Sigma_g^+)}$ molecules and O atoms, into the H$_2$–air supersonic flow. (paper)

  18. Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.

    Science.gov (United States)

    Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam

    2017-08-01

    Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior to and the year after system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.

  19. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  20. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibili...