WorldWideScience

Sample records for unequal error protection

  1. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  2. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
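
    As a toy illustration of the rate-budget idea described above (not the paper's ER-UEP algorithm), the sketch below greedily splits a parity budget across scalable layers, assuming an idealized MDS erasure code per layer and i.i.d. packet loss; the per-layer utilities are hypothetical.

```python
from math import comb

def decode_prob(k, f, p):
    """P(layer decodes): at most f of its k+f packets are lost (MDS assumption)."""
    n = k + f
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(f + 1))

def greedy_uep(u, k, p, budget):
    """Assign `budget` parity packets one at a time, always to the layer
    with the largest marginal gain in expected decoded utility."""
    f = [0] * len(u)
    for _ in range(budget):
        gains = [u[i] * (decode_prob(k[i], f[i] + 1, p) - decode_prob(k[i], f[i], p))
                 for i in range(len(u))]
        f[max(range(len(u)), key=gains.__getitem__)] += 1
    return f

# Hypothetical utilities: the base layer matters most, so it ends up
# with the most parity -- an unequal protection profile.
print(greedy_uep(u=[100.0, 30.0, 10.0], k=[4, 4, 4], p=0.1, budget=9))
```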

  3. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.
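
    To make the density-evolution machinery concrete, here is a minimal sketch on the binary erasure channel, where density evolution collapses to a scalar recursion (the paper's optimization targets more general channels). The (3,6)-regular ensemble is used only as a sanity check against its well-known threshold.

```python
def de_converges(eps, lam, rho, iters=3000, tol=1e-6):
    """Scalar density evolution on the BEC. lam/rho are edge-perspective
    degree distributions: index i holds the fraction of edges on degree-(i+1) nodes."""
    x = eps
    for _ in range(iters):
        y = 1 - sum(r * (1 - x) ** i for i, r in enumerate(rho))  # check-node update
        x = eps * sum(l * y ** i for i, l in enumerate(lam))      # variable-node update
    return x < tol

def threshold(lam, rho, steps=20):
    lo, hi = 0.0, 1.0                       # bisect the channel erasure probability
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if de_converges(mid, lam, rho) else (lo, mid)
    return lo

lam = [0.0, 0.0, 1.0]        # lambda(x) = x^2  -> degree-3 variable nodes
rho = [0.0] * 5 + [1.0]      # rho(x) = x^5     -> degree-6 check nodes
print(threshold(lam, rho))   # close to the known (3,6) BEC threshold of ~0.4294
```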

  4. Enhancement of Unequal Error Protection Properties of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Poulliat Charly

    2007-01-01

    It has been widely recognized in the literature that irregular low-density parity-check (LDPC) codes naturally exhibit an unequal error protection (UEP) behavior. In this paper, we propose a general method to emphasize and control the UEP properties of LDPC codes. The method is based on a hierarchical optimization of the bit-node irregularity profile for each sensitivity class within the codeword by maximizing the average bit-node degree while guaranteeing a minimum degree as high as possible. We show that this optimization strategy is efficient, since the codes that we optimize show better UEP capabilities than the codes optimized for the additive white Gaussian noise channel.

  5. Designing an efficient LT-code with unequal error protection for image transmission

    Science.gov (United States)

    S. Marques, F.; Schwartz, C.; Pinho, M. S.; Finamore, W. A.

    2015-10-01

    The use of images from earth observation satellites is spread over different applications, such as car navigation systems and disaster monitoring. In general, those images are captured by on-board imaging devices and must be transmitted to the Earth using a communication system. Even though a high-resolution image can produce a better quality of service, it leads to transmitters with a high bit rate, which require a large bandwidth and expend a large amount of energy. Therefore, it is very important to design efficient communication systems. From communication theory, it is well known that a source encoder is crucial in an efficient system. In a remote sensing satellite image transmission, this efficiency is achieved by using an image compressor to reduce the amount of data which must be transmitted. The Consultative Committee for Space Data Systems (CCSDS), a multinational forum for the development of communications and data system standards for space flight, establishes a recommended standard for a data compression algorithm for images from space systems. Unfortunately, in the satellite communication channel, the transmitted signal is corrupted by the presence of noise, interference signals, etc. Therefore, the receiver of a digital communication system may fail to recover the transmitted bit. A channel code can be used to reduce the effect of this failure. In 2002, the Luby Transform code (LT-code) was introduced and shown to be very efficient when the binary erasure channel model was used. Since the effect of a bit recovery failure depends on the position of the bit in the compressed image stream, in the last decade many efforts have been made to develop LT-codes with unequal error protection. In 2012, Arslan et al. showed improvements when LT-codes with unequal error protection were used on images compressed by the SPIHT algorithm. The techniques presented by Arslan et al. can be adapted to work with the algorithm for image compression
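
    A minimal sketch of the encoding side of a UEP LT-code, in the spirit of the importance-weighted constructions that Arslan et al. build on (not the exact scheme of this paper): degrees follow the robust soliton distribution, and important symbols (e.g. early SPIHT bits) are sampled with higher probability. All parameters are illustrative.

```python
import math, random

def robust_soliton(k, c=0.1, delta=0.5):
    """Standard LT degree distribution over {1..k}."""
    S = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [0.0] * k
    pivot = int(round(k / S))
    for d in range(1, min(pivot, k)):
        tau[d - 1] = S / (k * d)
    if 1 <= pivot <= k:
        tau[pivot - 1] = S * math.log(S / delta) / k
    probs = [r + t for r, t in zip(rho, tau)]
    z = sum(probs)
    return [p / z for p in probs]

def lt_encode_symbol(source, weights, degree_dist):
    """Pick a degree, then XOR that many distinct symbols drawn by importance."""
    k = len(source)
    d = random.choices(range(1, k + 1), weights=degree_dist)[0]
    idx = set()
    while len(idx) < d:
        idx.add(random.choices(range(k), weights=weights)[0])
    payload = 0
    for i in idx:
        payload ^= source[i]
    return sorted(idx), payload

k = 100
source = [random.getrandbits(8) for _ in range(k)]
weights = [3.0] * 20 + [1.0] * (k - 20)   # first 20 symbols sampled 3x more often
print(lt_encode_symbol(source, weights, robust_soliton(k)))
```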

  6. Unequal error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) with different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery compared to the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  7. Unequal Error Protected JPEG 2000 Broadcast Scheme with Progressive Fountain Codes

    OpenAIRE

    Chen, Zhao; Xu, Mai; Yin, Luiguo; Lu, Jianhua

    2012-01-01

    This paper proposes a novel scheme, based on progressive fountain codes, for broadcasting JPEG 2000 multimedia. In such a broadcast scheme, progressive resolution levels of images/video have been unequally protected when transmitted using the proposed progressive fountain codes. With progressive fountain codes applied in the broadcast scheme, the resolutions of images (JPEG 2000) or videos (MJPEG 2000) received by different users can be automatically adaptive to their channel qualities, i.e. ...

  8. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    Resilience Scheme for JPEG Image Transmission using OFDM ... of the Peak to Peak Signal to Noise power Ratio (PSNR) and the Mean Structural Similarity ... transmission over wireless mobile networks or Wireless Local Area Networks.

  9. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
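
    The effect is easy to reproduce. Below is a small Monte Carlo sketch (scipy, synthetic skewed samples with equal means, equal n, unequal variances); the exact rates depend on the chosen shapes and seed, but both tests typically reject well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, alpha, sd_ratio = 25, 5000, 0.05, 4.0
rej_t = rej_w = 0
for _ in range(reps):
    # skewed (exponential) populations, shifted so both means are 0
    x = rng.exponential(1.0, n) - 1.0
    y = rng.exponential(sd_ratio, n) - sd_ratio
    rej_t += stats.ttest_ind(x, y).pvalue < alpha
    rej_w += stats.mannwhitneyu(x, y).pvalue < alpha
print("Student t type I rate:", rej_t / reps)       # inflated above 0.05
print("Mann-Whitney type I rate:", rej_w / reps)    # also inflated
```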

  10. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    Science.gov (United States)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.

  11. Is equal moral consideration really compatible with unequal moral status?

    Science.gov (United States)

    Rossi, John

    2010-09-01

    The issue of moral considerability, or how much moral importance a being's interests deserve, is one of the most important in animal ethics. Some leading theorists--most notably David DeGrazia--have argued that a principle of "equal moral consideration" is compatible with "unequal moral status." Such a position would reconcile the egalitarian force of equal consideration with more stringent obligations to humans than animals. The article presents arguments that equal consideration is not compatible with unequal moral status, thereby forcing those who would justify significantly different moral protections for humans and animals to argue for unequal consideration.

  12. Analysis of covariance with pre-treatment measurements in randomized trials: comparison of equal and unequal slopes.

    Science.gov (United States)

    Funatogawa, Ikuko; Funatogawa, Takashi

    2011-09-01

    In randomized trials, an analysis of covariance (ANCOVA) is often used to analyze post-treatment measurements with pre-treatment measurements as a covariate to compare two treatment groups. Random allocation guarantees only equal variances of pre-treatment measurements. We hence consider data with unequal covariances and variances of post-treatment measurements without assuming normality. Recently, we showed that the actual type I error rate of the usual ANCOVA assuming equal slopes and equal residual variances is asymptotically at a nominal level under equal sample sizes, and that of the ANCOVA with unequal variances is asymptotically at a nominal level, even under unequal sample sizes. In this paper, we investigated the asymptotic properties of the ANCOVA with unequal slopes for such data. The estimators of the treatment effect at the observed mean are identical between equal and unequal variance assumptions, and these are asymptotically normal estimators for the treatment effect at the true mean. However, the variances of these estimators based on standard formulas are biased, and the actual type I error rates are not at a nominal level, irrespective of variance assumptions. In equal sample sizes, the efficiency of the usual ANCOVA assuming equal slopes and equal variances is asymptotically the same as those of the ANCOVA with unequal slopes and higher than that of the ANCOVA with equal slopes and unequal variances. Therefore, the use of the usual ANCOVA is appropriate in equal sample sizes. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
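
    For readers who want to try the two model variants, here is a sketch using statsmodels on synthetic trial data with unequal slopes and unequal residual variances; all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
group = np.repeat([0, 1], n)                # equal sample sizes
pre = rng.normal(0, 1, 2 * n)               # randomization: same pre distribution
slope = np.where(group == 0, 0.5, 0.9)      # unequal slopes
sd = np.where(group == 0, 1.0, 2.0)         # unequal residual variances
post = 0.3 * group + slope * pre + rng.normal(0, sd)
df = pd.DataFrame(dict(pre=pre, post=post, group=group))

usual = smf.ols("post ~ pre + group", data=df).fit()        # equal-slopes ANCOVA
df["pre_c"] = df.pre - df.pre.mean()                        # center the covariate
interact = smf.ols("post ~ pre_c * group", data=df).fit()   # unequal slopes
# with a centered covariate, `group` estimates the effect at the observed mean
print(usual.params["group"], interact.params["group"])
```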

  13. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.
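
    The error-floor regime is straightforward to probe by simulation. The sketch below runs frame-synchronous CSA with successive interference cancellation modeled as peeling; the degree distribution is a commonly used irregular example, and all sizes are illustrative, not the paper's settings.

```python
import random

def simulate_csa(n_users, n_slots, dist, trials=500):
    """Average fraction of unresolved users after peeling (packet loss rate)."""
    degrees, weights = zip(*dist.items())
    lost = 0
    for _ in range(trials):
        slots = [set() for _ in range(n_slots)]
        user_slots = []
        for u in range(n_users):
            d = random.choices(degrees, weights)[0]
            chosen = random.sample(range(n_slots), d)   # d replicas of packet u
            user_slots.append(chosen)
            for s in chosen:
                slots[s].add(u)
        resolved = set()
        while True:                                     # SIC = peel singleton slots
            singles = [s for s in slots if len(s) == 1]
            if not singles:
                break
            u = singles[0].pop()
            for s in user_slots[u]:
                slots[s].discard(u)
            resolved.add(u)
        lost += n_users - len(resolved)
    return lost / (trials * n_users)

# low-degree users dominate the floor; high-degree users are better protected (UEP)
dist = {2: 0.5, 3: 0.28, 8: 0.22}
print(simulate_csa(n_users=50, n_slots=100, dist=dist))  # low load: floor regime
```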

  14. Retaliation against reporters of unequal treatment: Failing employee protection in The Netherlands

    NARCIS (Netherlands)

    Svensson, Jorgen S.; van Genugten, M.L.

    2013-01-01

    Purpose – Equal treatment in the workplace is considered one of the most fundamental rights of employees. This right also implies that employees must be able to address any form of unequal treatment freely and effectively, without fear of retaliation. The purpose of this paper is to investigate the ...

  15. Reply to "Comment on `Protecting bipartite entanglement by quantum interferences' "

    Science.gov (United States)

    Das, Sumanta; Agarwal, G. S.

    2018-03-01

    In a recent Comment [Nair and Arun, Phys. Rev. A 97, 036301 (2018), 10.1103/PhysRevA.97.036301], it was concluded that the two-qubit entanglement protection reported in our work [Das and Agarwal, Phys. Rev. A 81, 052341 (2010), 10.1103/PhysRevA.81.052341] is erroneous. While we acknowledge the error in the analytical results on concurrence when the dipole matrix elements are unequal, the essential conclusions on entanglement protection are not affected.

  16. Assessing Visibility of Individual Transmission Errors in Networked Video

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire

    2016-01-01

    ...could benefit from information about subjective visibility of individual packet losses; for example, computational resources could be directed more efficiently to unequal error protection and concealment by focusing on the visually most disturbing artifacts. In this paper, we present a novel subjective methodology for packet loss artifact detection by tapping a touchscreen where a defect is observed. To validate the proposed methodology, the results of a pilot study are presented and analyzed. According to the results, the proposed method can be used to derive qualitatively and statistically meaningful data on the subjective visibility of individual packet loss artifacts.

  17. Unequal Protection of Video Streaming through Adaptive Modulation with a Trizone Buffer over Bluetooth Enhanced Data Rate

    Directory of Open Access Journals (Sweden)

    Razavi Rouzbeh

    2008-01-01

    The Bluetooth enhanced data rate wireless channel can support higher-quality video streams compared to previous versions of Bluetooth. Packet loss when transmitting compressed data has an effect on the delivered video quality that endures over multiple frames. To reduce the impact of radio-frequency noise and interference, this paper proposes adaptive modulation based on content type at the video frame level and content importance at the macroblock level. Because the bit rate of protected data is reduced, the paper proposes buffer management to reduce the risk of buffer overflow. A trizone buffer is introduced, with a varying unequal protection policy in each zone. Application of this policy together with adaptive modulation results in up to 4 dB improvement in objective video quality compared to a fixed-rate scheme for an additive white Gaussian noise channel and around 10 dB for a Gilbert-Elliott channel. The paper also reports a consistent improvement in video quality over a scheme that adapts to channel conditions by varying the data rate without accounting for the video frame packet type or buffer congestion.

  19. On the Efficient Broadcasting of Heterogeneous Services over Band-Limited Channels: Unequal Power Allocation for Wavelet Packet Division Multiplexing

    Directory of Open Access Journals (Sweden)

    Maurizio Murroni

    2008-01-01

    Multiple transmission of heterogeneous services is a central aspect of broadcasting technology. Often, in this framework, the design of efficient communication systems is complicated by a stringent bandwidth constraint. In wavelet packet division multiplexing (WPDM), the message signals are waveform coded onto wavelet packet basis functions. The overlapping nature of such waveforms in both time and frequency improves performance over the commonly used FDM and TDM schemes, while their orthogonality properties permit extracting the message signals with a simple correlator receiver. Furthermore, the scalable structure of WPDM makes it suitable for broadcasting heterogeneous services. This work investigates unequal error protection (UEP) of data which exhibit different sensitivities to channel errors to improve the performance of WPDM for transmission over band-limited channels. To cope with the bandwidth constraint, an appropriate distribution of power among waveforms is proposed which is driven by the channel error sensitivities of the carried message signals in the case of Gaussian noise. We address this problem by means of genetic algorithms (GAs), which allow a flexible suboptimal solution with reduced complexity. The mean square error (MSE) between the original and the decoded message, which has a strong correlation with subjective perception, is used as the optimization criterion.

  20. University of Mauritius Research Journal - Vol 13 (2007)

    African Journals Online (AJOL)

    A Hybrid Unequal Error Protection / Unequal Error Resilience Scheme for JPEG Image Transmission using OFDM. TP Fowdur, KMS Soyjaudah, 57-68.

  1. The Unequal Power Relation in the Final Interpretation

    DEFF Research Database (Denmark)

    Almlund, Pernille

    2013-01-01

    ...if the interpretation also takes the unequal power relation into account. Consequently, interpreting the researched in a respectful manner is difficult. This article demonstrates the necessity of increasing awareness of the unequal power relation by posing, discussing and, to some extent answering, three methodological questions inspired by meta-theory that are significant for qualitative research and qualitative researchers to reflect on. This article concludes that respectful interpretation and consciously paying attention to the unequal power relation in the final interpretation require decentring the subject...

  2. Class-specific Error Bounds for Ensemble Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  3. Quasi-human seniority-order algorithm for unequal circles packing

    International Nuclear Information System (INIS)

    Zhu, Dingju

    2016-01-01

    In existing methods for solving unequal circle packing problems, the initial configuration is chosen arbitrarily or randomly, yet the choice of initial configuration has a large impact on how quickly an existing packing algorithm solves the problem. The quasi-human seniority-order algorithm proposed in this paper generates a better initial configuration for existing packing algorithms, thereby accelerating their solution of unequal circle packing problems. In experiments, the quasi-human seniority-order algorithm is applied to generate better initial configurations for quasi-physical elasticity methods, and the experimental results show that the proposed algorithm greatly improves the speed of solving the problem.

  4. Support of protective work of human error in a nuclear power plant

    International Nuclear Information System (INIS)

    Yoshizawa, Yuriko

    1999-01-01

    The nuclear power plant human factors group of the Tokyo Electric Power Co., Ltd. supports various human error prevention activities conducted at nuclear power plants. Its main research themes are studies on human factors in nuclear power plant operation and on error recovery, together with common basic studies on human factors. In addition, on the basis of the information obtained, the group assists the human error prevention work conducted at nuclear power plants and develops it for practical use. In particular, for sharing information on hazards, several forms of assistance were promoted: a proposed case-analysis method for understanding hazard information faithfully rather than superficially, the construction of a database for conveniently sharing such information, and a survey of non-accident operations as a hint for effective promotion of the prevention work. This paper introduces the assistance and investigation for effective sharing of hazard information in the human error prevention activities conducted mainly at nuclear power plants. (G.K.)

  5. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    Science.gov (United States)

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
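
    A compact sketch of the two analyses on synthetic LOS-like counts (scipy; the real study resampled actual discharge data): Welch's unequal variances t test on raw LOS versus Fisher's exact test on the binary "LOS <= 1 midnight" endpoint.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
los_a = rng.poisson(1.2, 200)     # synthetic stand-ins for two hospitals
los_b = rng.poisson(1.6, 200)

# unequal variances (Welch) t test on the raw lengths of stay
print("Welch p:", stats.ttest_ind(los_a, los_b, equal_var=False).pvalue)

# Fisher exact test on the binary endpoint "suitable for ambulatory surgery"
table = [[int(np.sum(los_a <= 1)), int(np.sum(los_a > 1))],
         [int(np.sum(los_b <= 1)), int(np.sum(los_b > 1))]]
odds, p = stats.fisher_exact(table)
print("Fisher p:", p)
```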

  6. Legal Marriage, Unequal Recognition, and Mental Health among Same-Sex Couples.

    Science.gov (United States)

    LeBlanc, Allen J; Frost, David M; Bowen, Kayla

    2018-04-01

    The authors examined whether the perception of unequal relationship recognition, a novel, couple-level minority stressor, has negative consequences for mental health among same-sex couples. Data came from a dyadic study of 100 (N = 200) same-sex couples in the U.S. Being in a legal marriage was associated with lower perceived unequal recognition and better mental health; being in a registered domestic partnership or civil union - not also legally married - was associated with greater perceived unequal recognition and worse mental health. Actor-Partner Interdependence Models tested associations between legal relationship status, unequal relationship recognition, and mental health (nonspecific psychological distress, depressive symptomatology, and problematic drinking), net of controls (age, gender, race/ethnicity, education, and income). Unequal recognition was consistently associated with worse mental health, independent of legal relationship status. Legal changes affecting relationship recognition should not be seen as simple remedies for addressing the mental health effects of institutionalized discrimination.

  7. HIV / AIDS: An Unequal Burden

    Science.gov (United States)

    HIV/AIDS: An Unequal Burden. Past Issues / Summer 2009 ... high-risk category, emphasizes Dr. Cargill. HIV and Pregnancy: Are there ways to help HIV- ...

  8. Unequal-Arms Michelson Interferometers

    Science.gov (United States)

    Tinto, Massimo; Armstrong, J. W.

    2000-01-01

    Michelson interferometers allow phase measurements many orders of magnitude below the phase stability of the laser light injected into their two almost equal-length arms. If, however, the two arms are unequal, the laser fluctuations cannot be removed by simply recombining the two beams. This is because the laser jitters experience different time delays in the two arms, and therefore cannot cancel at the photodetector. We present here a method for achieving exact laser noise cancellation, even in an unequal-arm interferometer. The method presented in this paper requires a separate readout of the relative phase in each arm, made by interfering the returning beam in each arm with a fraction of the outgoing beam. By linearly combining the two data sets with themselves, after they have been properly time shifted, we show that it is possible to construct a new data set that is free of laser fluctuations. An application of this technique to future planned space-based laser interferometer detectors of gravitational radiation is discussed.
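
    The cancellation is easy to verify numerically. Under the simplifying assumptions of integer-sample delays and a periodic toy delay operator, combining the two time-shifted readouts removes the common laser noise exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L1, L2 = 100_000, 101, 57        # samples; unequal one-way arm delays
p = rng.normal(size=N)              # laser phase noise

def D(x, d):                        # toy delay operator (periodic)
    return np.roll(x, d)

# per-arm readouts: returning beam interfered with a sample of the outgoing beam
y1 = D(p, 2 * L1) - p
y2 = D(p, 2 * L2) - p
# time-shifted combination in which every laser-noise term cancels
X = (y1 - D(y1, 2 * L2)) - (y2 - D(y2, 2 * L1))
print(np.max(np.abs(X)))            # 0.0: laser fluctuations are removed
```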

  9. Hecke algebras with unequal parameters

    CERN Document Server

    Lusztig, G

    2003-01-01

    Hecke algebras arise in representation theory as endomorphism algebras of induced representations. One of the most important classes of Hecke algebras is related to representations of reductive algebraic groups over p-adic or finite fields. In 1979, in the simplest (equal parameter) case of such Hecke algebras, Kazhdan and Lusztig discovered a particular basis (the KL-basis) in a Hecke algebra, which is very important in studying relations between representation theory and geometry of the corresponding flag varieties. It turned out that the elements of the KL-basis also possess very interesting combinatorial properties. In the present book, the author extends the theory of the KL-basis to a more general class of Hecke algebras, the so-called algebras with unequal parameters. In particular, he formulates conjectures describing the properties of Hecke algebras with unequal parameters and presents examples verifying these conjectures in particular cases. Written in the author's precise style, the book gives rese...

  10. Unequal recognition, misrecognition and injustice

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2012-01-01

    ...by the state of religious minorities. It argues that state–religion relations can be analysed as relations of recognition, which are not only unequal but also multi-dimensional, and that it is difficult to answer the question whether multi-dimensional recognitive inequalities are unjust or wrong if one...

  11. Equidistant Linear Network Codes with maximal Error-protection from Veronese Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    2012-01-01

    Linear network coding transmits information in terms of a basis of a vector space, and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang, in Coding for errors and erasures in random network coding (IEEE Transactions on Information Theory), introduced a metric on the set of vector spaces for this setting. We construct explicit families of vector spaces of constant dimension where any pair of distinct vector spaces are equidistant in the above metric. The parameters of the resulting linear network codes, which have maximal error protection, are determined.

  12. An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard

    Science.gov (United States)

    Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi

    H.264/AVC is the newest video coding standard, and it has many new features which can readily be used for video encryption. In this paper, we propose a new scheme for video encryption under the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. This USE scheme includes two parts: video data classification and unequal secure video data encryption. First, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small and receives strong protection, while the unimportant data partition is large and receives weaker protection. Second, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and can ensure high security. LEX is a newer stream cipher based on AES whose computational cost is much lower than that of AES. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost for the hybrid AES/LEX module is 4678 gates and the AES encryption throughput is about 50 Mbps.
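
    A rough sketch of the split-encryption idea using the pyca/cryptography package; LEX is not available in mainstream libraries, so ChaCha20 stands in here as the low-cost stream cipher, and the important/unimportant classification is a placeholder.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)

def encrypt_use(important: bytes, unimportant: bytes):
    # strong cipher (AES in CTR mode) for the small, important partition
    iv = os.urandom(16)
    aes = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    ct_imp = aes.update(important) + aes.finalize()
    # cheap stream cipher for the large, unimportant partition
    nonce = os.urandom(16)
    stream = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
    ct_unimp = stream.update(unimportant) + stream.finalize()
    return (iv, ct_imp), (nonce, ct_unimp)

# e.g. headers and motion vectors as "important", residuals as "unimportant"
(iv, ct_hdr), (nonce, ct_res) = encrypt_use(b"\x00" * 64, b"\x00" * 4096)
```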

  13. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    Science.gov (United States)

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  14. Reducing Error, Fraud and Corruption (EFC) in Social Protection Programs

    OpenAIRE

    Tesliuc, Emil Daniel; Milazzo, Annamaria

    2007-01-01

    Since Social Protection (SP) and Social Safety Net (SSN) programs channel a large amount of public resources, it is important to make sure that these reach the intended beneficiaries. Error, fraud, or corruption (EFC) reduces the economic efficiency of these interventions by decreasing the amount of money that goes to the intended beneficiaries, and erodes the political support for the program. ...

  15. Kazhdan-Lusztig cells with unequal parameters

    CERN Document Server

    Bonnafé, Cédric

    2017-01-01

    This monograph provides a comprehensive introduction to the Kazhdan-Lusztig theory of cells in the broader context of the unequal parameter case. Serving as a useful reference, the present volume offers a synthesis of significant advances made since Lusztig’s seminal work on the subject was published in 2002. The focus lies on the combinatorics of the partition into cells for general Coxeter groups, with special attention given to induction methods, cellular maps and the role of Lusztig's conjectures. Using only algebraic and combinatorial methods, the author carefully develops proofs, discusses open conjectures, and presents recent research, including a chapter on the action of the cactus group. Kazhdan-Lusztig Cells with Unequal Parameters will appeal to graduate students and researchers interested in related subject areas, such as Lie theory, representation theory, and combinatorics of Coxeter groups. Useful examples and various exercises make this book suitable for self-study and use alongside lecture c...

  16. Compact Unequal Power Divider with Filtering Response

    Directory of Open Access Journals (Sweden)

    Wei-Qiang Pan

    2015-01-01

    We present a novel unequal power divider with bandpass responses. The proposed power divider consists of five resonators and a resistor. The power division ratio is controlled by altering the coupling strength among the resonators. The output ports have the characteristic impedance of 50 Ω, so the impedance transformers of classical Wilkinson power dividers are not required in this design. Use of resonators enables the filtering function of the power divider. Two transmission zeros are generated near the passband edges, resulting in quasielliptic bandpass responses. For validation, a 2 : 1 filtering power divider is implemented. The fabricated circuit size is 0.22 λg × 0.08 λg, featuring a compact size for unequal filtering power dividers, which is suitable for the feeding networks of antenna arrays.

  17. Algorithms for Unequal-Arm Michelson Interferometers

    Science.gov (United States)

    Giampieri, Giacomo; Hellings, Ronald W.; Tinto, Massimo; Bender, Peter L.; Faller, James E.

    1994-01-01

    A method of data acquisition and data analysis is described in which the performance of Michelson-type interferometers with unequal arms can be made nearly the same as interferometers with equal arms. The method requires a separate readout of the relative phase in each arm, made by interfering the returning beam in each arm with a fraction of the outgoing beam.

  18. An Unequal Information Society: How Information Access Initiatives Contribute to the Construction of Inequality

    Science.gov (United States)

    Sanfilippo, Madelyn Rose

    2016-01-01

    Unequal access to information has significant social and political consequences, and is itself a consequence of sociotechnical systems born of social, cultural, economic, and institutional context. Information is unequally distributed both within and between communities. While many factors that shape information inequality shift subtly over time,…

  19. Design and analysis of unequal split Bagley power dividers

    Science.gov (United States)

    Abu-Alnadi, Omar; Dib, Nihad; Al-Shamaileh, Khair; Sheta, Abdelfattah

    2015-03-01

    In this article, we propose a general design procedure for unequal split Bagley power dividers (BPDs). Based on a mathematical approach grounded in simple circuit and transmission-line theory, exact design equations for 3-way and 5-way BPDs are derived. Utilising the developed equations leads to power dividers that can offer different output power ratios through a suitable choice of the characteristic impedances of the interconnecting transmission lines. For verification purposes, 1:2:1 3-way, 1:2:1:2:1 5-way, and 1:3:1:3:1 5-way BPDs are designed and fabricated. The experimental and full-wave simulation results prove the validity of the designed unequal split BPDs.

  20. K-harmonic solution for three bound unequal particles

    International Nuclear Information System (INIS)

    Coelho, H.T.; Consoni, L.; Vallieres, M.

    1978-01-01

    The problem of three bound unequal particles is analysed using K-harmonics, with attention to how the nature of the interactions and the asymmetries of the system affect the convergence of the solutions. The Coulomb interaction, which gives closed expressions for the matrix elements of the potential in this method, is discussed. [pt]

  1. The association between unequal parental treatment and the sibling relationship in Finland: The difference between full and half-siblings.

    Science.gov (United States)

    Danielsbacka, Mirkka; Tanskanen, Antti O

    2015-06-24

    Studies have shown that unequal parental treatment is associated with relationship quality between siblings. However, it is unclear how it affects the relationship between full and half-siblings. Using data from the Generational Transmissions in Finland project (n = 1,537 younger adults), we study whether those who have half-siblings perceive more unequal parental treatment than those who have full siblings only. In addition, we study how unequal parental treatment is associated with the sibling relationship between full, maternal, and paternal half-siblings. First, we found that individuals who have maternal and/or paternal half-siblings are more likely to have encountered unequal maternal treatment than individuals who have full siblings only. Second, we found that unequal parental treatment impairs full as well as maternal and paternal half-sibling relations in adulthood. Third, unequal parental treatment mediates the effect of genetic relatedness on sibling relations in the case of maternal half-siblings, but not in the case of paternal half-siblings. After controlling for unequal parental treatment, the quality of maternal half-sibling relationships did not differ from that of full siblings, whereas the quality of paternal half-sibling relationships still did. Fourth, the qualitative comments (n = 206) from the same population reveal that unequal parental treatment presents itself in several ways, such as differential financial, emotional, or practical support.

  3. Unequal-time correlators for cosmology

    Science.gov (United States)

    Kitching, T. D.; Heavens, A. F.

    2017-03-01

    Measurements of the power spectrum from large-scale structure surveys have, to date, assumed an equal-time approximation, where the full cross-correlation power spectrum of the matter density field evaluated at different times (or distances) has been approximated either by the power spectrum at a fixed time or, in an improved fashion, by a geometric mean P(k; r₁, r₂) = [P(k; r₁) P(k; r₂)]^{1/2}. In this paper we investigate the expected impact of the geometric mean ansatz and present an application in assessing the impact on weak-gravitational-lensing cosmological parameter inference, using a perturbative unequal-time correlator. As one might expect, we find that the impact of this assumption is greatest at large separations in redshift Δz ≳ 0.3, where the change in the amplitude of the matter power spectrum can be as much as 10 percent for k ≳ 5 h Mpc⁻¹. However, of more concern is that the corrections for small separations, where the clustering is not close to zero, may not be negligibly small. In particular, we find that for a Euclid- or LSST-like weak lensing experiment, the assumption of equal-time correlators may result in biased predictions of the cosmic shear power spectrum, and that the impact is strongly dependent on the amplitude of the intrinsic alignment signal. To compute unequal-time correlations to sufficient accuracy will require advances in either perturbation theory to high k modes or extensive use of simulations.

  4. A Two Unequal Fluids (TUF) model for thermalhydraulics analysis

    International Nuclear Information System (INIS)

    Bonalumi, R.A.; Liu, W.S.; Yousef, W.W.; Pascoe, J.

    1983-01-01

    TUF is an advanced two-phase flow computer code being developed at Ontario Hydro for the analysis of thermalhydraulic transients in which the Homogeneous Equilibrium Model is not adequate, i.e., when the two phases (vapor and liquid) have Unequal Velocities (UV) and Unequal Temperatures (UT). The paper covers only one of the several development areas encompassed by TUF, namely its mathematical aspects. TUF's basic features include numerical solution of the mass-energy balance equations over fixed control volumes and semi-analytical solution of the momentum equations at junctions (such that the solution is unconditionally stable and has UV-UT choking and flooding limitations built in). Two strategies are being developed: one based on the Porsching approach (for short-term use in an existing system code) and the other based on a two-step pressure field approach (computationally more efficient and unconditionally stable). Some simple test cases are presented

  5. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    Science.gov (United States)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
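
    The UEP mechanism itself is just the asymmetric constellation. A single-user AWGN sketch (no spreading; all parameters illustrative) shows the two bit streams ending up with different error rates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
theta = np.deg2rad(30)              # < 45 deg: the I axis carries more energy
bi = rng.integers(0, 2, n) * 2 - 1  # bits on I (more protected)
bq = rng.integers(0, 2, n) * 2 - 1  # bits on Q (less protected)
s = np.cos(theta) * bi + 1j * np.sin(theta) * bq      # unit-energy AQPSK symbols
snr_db = 8.0
sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))            # per-dimension noise std
r = s + sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
print("I-bit BER:", np.mean((r.real > 0) != (bi > 0)))  # lower
print("Q-bit BER:", np.mean((r.imag > 0) != (bq > 0)))  # higher
```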

  6. Recent study, but not retrieval, of knowledge protects against learning errors.

    Science.gov (United States)

    Mullet, Hillary G; Umanath, Sharda; Marsh, Elizabeth J

    2014-11-01

    Surprisingly, people incorporate errors into their knowledge bases even when they have the correct knowledge stored in memory (e.g., Fazio, Barber, Rajaram, Ornstein, & Marsh, 2013). We examined whether heightening the accessibility of correct knowledge would protect people from later reproducing misleading information that they encountered in fictional stories. In Experiment 1, participants studied a series of target general knowledge questions and their correct answers either a few minutes (high accessibility of knowledge) or 1 week (low accessibility of knowledge) before exposure to misleading story references. In Experiments 2a and 2b, participants instead retrieved the answers to the target general knowledge questions either a few minutes or 1 week before the rest of the experiment. Reading the relevant knowledge directly before the story-reading phase protected against reproduction of the misleading story answers on a later general knowledge test, but retrieving that same correct information did not. Retrieving stored knowledge from memory might actually enhance the encoding of relevant misinformation.

  7. Performance analysis of amplify-and-forward two-way relaying with co-channel interference and channel estimation error

    KAUST Repository

    Yang, Liang

    2013-04-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal-power co-channel interferers (CCI). Specifically, we consider an AF TWRN with an interference-limited relay and two noisy nodes with channel estimation error and CCI. We derive approximate signal-to-interference-plus-noise ratio expressions and then use them to evaluate the outage probability and error probability. Numerical results show that the approximate closed-form expressions are very close to the exact ones. © 2013 IEEE.

  8. The Dugdale solution for two unequal straight cracks weakening

    Indian Academy of Sciences (India)

    A crack arrest model is proposed for an infinite elastic perfectly-plastic plate weakened by two unequal, quasi-static, collinear straight cracks. The Dugdale model solution is obtained for the above problem when the developed plastic zones are subjected to normal cohesive quadratically varying yield point stress. Employing ...

  9. Human error probability evaluation as part of reliability analysis of digital protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Lee, D. Y.; Han, J. B.

    2003-03-01

    A case study on human reliability analysis has been performed as part of the reliability analysis of the digital protection system of the reactor, which automatically actuates the reactor shutdown system when demanded. The safety analysis, however, takes credit for operator action as a diverse means of tripping the reactor in an anticipated-transient-without-scram (ATWS) scenario, albeit one of low probability. Based on the available information, two cases have been analyzed: human error in tripping the reactor, and calibration error for the instrumentation in the protection system. Wherever applicable, a parametric study has also been performed

  10. 40 CFR 73.37 - Account error.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment: ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), SULFUR DIOXIDE ALLOWANCE SYSTEM, Allowance Tracking System, § 73.37 Account error. The Administrator may, at his or her sole...

  11. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    Science.gov (United States)

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.

  12. The Theory of Unequal Exchange: The End of the Debate?

    NARCIS (Netherlands)

    R. Brown (Richard)

    1978-01-01

    The overall objective of this work is to examine the theory of Unequal Exchange, the recent critiques of it, and its interrelation with questions concerning the effects and role of foreign investment in underdeveloped countries. Interest in this debate was stimulated largely by the

  13. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    Energy Technology Data Exchange (ETDEWEB)

    Moss, A.R.L

    2000-07-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which uses traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected, and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
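
    For reference, a UPF is the erythemally weighted solar UVR dose without the fabric divided by the dose transmitted through it. The sketch below uses a CIE-like erythemal action spectrum and a deliberately flat solar spectrum as placeholders (not standard reference data) to show how a small transmittance error propagates into the UPF:

```python
import numpy as np

wl = np.arange(290.0, 401.0, 5.0)                  # wavelength grid, nm
# CIE-like erythemal action spectrum (piecewise; placeholder, not reference data)
E = np.piecewise(wl, [wl <= 298, (wl > 298) & (wl <= 328), wl > 328],
                 [1.0,
                  lambda l: 10 ** (0.094 * (298 - l)),
                  lambda l: 10 ** (0.015 * (139 - l))])
S = np.ones_like(wl)                               # flat solar spectrum placeholder

def upf(T):
    w = E * S
    return w.sum() / (w * T).sum()                 # dose without / dose with fabric

T = np.full_like(wl, 0.025)        # fabric transmitting a flat 2.5% -> UPF 40
print(upf(T))
print(upf(T - 0.001), upf(T + 0.001))   # a 0.1%T offset already shifts the UPF
```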

  14. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    International Nuclear Information System (INIS)

    Moss, A.R.L.

    2000-01-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
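
Both records above turn on how transmittance error maps into UPF error, so a first-order propagation sketch is useful. UPF is formally an erythemally weighted spectral quantity under AS/NZS 4399; the scalar approximation UPF = 1/T below only shows the 1/T² scaling that makes a ±1.5%T instrument far too coarse and a ±0.1%T instrument adequate. The numbers are indicative, not the paper's asymmetric figures, which come from the full spectral calculation.

```python
# Simplified sketch: how transmittance uncertainty propagates into UPF.
# Real UPF uses erythemally weighted spectral transmittance (AS/NZS 4399);
# here the scalar approximation UPF = 1/T shows the scaling behaviour only.

def upf(T: float) -> float:
    return 1.0 / T

def upf_uncertainty(T: float, dT: float) -> float:
    # First-order propagation for UPF = 1/T: dUPF = dT / T^2
    return dT / T**2

T = 0.025                    # 2.5 %T corresponds to UPF 40
for dT in (0.015, 0.001):    # quoted accuracies: +/-1.5 %T vs +/-0.1 %T
    print(f"T={T:.3f}, dT={dT:.3f}: UPF={upf(T):.0f} +/- {upf_uncertainty(T, dT):.1f}")
```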

  15. Random linear network coding for streams with unequally sized packets

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However, ...
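
The zero-padding workaround mentioned in the abstract is easy to make concrete. Below is a minimal sketch using XOR (GF(2)) in place of the larger Galois fields real RLNC uses, to keep it short; the padding bytes on the short packets are exactly the overhead the paper's approach targets.

```python
import random

# Toy sketch of the zero-padding step for RLNC, coded over GF(2) via XOR.

def zero_pad(packets):
    """Pad every packet with zero bytes up to the longest packet's length."""
    max_len = max(len(p) for p in packets)
    return [p + bytes(max_len - len(p)) for p in packets]

def xor_combine(packets, coeffs):
    """Combine packets whose coefficient is 1 (a GF(2) 'random linear' combination)."""
    out = bytearray(len(packets[0]))
    for p, c in zip(packets, coeffs):
        if c:
            out = bytearray(a ^ b for a, b in zip(out, p))
    return bytes(out)

packets = [b"short", b"a much longer packet", b"mid size"]
padded = zero_pad(packets)                      # overhead = padding on short packets
coeffs = [random.randint(0, 1) for _ in padded]
coded = xor_combine(padded, coeffs)
print(len(coded), coeffs)
```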

  16. X-ray- and TEM-induced mitotic recombination in Drosophila melanogaster: Unequal and sister-strand recombination

    International Nuclear Information System (INIS)

    Becker, H.J.

    1975-01-01

    Twin mosaic spots of dark-apricot and light-apricot ommatidia were found in the eyes of w^a/w^a females, of w^a males, of females homozygous for In(1)sc^4, w^a and of attached-X females homozygous for w^a. The flies were raised from larvae which had been treated with 1,630 R of X-rays at the age of 48-52 hours. An additional group of w^a/w^a females and w^a males came from larvae that had been fed with triethylene melamine (TEM) at the age of 22-24 hours. The twin spots apparently were the result of induced unequal mitotic recombination, i.e. from unequal sister-strand recombination in the males and from unequal sister-strand recombination as well as, possibly, unequal recombination between homologous strands in the females. That is, a duplication resulted in w^a Dp w^a/w^a dark-apricot ommatidia and the corresponding deficiency in an adjacent area of w^a/Df w^a light-apricot ommatidia. In an additional experiment, sister-strand mitotic recombination in the ring-X chromosome of ring-X/rod-X females heterozygous for w and w^co is believed to be the cause of X-ray induced single mosaic spots that show the phenotype of the rod-X marker. (orig.)

  17. Wide brick tunnel randomization - an unequal allocation procedure that limits the imbalance in treatment totals.

    Science.gov (United States)

    Kuznetsova, Olga M; Tymofyeyev, Yevgen

    2014-04-30

    In open-label studies, partial predictability of permuted block randomization provides potential for selection bias. To lessen the selection bias in two-arm studies with equal allocation, a number of allocation procedures that limit the imbalance in treatment totals at a pre-specified level but do not require the exact balance at the ends of the blocks were developed. In studies with unequal allocation, however, the task of designing a randomization procedure that sets a pre-specified limit on imbalance in group totals is not resolved. Existing allocation procedures either do not preserve the allocation ratio at every allocation or do not include all allocation sequences that comply with the pre-specified imbalance threshold. Kuznetsova and Tymofyeyev described the brick tunnel randomization for studies with unequal allocation that preserves the allocation ratio at every step and, in the two-arm case, includes all sequences that satisfy the smallest possible imbalance threshold. This article introduces wide brick tunnel randomization for studies with unequal allocation that allows all allocation sequences with imbalance not exceeding any pre-specified threshold while preserving the allocation ratio at every step. In open-label studies, allowing a larger imbalance in treatment totals lowers selection bias because of the predictability of treatment assignments. The applications of the technique in two-arm and multi-arm open-label studies with unequal allocation are described. Copyright © 2013 John Wiley & Sons, Ltd.
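
For intuition, a simplified imbalance-capped allocation loop is sketched below. It is not the wide brick tunnel algorithm (in particular, it does not preserve the unconditional allocation ratio at every step the way the paper's procedure does), but it shows the basic mechanism of forcing an assignment whenever the alternative would push group totals past the threshold.

```python
import random

# Simplified sketch of imbalance-capped randomization for a 2:1 allocation.
# NOT the wide brick tunnel algorithm; illustrative only.

def constrained_sequence(n, ratio=(2, 1), threshold=1.0, rng=random.Random(0)):
    wA, wB = ratio
    target = wA / (wA + wB)          # expected fraction on arm A
    seq, nA = [], 0
    for i in range(1, n + 1):
        # imbalance if the next subject went to A vs to B
        imb_A = abs((nA + 1) - target * i)
        imb_B = abs(nA - target * i)
        allowed = [a for a, imb in (("A", imb_A), ("B", imb_B)) if imb <= threshold]
        if len(allowed) == 2:
            pick = "A" if rng.random() < target else "B"
        else:
            pick = allowed[0]        # forced assignment to respect the cap
        seq.append(pick)
        nA += pick == "A"
    return seq

print("".join(constrained_sequence(12)))
```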

  18. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    Science.gov (United States)

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
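
The unequal-variance normal model that PLUM fits via ordinal regression can also be summarized through the zROC line: its slope estimates σ_noise/σ_signal and its intercept μ_signal/σ_signal. Below is a conceptual Python sketch with hypothetical rating counts; it is not the SPSS procedure itself.

```python
import numpy as np
from scipy.stats import norm

# Conceptual zROC fit of the unequal-variance Gaussian SDT model.
# Rating counts below are hypothetical.

signal_counts = np.array([10, 20, 30, 40, 60])   # ratings 1 (sure noise) .. 5 (sure signal)
noise_counts  = np.array([60, 40, 30, 20, 10])

# Cumulative "yes" rates for each possible criterion placement.
hit = np.cumsum(signal_counts[::-1])[:-1] / signal_counts.sum()
fa  = np.cumsum(noise_counts[::-1])[:-1] / noise_counts.sum()

zH, zF = norm.ppf(hit), norm.ppf(fa)
slope, intercept = np.polyfit(zF, zH, 1)   # zH = intercept + slope * zF

# With noise ~ N(0,1): slope = 1/sigma_s, intercept = mu_s/sigma_s.
print(f"sigma_s: {1/slope:.2f}, mu_s: {intercept/slope:.2f}")
```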

  19. Error Control Techniques for Efficient Multicast Streaming in UMTS Networks: Proposals and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Michele Rossi

    2004-06-01

    In this paper we introduce techniques for efficient multicast video streaming in UMTS networks where a video content has to be conveyed to multiple users in the same cell. Efficient multicast data delivery in UMTS is still an open issue. In particular, suitable solutions have to be found to cope with wireless channel errors, while maintaining both an acceptable channel utilization and a controlled delivery delay over the wireless link between the serving base station and the mobile terminals. Here, we first highlight that standard solutions such as unequal error protection (UEP) of the video flow are ineffective in UMTS systems due to the inherent large feedback delay at the link layer (Radio Link Control, RLC). Subsequently, we propose a local approach to solve errors directly at the UMTS link layer while keeping a reasonably high channel efficiency and saving, as much as possible, system resources. The solution that we propose in this paper is based on the usage of the common channel to serve all the interested users in a cell. In this way, we can save resources with respect to the case where multiple dedicated channels are allocated for every user. In addition to that, we present a hybrid ARQ (HARQ) proactive protocol that, at the cost of some redundancy (added to the link layer flow), is able to consistently improve the channel efficiency with respect to the plain ARQ case, thereby making the use of a single common channel for multicast data delivery feasible. In the last part of the paper we give some hints for future research, by envisioning the usage of the aforementioned error control protocols with suitably encoded video streams.
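
The efficiency argument for proactive redundancy over per-user ARQ can be made with a short binomial calculation. The sketch below assumes i.i.d. packet losses and made-up parameters; it is the underlying arithmetic, not the paper's protocol.

```python
from math import comb

# Why proactive redundancy helps multicast: with plain ARQ every one of U
# users must receive each packet, while a proactive (n, k) erasure code lets
# each user tolerate n-k losses locally without feedback.

def block_success(n, k, p):
    """P(a user recovers k data packets from n sent, i.i.d. loss prob p)."""
    return sum(comb(n, i) * (1 - p)**(n - i) * p**i for i in range(n - k + 1))

p, users, k = 0.1, 20, 10
for r in range(0, 6):
    ps = block_success(k + r, k, p)
    print(f"redundancy r={r}: per-user {ps:.3f}, all {users} users {ps**users:.3f}")
```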

  20. Effect of unequal fuel and oxidizer Lewis numbers on flame dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shamim, Tariq [Department of Mechanical Engineering, The University of Michigan-Dearborn, Dearborn, MI 48128-1491 (United States)

    2006-12-15

    The interaction of non-unity Lewis number (due to preferential diffusion and/or unequal rates of heat and mass transfer) with the coupled effect of radiation, chemistry and unsteadiness alters several characteristics of a flame. The present study numerically investigates this interaction with a particular emphasis on the effect of unequal and non-unity fuel and oxidizer Lewis numbers in a transient diffusion flame. The unsteadiness is simulated by considering the flame subjected to modulations in reactant concentration. Flames with different Lewis numbers (ranging from 0.5 to 2) and subjected to different modulating frequencies are considered. The results show that the coupled effect of Lewis number and unsteadiness strongly influences the flame dynamics. The impact is stronger at high modulating frequencies and strain rates, particularly for large values of Lewis numbers. Compared to the oxidizer side Lewis number, the fuel side Lewis number has greater influence on flame dynamics. (author)

  1. Load bearing capacity of welded joints between dissimilar pipelines with unequal wall thickness

    Energy Technology Data Exchange (ETDEWEB)

    Beak, Jonghyun; Kim, Youngpyo; Kim, Woosik [Korea Gas Corporation, Suwon (Korea, Republic of)

    2012-09-15

    The behavior of the load bearing capacity of a pipeline with unequal wall thickness was evaluated using finite element analyses. Pipelines with a wall thickness ratio of 1.22-1.89 were adopted to investigate plastic collapse under tensile, internal pressure, or bending stress. A parametric study showed that the tensile strength and moment of a pipeline with a wall thickness ratio less than 1.5 were not influenced by the wall thickness ratio and taper angle; however, those of a pipeline with a wall thickness ratio more than 1.5 decreased considerably at a low taper angle. The failure pressure of a pipeline with unequal wall thickness was not influenced by the wall thickness ratio and taper angle.

  2. Ecologically unequal exchange, recessions, and climate change: A longitudinal study.

    Science.gov (United States)

    Huang, Xiaorui

    2018-07-01

    This study investigates how the ecologically unequal exchange of carbon dioxide emissions varies with economic recessions. I propose a country-specific approach to examine (1) the relationship between carbon dioxide emissions in developing countries and the "vertical flow" of exports to the United States; and (2) the variations of the relationship before, during, and after two recent economic recessions in 2001 and 2008. Using data on 69 developing nations between 2000 and 2010, I estimate time-series cross-sectional regression models with two-way fixed effects. Results suggest that the vertical flow of exports to the United States is positively associated with carbon dioxide emissions in developing countries. The magnitude of this relationship increased in 2001, 2009, and 2010, and decreased in 2008, but remained stable in non-recession periods, suggesting that economic recessions in the United States are associated with variations of ecologically unequal exchange. Results highlight the impacts of U.S. recessions on carbon emissions in developing countries through the structure of international trade. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Harmonic elimination technique for a single-phase multilevel converter with unequal DC link voltage levels

    DEFF Research Database (Denmark)

    Ghasemi, N.; Zare, F.; Boora, A.A.

    2012-01-01

    Multilevel converters, because of the high-quality output voltage they can generate, are used in several applications. Various modulation and control techniques have been introduced by several researchers to control the output voltage of multilevel converters, like space vector modulation and harmonic elimination (HE) methods. Multilevel converters may have a DC link with equal or unequal DC voltages. In this study a new HE technique is proposed for multilevel converters with unequal DC link voltages. The DC link voltage levels are considered as additional...
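
Harmonic elimination for unequal DC links reduces to solving transcendental equations in the switching angles. Below is a hedged sketch for a 7-level cascaded bridge; the voltages V1-V3 and the modulation target are hypothetical, and the paper's specific technique is not reproduced.

```python
import numpy as np
from scipy.optimize import fsolve

# Selective harmonic elimination with unequal DC links (illustrative only).
# Staircase waveform: harmonic h has amplitude (4/(h*pi)) * sum(V_i * cos(h*theta_i)).
# We set the fundamental to a target and null the 5th and 7th harmonics.

V = np.array([1.0, 0.9, 0.8])        # unequal DC link voltages (p.u., assumed)
m_target = 2.0                        # desired fundamental amplitude (p.u.)

def residuals(theta):
    f1 = (4 / np.pi) * np.sum(V * np.cos(theta)) - m_target
    f5 = np.sum(V * np.cos(5 * theta))
    f7 = np.sum(V * np.cos(7 * theta))
    return [f1, f5, f7]

theta0 = np.array([0.2, 0.6, 1.0])   # initial guess (radians, ascending)
theta = fsolve(residuals, theta0)
print(np.degrees(theta), residuals(theta))
```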

  4. Multi-type Step-wise group screening designs with unequal A-priori ...

    African Journals Online (AJOL)

    ... design with unequal group sizes and obtain values of the group sizes that minimize the expected number of runs. Keywords: Group Screening, Group factors, multi-type step-wise group screening, expected number of runs, Optimum group screening designs. East African Journal of Statistics Vol. 1 (1) 2005: pp. 49-67 ...

  5. Selections from Unequal Partners: Teaching about Power, Consent, and Healthy Relationships

    Science.gov (United States)

    deFur, Kirsten

    2016-01-01

    The Center for Sex Education recently published the fourth edition of "Unequal Partners: Teaching about Power, Consent, and Healthy Relationships, Volumes 1 and 2." Included here are two lesson plans about sexual consent selected from each volume. "What does it take … to give sexual consent?" [Sue Montfort and Peggy Brick] is…

  6. Direct fourier method reconstruction based on unequally spaced fast fourier transform

    International Nuclear Information System (INIS)

    Wu Xiaofeng; Zhao Ming; Liu Li

    2003-01-01

    First, we give an Unequally Spaced Fast Fourier Transform (USFFT) method, which is more exact and theoretically more comprehensible than its former counterpart. Then, with an interesting interpolation scheme, we discuss how to apply USFFT to Direct Fourier Method (DFM) reconstruction of parallel projection data. Finally, a simulation experiment result is given. (authors)
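
A direct nonuniform DFT is the natural correctness reference for any USFFT implementation: the fast transform should approximate these O(N·M) sums. A minimal numpy sketch, with made-up sample times and frequencies:

```python
import numpy as np

# Naive unequally spaced Fourier transform (the ground truth a USFFT targets).

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 64))        # unequally spaced sample times
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)
freqs = np.arange(0, 16)

# X[m] = sum_n x[n] * exp(-2j*pi*f[m]*t[n])
X = np.exp(-2j * np.pi * np.outer(freqs, t)) @ x
print(np.round(np.abs(X), 2))
```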

  7. Content-Adaptive Packetization and Streaming of Wavelet Video over IP Networks

    Directory of Open Access Journals (Sweden)

    Chien-Peng Ho

    2007-03-01

    This paper presents a content-adaptive packetization framework for streaming 3D wavelet-based video over lossy IP networks. The tradeoff between rate and distortion is controlled by jointly adapting the scalable source coding rate and the level of forward error correction (FEC) protection. A content-dependent packetization mechanism with data interleaving and Reed-Solomon protection for wavelet-based video codecs is proposed to provide unequal error protection. This paper also tries to answer an important question for scalable video streaming systems: given extra bandwidth, should one increase the level of channel protection for the most important packets, or transmit more scalable source data? Experimental results show that the proposed framework achieves a good balance between the quality of the received video and the level of error protection under bandwidth-varying lossy IP networks.
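
The bandwidth-versus-protection question the paper poses can be illustrated with the erasure-code arithmetic underlying Reed-Solomon packet protection: pick, per priority layer, the smallest parity budget that meets the layer's target residual loss. The loss rate and per-layer targets below are assumptions.

```python
from math import comb

# Unequal FEC allocation across priority layers under i.i.d. packet loss.

def recovery_prob(k, r, p):
    """P(decode) for a (k+r, k) erasure code: at most r of k+r packets lost."""
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1))

def parity_needed(k, p, target):
    r = 0
    while 1 - recovery_prob(k, r, p) > target:
        r += 1
    return r

p = 0.05                      # assumed network loss rate
for layer, target in [("base", 1e-4), ("mid", 1e-2), ("enh", 1e-1)]:
    r = parity_needed(20, p, target)
    print(f"{layer} layer: {r} parity packets per 20 data packets")
```

The base layer ends up with the largest parity share, which is exactly the unequal protection the packetization scheme implements.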

  8. Combined group ECC protection and subgroup parity protection

    Science.gov (United States)

    Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin

    2013-06-18

    A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
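
The row construction described in the abstract (distinct m-bit vectors with an odd number, at least three, of ones) is straightforward to sketch. The fragment below covers only that row-generation step and the resulting ECC bits; it does not enforce the patent's additional requirement that two columns of P yield subgroup parities.

```python
from itertools import combinations
import numpy as np

# Build P from distinct odd-weight (>= 3) m-bit vectors; ECC = data * P mod 2.

def build_P(n_bits, m):
    rows = []
    for w in range(3, m + 1, 2):                 # odd weights 3, 5, ...
        for ones in combinations(range(m), w):
            row = np.zeros(m, dtype=np.uint8)
            row[list(ones)] = 1
            rows.append(row)
            if len(rows) == n_bits:
                return np.array(rows)
    raise ValueError("m too small for n_bits distinct odd-weight rows")

P = build_P(n_bits=8, m=5)
data = np.random.randint(0, 2, 8).astype(np.uint8)
ecc = data @ P % 2                               # m redundant protection bits
print(P, ecc, sep="\n")
```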

  9. The selective power of causality on memory errors.

    Science.gov (United States)

    Marsh, Jessecae K; Kulkofsky, Sarah

    2015-01-01

    We tested the influence of causal links on the production of memory errors in a misinformation paradigm. Participants studied a set of statements about a person, which were presented as either individual statements or pairs of causally linked statements. Participants were then provided with causally plausible and causally implausible misinformation. We hypothesised that studying information connected with causal links would promote representing information in a more abstract manner. As such, we predicted that causal information would not provide an overall protection against memory errors, but rather would preferentially help in the rejection of misinformation that was causally implausible, given the learned causal links. In two experiments, we measured whether the causal linkage of information would be generally protective against all memory errors or only selectively protective against certain types of memory errors. Causal links helped participants reject implausible memory lures, but did not protect against plausible lures. Our results suggest that causal information may promote an abstract storage of information that helps prevent only specific types of memory errors.

  10. High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices

    Science.gov (United States)

    Harrar, Solomon W.; Kong, Xiaoli

    2015-01-01

    In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant that the number of repeated measures and the total sample size grow together but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have comparable power with a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861

  11. Numerical study of two side-by-side cylinders with unequal diameters at low Reynolds number

    International Nuclear Information System (INIS)

    Gao, Y Y; Wang, X K; Tan, S K

    2012-01-01

    Two-dimensional laminar flow about two side-by-side unequal cylinders with different diameter ratios d/D and centre-to-centre spacing ratios T/D at Re=300 (based on the larger cylinder diameter) was simulated using CFD software. Comparisons of experimental and numerical results were made to elucidate the degree of interference due to d/D and T/D and their effects on the flow patterns and vortex shedding frequencies. The findings showed that the flow patterns behind two unequal cylinders were distinctly different from those behind two equal side-by-side cylinders, with distinct in-phase and anti-phase vortex shedding, and random switching of modes of vortex shedding.

  12. Modeling imperfectly repaired system data via grey differential equations with unequal-gapped times

    International Nuclear Information System (INIS)

    Guo Renkuan

    2007-01-01

    In this paper, we argue that grey differential equation models are useful in repairable system modeling. The argument starts with a review of the GM(1,1) model with equal- and unequal-spaced stopping time sequences. In terms of two-stage GM(1,1) filtering, system stopping time can be partitioned into a system intrinsic function and a repair effect. Furthermore, we propose an approach that uses a grey differential equation to specify a semi-statistical membership function for system intrinsic function times. We also use the GM(1,N) model to model system stopping times and the associated operating covariates, and propose an unequal-gapped GM(1,N) model for such analysis. Finally, we investigate GM(1,1)-embedded systematic grey equation system modeling of imperfectly repaired system operating data. Practical examples are given in a step-by-step manner to illustrate the grey differential equation modeling of repairable system data.
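
For readers unfamiliar with grey models, the standard equal-spaced GM(1,1) fit goes as follows; the paper's unequal-gapped variant generalizes the accumulation step, which this sketch does not attempt. The data are hypothetical stopping times.

```python
import numpy as np

# Standard equal-spaced GM(1,1) fit and back-prediction.

x0 = np.array([2.87, 3.28, 3.34, 3.39, 3.68])   # hypothetical stopping times
x1 = np.cumsum(x0)                               # accumulated generating operation
z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values

B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey development/control coeffs

def predict(k):                                  # x1_hat(k+1), k = 0, 1, ...
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

x1_hat = np.array([predict(k) for k in range(len(x0))])
x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
print(np.round(x0_hat, 3))
```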

  13. ANALISIS PENGARUH KONFIGURASI EIGRP EQUAL DAN UNEQUAL COST LOAD BALANCING TERHADAP KINERJA ROUTER

    Directory of Open Access Journals (Sweden)

    Dian Bagus Saptonugroho

    2015-04-01

    A routing protocol is tasked with finding the best route for sending packets, assessed using a metric. If there is more than one route with the same metric value, Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Enhanced Interior Gateway Routing Protocol (EIGRP) support equal cost load balancing to send packets to the destination. If there is more than one route with different metric values, EIGRP can do unequal cost load balancing. Research needs to be conducted to determine the effect of the configuration of EIGRP equal and unequal cost load balancing on router performance, which can be used for proof-of-concept testing as part of the project design document on a network. The research networks use EIGRP as the routing protocol. Equal and unequal load balancing is enabled by configuring the variance, CEF, per-destination load balancing, per-packet load balancing, or traffic sharing, and its effect is analyzed on the neighbor table, topology table, routing table, data transmission, survivability, convergence, throughput, and utilization. This study used the GNS3 emulator as a Cisco 2691 router with Cisco IOS version 12.4(25c) and the Advanced Enterprise (adventerprisek9) image c2691-mz.124-25c.bin, and OPNET Modeler 14.5 for simulation. The results of the study can be used for proof-of-concept testing in the design document and later in the preparation of the implementation plan and verification plan.
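
EIGRP's unequal cost load balancing hinges on two checks: the feasibility condition (a neighbor's reported distance must be below the feasible distance) and the variance test (path metric at most variance × feasible distance). A small Python sketch with made-up metrics:

```python
# Decision logic EIGRP applies for unequal cost load balancing (metrics made up).

def eigrp_paths(paths, variance):
    fd = min(m for m, rd in paths.values())          # feasible distance
    return [
        nh for nh, (metric, rd) in paths.items()
        if rd < fd and metric <= variance * fd       # feasibility + variance tests
    ]

# next hop -> (metric via that next hop, reported distance from the neighbor)
paths = {"R1": (10_000, 5_000), "R2": (25_000, 8_000), "R3": (40_000, 30_000)}
print(eigrp_paths(paths, variance=1))   # ['R1']        equal cost only
print(eigrp_paths(paths, variance=3))   # ['R1', 'R2']  R3 fails the feasibility test
```

In IOS terms this corresponds to the `variance` command under `router eigrp`; the sketch reproduces only the selection arithmetic, not the traffic-sharing behavior.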

  14. Snail family members unequally trigger EMT and thereby differ in their ability to promote the neoplastic transformation of mammary epithelial cells.

    Directory of Open Access Journals (Sweden)

    Baptiste Gras

    By fostering cell commitment to the epithelial-to-mesenchymal transition (EMT), SNAIL proteins endow cells with motility, thereby favoring the metastatic spread of tumor cells. Whether the phenotypic change additionally facilitates tumor initiation has never been addressed. Here we demonstrate that when a SNAIL protein is ectopically produced in non-transformed mammary epithelial cells, the cells are protected from anoikis and proliferate under low-adherence conditions: a hallmark of cancer cells. The three SNAIL proteins show unequal oncogenic potential, strictly correlating with their ability to promote EMT. SNAIL3 especially behaves as a poor EMT-inducer, reinforcing the concept that the transcription factor functionally diverges from its two related proteins.

  15. The Theory of Exploitation as the Unequal Exchange of Labour

    OpenAIRE

    Veneziani, Roberto; Yoshihara, Naoki

    2016-01-01

    This paper analyses the normative and positive foundations of the theory of exploitation as the unequal exchange of labour (UEL). The key intuitions behind all of the main approaches to UEL exploitation are explicitly analysed as a series of formal claims in a general economic environment. It is then argued that these intuitions can be captured by one fundamental axiom - called Labour Exploitation - which defines the basic domain of all UEL exploitation forms and identifies the formal and the...

  16. The theory of exploitation as the unequal exchange of labour

    OpenAIRE

    Veneziani, Roberto; Yoshihara, Naoki

    2017-01-01

    This paper analyses the normative and positive foundations of the theory of exploitation as the unequal exchange of labour (UEL). The key intuitions behind all of the main approaches to UEL exploitation are explicitly analysed as a series of formal claims in a general economic environment. It is then argued that these intuitions can be captured by one fundamental axiom - called Labour Exploitation - which defines the basic domain of all UEL exploitation forms and identifies the formal and the...

  17. Globalization as Continuing Colonialism: Critical Global Citizenship Education in an Unequal World

    Science.gov (United States)

    Mikander, Pia

    2016-01-01

    In an unequal world, education about global inequality can be seen as a controversial but necessary topic for social science to deal with. Even though the world no longer consists of colonies and colonial powers, many aspects of the global economy follow the same patterns as during colonial times, with widening gaps between the world's richest and…

  18. [Work and health inequalities: The unequal distribution of exposures at work in Germany and Europe].

    Science.gov (United States)

    Dragano, Nico; Wahrendorf, Morten; Müller, Kathrin; Lunau, Thorsten

    2016-02-01

    Health inequalities in the working population may partly be due to the unequal exposure to work-related risk factors among different occupational positions. Empirical data exploring the distribution of exposures at work according to occupational position are, however, missing for Germany. This paper summarizes the existing literature on occupational inequalities and discusses the role of working conditions. In addition, using European survey data, we study how various exposures at work vary by occupational class. Analyses are based on the European Working Condition Survey, and we compare the German sample (n = 2096) with the sample from the EU-27 countries (n = 34,529). To measure occupational position we use occupational class (EGP-classes). First, we describe the prevalence of 16 different exposures at work by occupational class for men and women. Second, we estimate regression models, and thereby investigate whether associations between occupational class and self-perceived health are related to an unequal distribution of exposures at work. For various exposures at work we found a higher prevalence among manual workers and lower-skilled employees, for both physical and psychosocial conditions. With few exceptions, this finding was true for men and women and consistent for Germany and Europe. The results indicate that the unequal distribution of health-adverse conditions at work contributes towards existing health inequalities among the working population.

  19. Taxation and the unequal reach of the state: mapping state capacity in Ecuador

    NARCIS (Netherlands)

    Harbers, I.

    2015-01-01

    Even though the unequal reach of the state has become an important concern in the literature on developing democracies in Latin America, empirical measures of intracountry variation in state capacity are scarce. So far, attempts to develop valid measures of the reach of the state have often been

  20. Genesis by meiotic unequal crossover of a de novo deletion that contributes to steroid 21-hydroxylase deficiency

    International Nuclear Information System (INIS)

    Sinnott, P.; Collier, S.; Dyer, P.A.; Harris, R.; Strachan, T.; Costigan, C.

    1990-01-01

    The HLA-linked human steroid 21-hydroxylase gene CYP21B and its closely homologous pseudogene CYP21A are each normally located centromeric to a fourth component of complement (C4) gene, C4B and C4A, respectively, in an organization suggesting tandem duplication of a ca. 30-kilobase DNA unit containing a CYP21 gene and a C4 gene. Such an organization has been considered to facilitate gene deletion and addition events by unequal crossover between the tandem repeats. The authors have identified a steroid 21-hydroxylase deficiency patient who has a maternally inherited disease haplotype that carries a de novo deletion of a ca. 30-kilobase repeat unit including the CYP21B gene and the associated C4B gene. This disease haplotype appears to have been generated as a result of meiotic unequal crossover between maternal homologous chromosomes. One of the maternal haplotypes is the frequently occurring HLA-DR3,B8,A1 haplotype that normally carries a deletion of a ca. 30-kilobase unit including the CYP21A gene and C4A gene. Haplotypes of this type may possibly act as premutations, increasing the susceptibility to developing a 21-hydroxylase deficiency mutation by facilitating unequal chromosome pairing.

  1. Humanitarianism and Unequal Exchange

    Directory of Open Access Journals (Sweden)

    Raja Swamy

    2017-08-01

    This article examines the relationship between humanitarian aid and ecologically unequal exchange in the context of post-disaster reconstruction. I assess the manner in which humanitarian aid became a central part of the reconstruction process in India's Tamil Nadu state following the devastating 2004 Indian Ocean tsunami. This article focuses on how the humanitarian “gift” of housing became a central plank of the state's efforts to push fishers inland while opening up coastal lands for various economic development projects such as ports, infrastructure, industries, and tourism. As part of the state and multilateral agency financed reconstruction process, the humanitarian aid regime provided “free” houses as gifts to recipients while expecting in return the formal abandonment of all claims to the coast. The humanitarian “gift” therefore helped depoliticize critical issues of land and resources, location and livelihood, which prior to the tsunami were subjects of long-standing political conflicts between local fisher populations and the state. The gift economy in effect played into an ongoing conflict over land and resources and effectively sought to ease the alienation of fishers from their coastal commons and near shore marine resource base. I argue that humanitarian aid, despite its associations with benevolence and generosity, presents a troubling and disempowering set of options for political struggles over land, resources, and social entitlements such as housing, thereby intensifying existing ecological and economic inequalities.

  2. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAO Xinying; YUAN Dongfeng; ZHANG Haixia

    2005-01-01

    In this paper an adaptive image transmission scheme is proposed over a Wavelet-based OFDM (WOFDM) system with Unequal error protection (UEP) through the design of a non-uniform signal constellation in MLC. Two different data division schemes, byte-based and bit-based, are analyzed and compared. Different bits are protected unequally according to their different contributions to the image quality in the bit-based data division scheme, which makes UEP combined with this scheme more powerful than UEP with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data division scheme presents much higher PSNR values and surprisingly better image quality. Furthermore, considering the tradeoff between complexity and BER performance, the Haar wavelet with the shortest compactly supported filter length is the most suitable one among the orthogonal Daubechies wavelet series in our proposed system.

  3. Performance Analysis of Amplify-and-Forward Two-Way Relaying with Co-Channel Interference and Channel Estimation Error

    KAUST Repository

    Yang, Liang

    2013-06-01

    In this paper, we consider the performance of a two-way amplify-and-forward relaying network (AF TWRN) in the presence of unequal power co-channel interferers (CCI). Specifically, we first consider an AF TWRN with an interference-limited relay and two noisy nodes with channel estimation errors and CCI. We derive approximate signal-to-interference-plus-noise ratio expressions and then use them to evaluate the outage probability, error probability, and achievable rate. Subsequently, to investigate the joint effects of the channel estimation error and CCI on the system performance, we extend our analysis to a multiple-relay network and derive several asymptotic performance expressions. For comparison purposes, we also provide the analysis for the relay selection scheme under the total power constraint at the relays. For an AF TWRN with channel estimation error and CCI, numerical results show that the performance of the relay selection scheme is not always better than that of the all-relay participating case. In particular, the relay selection scheme can improve the system performance in the case of high power levels at the sources and small powers at the relays.

  4. Sequential boundaries approach in clinical trials with unequal allocation ratios

    Directory of Open Access Journals (Sweden)

    Ayatollahi Seyyed

    2006-01-01

    Background: In clinical trials, both unequal randomization design and sequential analyses have ethical and economic advantages. In the single-stage design (SSD), however, if the sample size is not adjusted based on unequal randomization, the power of the trial will decrease, whereas with sequential analysis the power will always remain constant. Our aim was to compare the sequential boundaries approach with the SSD when the allocation ratio (R) was not equal. Methods: We evaluated the influence of R, the ratio of the patients in the experimental group to the standard group, on the statistical properties of two-sided tests, including the two-sided single triangular test (TT), double triangular test (DTT) and SSD, by multiple simulations. The average sample size numbers (ASNs) and power (1-β) were evaluated for all tests. Results: Our simulation study showed that choosing R = 2 instead of R = 1 increases the sample size of the SSD by 12% and the ASN of the TT and DTT by the same proportion. Moreover, when R = 2, compared to the adjusted SSD, using the TT or DTT allows one to retrieve the well-known reductions of ASN observed when R = 1, compared to SSD. In addition, when R = 2, compared to SSD, using the TT and DTT allows one to obtain smaller reductions of ASN than when R = 1, but maintains the power of the test at its planned value. Conclusion: This study indicates that when the allocation ratio is not equal among the treatment groups, sequential analysis could indeed serve as a compromise between ethicists, economists and statisticians.

  5. Unequal Exchange of Air Pollution and Economic Benefits Embodied in China's Exports.

    Science.gov (United States)

    Zhang, Wei; Wang, Feng; Hubacek, Klaus; Liu, Yu; Wang, Jinnan; Feng, Kuishuang; Jiang, Ling; Jiang, Hongqiang; Zhang, Bing; Bi, Jun

    2018-04-03

    As the world's factory, China has enjoyed huge economic benefits from international export but also suffered severe environmental consequences. Most studies investigating unequal environmental exchange associated with trade have treated China as a homogeneous entity, ignoring considerable inequality and outsourcing of pollution within China. This paper traces the regional mismatch of export-induced economic benefits and environmental costs along national supply chains by using the latest multiregional input-output model and emission inventory for 2012. The results indicate that approximately 56% of the national GDP induced by exports has been received by developed coastal regions, while about 72% of air pollution embodied in national exports, measured as aggregated atmospheric pollutant equivalents (APE), has been mainly incurred by less developed central and western regions. For each yuan of export-induced GDP, developed regions only incurred 0.4-0.6 g APE emissions, whereas less developed regions from western or central China had to suffer 4-8 times that amount of emissions. This is because poorer regions provide lower-value-added and more emission-intensive inputs and have lower environmental standards and less efficient technologies. Our results may pave a way to mitigate the unequal relationship between developed and less developed regions from the perspective of the environment-economy nexus.
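
The embodied-emissions accounting behind a multiregional input-output (MRIO) analysis like this one rests on the Leontief inverse. Below is a toy two-region, two-sector sketch with made-up coefficients; the paper's 2012 Chinese MRIO tables are far larger.

```python
import numpy as np

# Emissions embodied in final demand y:  diag(e) @ inv(I - A) @ y,
# with A the technical coefficients and e direct emissions per unit output.

A = np.array([[0.10, 0.20, 0.05, 0.00],
              [0.15, 0.05, 0.10, 0.05],
              [0.05, 0.10, 0.10, 0.20],
              [0.00, 0.05, 0.15, 0.05]])
e = np.array([0.9, 0.3, 1.2, 0.4])       # emission intensity per sector (assumed)
y_export = np.array([0, 0, 10, 5])       # export demand served by region 2's sectors

L = np.linalg.inv(np.eye(4) - A)         # Leontief inverse
embodied = np.diag(e) @ L @ y_export     # emissions attributed to each emitting sector
print(embodied.round(2), embodied.sum().round(2))
```

Comparing where the embodied emissions land against where the export value added accrues is precisely the regional mismatch the study quantifies.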

  6. Ecological Unequal Exchange: International Trade and Uneven Utilization of Environmental Space in the World System

    Science.gov (United States)

    Rice, James

    2007-01-01

    We evaluate the argument that international trade influences disproportionate cross-national utilization of global renewable natural resources. Such uneven dynamics are relevant to the consideration of inequitable appropriation of environmental space in particular and processes of ecological unequal exchange more generally. Using OLS regression…

  7. Random Shift and XOR of Unequal-sized Packets (RaSOR) to Shave off Transmission Overhead

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Fitzek, Frank Hanns Paul

    2017-01-01

    We propose the design of a novel coding scheme for unequal-sized packets. Unlike the conventional wisdom that consists of brute-force zero-padding in Random Linear Network Coding (RLNC), we exploit this heterogeneity to shave off this trailing overhead and transmit considerably fewer coded packets....

  8. Unequal arm space-borne gravitational wave detectors

    International Nuclear Information System (INIS)

    Larson, Shane L.; Hellings, Ronald W.; Hiscock, William A.

    2002-01-01

    Unlike ground-based interferometric gravitational wave detectors, large space-based systems will not be rigid structures. When the end stations of the laser interferometer are freely flying spacecraft, the armlengths will change due to variations in the spacecraft positions along their orbital trajectories, so the precise equality of the arms that is required in a laboratory interferometer to cancel laser phase noise is not possible. However, using a method discovered by Tinto and Armstrong, a signal can be constructed in which laser phase noise exactly cancels out, even in an unequal arm interferometer. We examine the case where the ratio of the armlengths is a variable parameter, and compute the averaged gravitational wave transfer function as a function of that parameter. Example sensitivity curve calculations are presented for the expected design parameters of the proposed LISA interferometer, comparing it to a similar instrument with one arm shortened by a factor of 100, showing how the ratio of the armlengths will affect the overall sensitivity of the instrument

  9. The Political Economy of the Water Footprint: A Cross-National Analysis of Ecologically Unequal Exchange

    Directory of Open Access Journals (Sweden)

    Jared B. Fitzgerald

    2016-12-01

    Water scarcity is an important social and ecological issue that is becoming increasingly problematic with the onset of climate change. This study explores the extent to which water resources in developing countries are affected by the vertical flow of exports to high-income countries. In examining this question, the authors engage the sociological theory of ecologically unequal exchange, which argues that high-income countries are able to partially externalize the environmental costs of their consumption to lower-income countries. The authors use a relatively new and underutilized measure of water usage, the water footprint, which quantifies the amount of water used in the entire production process. Ordinary least squares (OLS) and robust regression techniques are employed in the cross-national analysis of 138 countries. The results provide partial support for the propositions of ecologically unequal exchange theory. In particular, the results highlight the importance of structural position in the global economy for understanding the effects of trade on water resources.

  10. UEP LT Codes with Intermediate Feedback

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Popovski, Petar; Østergaard, Jan

    2013-01-01

    We analyze a class of rateless codes, called Luby transform (LT) codes with unequal error protection (UEP). We show that while these codes successfully provide UEP, there is a significant price in terms of redundancy in the lower prioritized segments. We propose a modification with a single intermediate feedback message. Our analysis shows a dramatic improvement in the decoding performance of the lower prioritized segment.
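
A toy version of UEP in LT-style encoding makes the mechanism concrete: when composing each coded symbol, high-priority input symbols are sampled with higher probability. The degree distribution and weights below are illustrative, not the distributions analyzed in the paper.

```python
import random

# Toy LT-style encoder with UEP via biased input-symbol selection.

rng = random.Random(1)
high = list(range(0, 4))       # high-priority symbol indices
low = list(range(4, 16))       # low-priority symbol indices
weights = [3.0] * len(high) + [1.0] * len(low)
symbols = high + low

def lt_encode_one():
    degree = rng.choice([1, 2, 2, 3, 3, 3, 4])          # toy degree distribution
    picked = set()
    while len(picked) < degree:
        picked.add(rng.choices(symbols, weights)[0])    # UEP: biased selection
    return sorted(picked)                               # inputs to XOR together

coded = [lt_encode_one() for _ in range(8)]
print(coded)
```

The redundancy cost the paper identifies shows up here too: the low-priority symbols are picked rarely, so many more coded symbols are needed before they all appear.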

  11. Polar Coding for the Large Hadron Collider: Challenges in Code Concatenation

    CERN Document Server

    AUTHOR|(CDS)2238544; Podzorny, Tomasz; Uythoven, Jan

    2018-01-01

    In this work, we present a concatenated repetition-polar coding scheme that is aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Even though this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and we propose two ways to overcome them.

  12. China, Japan, and the United States in World War II: The Relinquishment of Unequal Treaties in 1943

    Directory of Open Access Journals (Sweden)

    Xiaohua Ma

    2015-08-01

    Full Text Available This paper aims to examine how the United States transformed its foreign policy to promote China as an “equal state” in international politics during World War II, with focus on the process of the American relinquishment of its unequal treaties with China in 1943. In particular, it concentrates on analyzing the conflicts between the United States and Japan in the process of relinquishment. By examining the rivalry between the United States and Japan in the social warfare – propaganda – we can see that the relinquishment of the unequal treaties in 1943 not only marked a historical turning point in America’s China policy, but also had a great impact on the transformation of East Asian politics in World War II and its influence in the world politics.

  13. Chocolate and The Consumption of Forests: A Cross-National Examination of Ecologically Unequal Exchange in Cocoa Exports

    Directory of Open Access Journals (Sweden)

    Mark D. Noble

    2017-08-01

    This study explores the potential links between specialization in cocoa exports and deforestation in developing nations through the lens of ecologically unequal exchange. Although chocolate production was once considered to have only minimal impacts on forests, recent reports suggest damaging trends due to increased demand and changing cultivation strategies. I use two sets of regression analyses to show the increased impact of cocoa export concentration on deforestation over time for less-developed nations. Overall, the results confirm that cocoa exports are associated with deforestation in the most recent time period, and suggest that specialization in cocoa exports is an important form of ecologically unequal exchange, where the environmental costs of chocolate consumption in the Global North are externalized to nations in the Global South, further impairing possibilities for successful or sustainable development.

  14. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    Science.gov (United States)

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.

  15. Isolating Graphical Failure-Inducing Input for Privacy Protection in Error Reporting Systems

    Directory of Open Access Journals (Sweden)

    Matos João

    2016-04-01

    This work proposes a new privacy-enhancing system that minimizes the disclosure of information in error reports. Error reporting mechanisms are of the utmost importance to correct software bugs but, unfortunately, the transmission of an error report may reveal users' private information. Some privacy-enhancing systems for error reporting have been presented in past years, yet they rely on path condition analysis, which we show in this paper to be ineffective when it comes to graphical-based input. Knowing that numerous applications have graphical user interfaces (GUIs), it is very important to overcome such a limitation. This work describes a new privacy-enhancing error reporting system, based on a new input minimization algorithm called GUIᴍɪɴ that is geared towards GUIs, to remove input that is unnecessary to reproduce the observed failure. Before deciding whether to submit the error report, the user is provided with a step-by-step graphical replay of the minimized input, to evaluate whether it still yields sensitive information. We also provide an open source implementation of the proposed system and evaluate it with well-known applications.
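
GUIᴍɪɴ belongs to the family of delta-debugging input minimization algorithms. The generic sketch below shrinks a failing input by greedily deleting chunks while the failure persists; the real system operates on GUI event sequences with graphical replay, which a string toy cannot capture.

```python
# Generic delta-debugging-style minimization (illustrative, not GUImin itself).

def still_fails(inp: str) -> bool:
    return "BUG" in inp                 # stand-in for re-running the application

def minimize(inp: str) -> str:
    chunk = len(inp) // 2
    while chunk >= 1:
        i, shrunk = 0, False
        while i < len(inp):
            candidate = inp[:i] + inp[i + chunk:]   # try removing one chunk
            if candidate and still_fails(candidate):
                inp, shrunk = candidate, True       # keep the smaller failing input
            else:
                i += chunk                          # chunk was needed; move on
        chunk = chunk if shrunk else chunk // 2     # retry same size after progress
    return inp

print(minimize("click A; type hello; BUG; drag B; close"))
```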

  16. Error Resilience in Current Distributed Video Coding Architectures

    Directory of Open Access Journals (Sweden)

    Tonoli Claudia

    2009-01-01

    In distributed video coding the signal prediction is shifted to the decoder side, therefore placing most of the computational complexity burden at the receiver. Moreover, since no prediction loop exists before transmission, an intrinsic robustness to transmission errors has been claimed. This work evaluates and compares the error resilience performance of two distributed video coding architectures. In particular, we have considered a video codec based on the Stanford architecture (the DISCOVER codec) and a video codec based on the PRISM architecture. Specifically, an accurate temporal and rate/distortion based evaluation of the effects of transmission errors for both of the considered DVC architectures has been performed and discussed. These approaches have also been compared with H.264/AVC, in both cases of no error protection and simple FEC error protection. Our evaluations have highlighted in all cases a strong dependence of the behavior of the various codecs on the content of the considered video sequence. In particular, PRISM seems to be particularly well suited for low-motion sequences, whereas DISCOVER provides better performance in the other cases.

  17. Vibration energy harvesting using piezoelectric unimorph cantilevers with unequal piezoelectric and nonpiezoelectric lengths

    OpenAIRE

    Gao, Xiaotong; Shih, Wei-Heng; Shih, Wan Y.

    2010-01-01

    We have examined a piezoelectric unimorph cantilever (PUC) with unequal piezoelectric and nonpiezoelectric lengths for vibration energy harvesting theoretically by extending the analysis of a PUC with equal piezoelectric and nonpiezoelectric lengths. The theoretical approach was validated by experiments. A case study showed that for a fixed vibration frequency, the maximum open-circuit induced voltage which was important for charge storage for later use occurred with a PUC that had a nonpiezo...

  18. Notes on a Dramaturgical Analysis of Unequal Small-Scale Corruption Experiences

    Directory of Open Access Journals (Sweden)

    Edgar Daniel Manchinelly Mota

    2017-10-01

    In the last two decades, corruption has emerged as a relevant subject on a worldwide scale, because of its negative effects on the economy and State institutions, among other things. Research has focused on the macro aspects of corruption, emphasizing its causes and consequences. However, small-scale corruption has not been studied in such detail. This document proposes a theoretical-methodological framework for a dramaturgical analysis of small-scale corruption, with the aim of demonstrating that it is a stratified interaction. In this sense, corruption is an unequal experience for citizens, which depends on individuals’ social position.

  19. Scattering cross section of unequal length dipole arrays

    CERN Document Server

    Singh, Hema; Jha, Rakesh Mohan

    2016-01-01

    This book presents a detailed and systematic analytical treatment of scattering by an arbitrary dipole array configuration with unequal-length dipoles, different inter-element spacing and load impedance. It provides a physical interpretation of the scattering phenomena within the phased array system. The antenna radar cross section (RCS) depends on the field scattered by the antenna towards the receiver. It has two components, viz. structural RCS and antenna mode RCS. The latter component dominates the former, especially if the antenna is mounted on a low observable platform. The reduction in the scattering due to the presence of antennas on the surface is one of the concerns towards stealth technology. In order to achieve this objective, a detailed and accurate analysis of antenna mode scattering is required. In practical phased array, one cannot ignore the finite dimensions of antenna elements, coupling effect and the role of feed network while estimating the antenna RCS. This book presents the RCS estimati...

  20. Understanding determinants of unequal distribution of stillbirth in Tehran, Iran: a concentration index decomposition approach.

    Science.gov (United States)

    Almasi-Hashiani, Amir; Sepidarkish, Mahdi; Safiri, Saeid; Khedmati Morasae, Esmaeil; Shadi, Yahya; Omani-Samani, Reza

    2017-05-17

    The present inquiry set out to determine the economic inequality in history of stillbirth and to understand the determinants of the unequal distribution of stillbirth in Tehran, Iran. A population-based cross-sectional study was conducted on 5170 pregnancies in Tehran, Iran, since 2015. Principal component analysis (PCA) was applied to measure asset-based economic status. The concentration index was used to measure socioeconomic inequality in stillbirth and then decomposed into its determinants. The concentration index and its 95% CI for stillbirth was -0.121 (-0.235 to -0.002). Decomposition of the concentration index showed that mother's education (50%), mother's occupation (30%), economic status (26%) and father's age (12%) had the highest positive contributions to the measured inequality in stillbirth history in Tehran. Mother's age (17%) had the highest negative contribution to inequality. Stillbirth is unequally distributed among Iranian women and is mostly concentrated among people of low economic status. Mother-related factors had the highest positive and negative contributions to inequality, highlighting specific interventions for mothers to redress inequality. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
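
The concentration index used here has a convenient covariance form, CI = 2·cov(h, r)/μ, with r the fractional rank in the economic-status distribution; negative values indicate concentration among the poor, matching the -0.121 reported. A sketch with simulated data (not the study's):

```python
import numpy as np

# Relative concentration index: CI = 2 * cov(h, r) / mean(h).

def concentration_index(outcome, ses):
    order = np.argsort(ses)                   # poorest .. richest
    h = np.asarray(outcome, float)[order]
    n = len(h)
    r = (np.arange(1, n + 1) - 0.5) / n       # fractional economic rank
    return 2 * np.cov(h, r, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(0)
ses = rng.normal(size=2000)                   # asset-index score (hypothetical)
p = 0.03 - 0.01 * (ses > 0)                  # stillbirth risk higher if poorer
outcome = rng.random(2000) < p
print(round(concentration_index(outcome, ses), 3))   # negative: pro-poor concentration
```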

  1. Merger of binary neutron stars of unequal mass in full general relativity

    International Nuclear Information System (INIS)

    Shibata, Masaru; Taniguchi, Keisuke; Uryū, Kōji

    2003-01-01

    We present results of three-dimensional numerical simulations of the merger of unequal-mass binary neutron stars in full general relativity. A Γ-law equation of state P=(Γ-1)ρε is adopted, where P, ρ, ε, and Γ are the pressure, rest mass density, specific internal energy, and the adiabatic constant, respectively. We take Γ=2 and the baryon rest-mass ratio Q_M to be in the range 0.85-1. The typical grid size is (633,633,317) for (x,y,z). We have improved several implementations since our previous work. In the present code, the radiation reaction of gravitational waves is taken into account with good accuracy. This fact enables us to follow the coalescence all the way from the late inspiral phase through the merger phase, for which the transition is triggered by the radiation reaction. It is found that if the total rest mass of the system is more than ∼1.7 times the maximum allowed rest mass of spherical neutron stars, a black hole is formed after the merger, irrespective of the mass ratio. The gravitational waveforms and outcomes in the merger of unequal-mass binaries are compared with those in equal-mass binaries. It is found that the disk mass around the newly formed black holes increases with decreasing rest-mass ratio and decreases with increasing compactness of the neutron stars. The merger process and the gravitational waveforms also depend strongly on the rest-mass ratio even for the range Q_M = 0.85-1

  2. Physical implementation of protected qubits

    International Nuclear Information System (INIS)

    Douçot, B; Ioffe, L B

    2012-01-01

    We review the general notion of topological protection of quantum states in spin models and its relation with the ideas of quantum error correction. We show that topological protection can be viewed as a Hamiltonian realization of error correction: for a quantum code for which the minimal number of errors that remain undetected is N, the corresponding Hamiltonian model of the effects of the environment noise appears only in the Nth order of the perturbation theory. We discuss the simplest model Hamiltonians that realize topological protection and their implementation in superconducting arrays. We focus on two dual realizations: in one the protected state is stored in the parity of the Cooper pair number, in the other, in the parity of the flux number. In both cases the superconducting arrays allow a number of fault-tolerant operations that should make the universal quantum computation possible. (key issues reviews)

  3. Physical implementation of protected qubits

    Science.gov (United States)

    Douçot, B.; Ioffe, L. B.

    2012-07-01

    We review the general notion of topological protection of quantum states in spin models and its relation with the ideas of quantum error correction. We show that topological protection can be viewed as a Hamiltonian realization of error correction: for a quantum code for which the minimal number of errors that remain undetected is N, the corresponding Hamiltonian model of the effects of the environment noise appears only in the Nth order of the perturbation theory. We discuss the simplest model Hamiltonians that realize topological protection and their implementation in superconducting arrays. We focus on two dual realizations: in one the protected state is stored in the parity of the Cooper pair number, in the other, in the parity of the flux number. In both cases the superconducting arrays allow a number of fault-tolerant operations that should make the universal quantum computation possible.

  4. Error-transparent evolution: the ability of multi-body interactions to bypass decoherence

    International Nuclear Information System (INIS)

    Vy, Os; Jacobs, Kurt; Wang Xiaoting

    2013-01-01

    We observe that multi-body interactions, unlike two-body interactions, can implement any unitary operation on an encoded system in such a way that the evolution is uninterrupted by noise that the encoding is designed to protect against. Such ‘error-transparent’ evolution is distinct from that usually considered in quantum computing, as the latter is merely correctable. We prove that the minimum body-ness required to protect (i) a qubit from a single type of Pauli error, (ii) a target qubit from a controller with such errors and (iii) a single qubit from all errors is three-body, four-body and five-body, respectively. We also discuss applications to computing, coherent feedback control and quantum metrology. Finally, we evaluate the performance of error-transparent evolution for some examples using numerical simulations. (paper)

  5. THE EFFECT OF UNEQUAL DISTRIBUTION OF THE STANDARD AND QUALITY OF

    Directory of Open Access Journals (Sweden)

    JELENA TOSKOVIC

    2015-10-01

    Full Text Available In the early 1990s the Western Balkan countries entered the transition process, which involved moving from a centrally planned to a market economy. These countries were thus obliged to adhere to the basic neoliberal principles of the Washington Consensus, which promoted liberalization, privatization and stabilization. This model was applied with the same guidelines in all transition countries, and thus in the countries of the Western Balkans (Albania, Bosnia and Herzegovina, FYR Macedonia, Montenegro, Serbia). The consequence, however, was an economic and social crisis that led to decades of growing economic inequality. The distortions of unequal distribution produced an economic and social stratification that blocked the way out of the economic crisis in which these countries found themselves. In this paper we analyze the Gini coefficient, the standard of living, and quality-of-life indicators that affect the lives of the inhabitants of the Western Balkans. The Gini coefficient, the most commonly used measure of economic inequality, measures the extent to which the distribution of income or consumption expenditure among households or individuals within an economy deviates from an even distribution. The standard of living is determined by the totality of the conditions of life and work of the various strata of the population of a country in a given period; it is related to quality of life and is used for comparative reviews of geographic areas, of the number of vacation days, and of different periods in history, among other things. Quality of life is an intangible notion that includes the assessment of factors such as employment, income, health, education, science, energy, knowledge and technology, the environment, human rights, protection and recreation, infrastructure, national security and public safety. All these factors combine to affect the life of the population, which from the beginning of the …

  6. A Slicing Tree Representation and QCP-Model-Based Heuristic Algorithm for the Unequal-Area Block Facility Layout Problem

    Directory of Open Access Journals (Sweden)

    Mei-Shiang Chang

    2013-01-01

    Full Text Available The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Exploiting the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions, and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic. The proposed harmony-search-based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithms, tabu search, and ant systems, as well as by exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For the other problems, the proposed approach finds solutions very close to the previous best-known solutions.

  7. 19 CFR 173.1 - Authority to review for error.

    Science.gov (United States)

    2010-04-01

    19 Customs Duties, Vol. 2 (2010-04-01): Authority to review for error. Section 173.1, U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED), ADMINISTRATIVE REVIEW IN GENERAL, § 173.1 Authority to review for error. Port directors...

  8. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution in which the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
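
    The FEC building block of such a system can be illustrated with the simplest erasure code, a single XOR parity packet that repairs any one lost packet in a block; this sketch is our own illustration, not the PAR or SLEP protocol itself:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    """Parity packet that can repair any single lost packet of the block."""
    return reduce(xor_bytes, packets)

data = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(data)

# Suppose packet 1 is lost: XOR of the survivors and the parity recovers it.
recovered = reduce(xor_bytes, [data[0], data[2], parity])
assert recovered == data[1]
```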

  9. Entanglement renormalization, quantum error correction, and bulk causality

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-04-07

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error-correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of the holographic quantum error-correcting code emerges at low energy for critical systems. This implies that two operators that are widely separated in scale behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson-type locality bound under a time evolution generated by a local Hamiltonian.

  10. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    Directory of Open Access Journals (Sweden)

    Silvia de Haan-Rietdijk

    2017-10-01

    Full Text Available The Experience Sampling Method (ESM) is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that they are often unequally spaced, because the measurement intervals within a day are deliberately varied and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times to the crudest possible DT implementation, in which even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
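
    The core of the problem is that in a continuous-time first-order model the autoregressive effect over an interval Δt is exp(-θΔt), so it shrinks as the gap grows, whereas a DT AR(1) model forces one coefficient onto all intervals. A small simulation sketch (our own illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 0.5, 1000                  # CT drift parameter, series length

# Unequally spaced measurement times, as in an ESM design (hours between beeps).
gaps = rng.uniform(0.5, 3.0, size=n)

# Simulate a CT (Ornstein-Uhlenbeck) process observed at those times.
x = np.zeros(n)
for t in range(1, n):
    phi = np.exp(-theta * gaps[t])    # interval-specific AR effect
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(1.0 - phi**2))

# A naive DT AR(1) fit treats all intervals as equal and estimates a single phi,
# even though the true autoregressive effect differs for every interval.
phi_dt = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
print(phi_dt, np.exp(-theta * gaps.mean()))
```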

  11. Cross-Country Variation in Adult Skills Inequality: Why Are Skill Levels and Opportunities so Unequal in Anglophone Countries?

    Science.gov (United States)

    Green, Andy; Green, Francis; Pensiero, Nicola

    2015-01-01

    This article examines cross-country variations in adult skills inequality and asks why skills in Anglophone countries are so unequal. Drawing on the Organization for Economic Cooperation and Development's recent Survey of Adult Skills and other surveys, it investigates the differences across countries and country groups in inequality in both…

  12. The interaction of human population, food production, and biodiversity protection.

    Science.gov (United States)

    Crist, Eileen; Mora, Camilo; Engelman, Robert

    2017-04-21

    Research suggests that the scale of the human population and the current pace of its growth contribute substantially to the loss of biological diversity. Although technological change and unequal consumption inextricably mingle with demographic impacts on the environment, the needs of all human beings, especially for food, imply that projected population growth will undermine protection of the natural world. Numerous solutions have been proposed to boost food production while protecting biodiversity, but alone these proposals are unlikely to staunch biodiversity loss. An important approach to sustaining biodiversity and human well-being is through actions that can slow and eventually reverse population growth: investing in universal access to reproductive health services and contraceptive technologies, advancing women's education, and achieving gender equality. Copyright © 2017, American Association for the Advancement of Science.

  13. Unequal Marriages within the Russian Imperial Home and the 1911 Meeting of the Grand Dukes

    Directory of Open Access Journals (Sweden)

    Stanislav V. Dumin

    2013-12-01

    Full Text Available In celebrating the 400th anniversary of the Russian Imperial Household in 2013, it is important to remember that the historic dynasty did not disappear in 1917 and, like earlier ruling dynasties, it still retains its status and its structure based on the Law on Succession and the Provision on the Imperial Family. These documents define, in part, that the dynasty includes only the Romanov descendants born of marital unions with ruling or previously ruling dynasties. The remaining Romanov descendants, born of unequal, morganatic marriages, did not belong to the Russian Imperial Family and, correspondingly, did not have a right to the throne. Until 1911, such marriages were simply forbidden to all members of the dynasty. In mentioning representatives of the Romanov family, popular literature and the media often overlook this circumstance and instead include individuals who were not part of the dynasty, even though they descended from the Russian Emperors through the paternal or the maternal line. Representatives of the so-called "Association of the Romanov Household", descendants of Grand Dukes or Dukes of Imperial Blood from unequal marriages, often point to a decree issued by Nicholas II in 1911. The decree stated that the lesser Romanovs, dukes and duchesses of imperial blood, that is, great-grandchildren and more distant descendants of Emperors, could enter into unequal marriages with Royal permission. However, the theory that this decree somehow still conferred dynastic rights on these descendants is refuted by the materials we have uncovered in the State Archives of the Russian Federation detailing the meeting of the Grand Dukes, called together by order of Nicholas II, and by the Emperor's resolution at the end of this meeting, which is published in this article. As such, out of all the Romanov descendants still alive today, the status of being a true member of the dynasty only …

  14. Statistical Power and Optimum Sample Allocation Ratio for Treatment and Control Having Unequal Costs Per Unit of Randomization

    Science.gov (United States)

    Liu, Xiaofeng

    2003-01-01

    This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…

  15. Material conditions of kindergartens as producers of experiences: uses of diversity and unequal relationships

    Directory of Open Access Journals (Sweden)

    Lucía Petrelli

    2017-09-01

    Full Text Available This article presents preliminary results of socio-anthropological research carried out in Early Education schools in the City of Buenos Aires. It focuses on the uses of sociocultural diversity and unequal relationships, and on their articulation with the material conditions of everyday life in the institutions where fieldwork took place. It includes the analytical description of a social situation, the visit of an officer of the Ministry of Education to one of the schools where the research was carried out. This description shows the ways in which the different institutional subjects (teachers, parents, school directors, the staff member) refer to and produce the spaces in which their practices take place and, simultaneously, struggle for their places and put their diverse and unequal relationships under strain. Afterwards, considering the Early Education schools of a whole school district of the city, we analyze the features of the "educational offer" for initial schooling, the incorporation of devices such as cell phones into the everyday educational work of the kindergartens as an aid to the teachers' work, and the beginnings of online school enrolment. These issues allow us to highlight the recent history of Argentina's Early Education teaching. The article shows that materiality, as the ethnographic descriptions evidence, makes it possible to understand how relations of diversity and inequality come together today.

  16. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    Science.gov (United States)

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

    Objectives: To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRTs) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Setting: Any, not limited to healthcare settings. Participants: Any taking part in an SW-CRT published up to March 2016. Outcome measures: The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes, and whether the methods of sample size calculation and methods of analysis accounted for any variability in cluster sizes. Results: Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of those that had been assumed. Conclusions: Cluster sizes often vary in SW-CRTs, and reporting of SW-CRTs remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
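
    The primary outcome, the coefficient of variation, is simply the standard deviation of the cluster sizes divided by their mean; a minimal computation (cluster sizes invented for illustration):

```python
import statistics

cluster_sizes = [40, 55, 120, 80, 33, 150, 60]  # hypothetical cluster sizes

mean = statistics.mean(cluster_sizes)
sd = statistics.stdev(cluster_sizes)            # sample standard deviation
cv = sd / mean                                  # coefficient of variation

print(f"CV = {cv:.2f}")  # values near the reported median of 0.41 are typical
```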

  17. Experimental quantum error correction with high fidelity

    International Nuclear Information System (INIS)

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-01-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC and demonstrated that the error rate changed from ε to ∼ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
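
    The ε to ∼ε² improvement is characteristic of any distance-3 code: a single error is corrected, so failure requires at least two. A Monte Carlo sketch for the three-qubit bit-flip repetition code (our own illustration, not the NMR experiment):

```python
import random

def logical_failure(eps: float, trials: int = 100_000) -> float:
    """Estimate the logical flip rate of the 3-qubit bit-flip code.

    Majority voting corrects any single flip, so failure needs >= 2 flips;
    the leading term is 3 * eps**2, i.e. the eps -> ~eps**2 improvement.
    """
    fails = 0
    for _ in range(trials):
        flips = sum(random.random() < eps for _ in range(3))
        fails += flips >= 2
    return fails / trials

eps = 0.05
print(logical_failure(eps), 3 * eps**2)  # ~0.007 vs 0.0075
```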

  18. Differences between Inequalities and Unequal Exchange: Comments on the Papers by Chaves and Köhler

    OpenAIRE

    Raffer, Kunibert

    2006-01-01

    Köhler's critique of global wages, where he presents the concept of productivity with great clarity, combines very well with Chaves' presentation of Köhler's model of Unequal Exchange (UE). A brief and solid common position emerges. As I wrote that "the dimension of non-equivalence in a strict, logical sense" can only be shown by comparing real wages, I fully second Köhler's use of Purchase Power Parity (PPP)-data. In the 1980s, I explicitly referred to the research on PPP comparisons. Theref...

  19. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing, a prerequisite of error classification in our paradigm, leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. The Capability Threshold: Re-examining the Definition of the Middle Class in an Unequal Developing Country

    OpenAIRE

    Burger, Ronelle; McAravey, Camren; van der Berg, Servaas

    2015-01-01

    In a polarised and highly unequal country such as South Africa, it is unlikely that a definition of the middle class that is based on an income threshold will adequately capture the political and social meanings of being middle class. We therefore propose a multi-dimensional definition, rooted in the ideas of empowerment and capability, and find that the 'empowered middle class' has expanded significantly since 1993 also across vulnerable subgroups such as blacks, female-headed households and...

  1. The "Hamburger Connection" as Ecologically Unequal Exchange: A Cross-National Investigation of Beef Exports and Deforestation in Less-Developed Countries

    Science.gov (United States)

    Austin, Kelly

    2010-01-01

    This study explores Norman Myers's concept of the "hamburger connection" as a form of ecologically unequal exchange, where more-developed nations are able to transfer the environmental costs of beef consumption to less-developed nations. I used ordinary least squares (OLS) regression to test whether deforestation in less-developed…

  2. The 1996 European Directive and radiation protection at CERN, or why 15 plus 4 is unequal to 19

    International Nuclear Information System (INIS)

    Hoefert, M.

    1997-04-01

    The recommendations of the 1996 EU Directive on radiation protection are compared with the practice at CERN as laid down in the 1996 Radiation Safety Manual which is largely based on the Swiss Radiation Protection Ordinance of 1994. The three topics discussed are individual dosimetry for persons exposed in the exercise of their profession, exemption values and clearance levels for radioactivity and committed effective dose coefficients, and reference levels for members of the public. (author)

  3. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and, by extensive repetition, reveals the effect of errors in various reactor input or sensor response data. In the approach of the example presented, the errors accumulated by the signature (a set of "noise" frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal-processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.

  4. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    Science.gov (United States)

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  5. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    This scientific article focuses on the notion of medical error; its medical and legal aspects are considered, and the necessity of legislative consolidation of the notion of "medical error" and of criteria for its legal assessment is argued. In writing the article we used the empirical method and general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the point of view of Ukrainian, European and American scholars. The problem of medical errors has been known since ancient times and throughout the world; regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the first five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. As a person's life and health are acknowledged in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services causes harm to health, and sometimes to people's lives; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore, the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of "medical error" must receive legislative consolidation. Moreover, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  6. A Low-Complexity UEP Methodology Demonstrated on a Turbo-Encoded Wavelet Image Satellite Downlink

    Directory of Open Access Journals (Sweden)

    Salemi Eric

    2008-01-01

    Full Text Available Realizing high-quality digital image transmission via a satellite link, while optimizing resource distribution and minimizing battery consumption, is a challenging task. This paper describes a methodology to optimize a turbo-encoded wavelet-based satellite downlink progressive image transmission system with unequal error protection (UEP) techniques. To achieve that goal, we instantiate a generic UEP methodology onto the system and demonstrate that the proposed solution has little impact on the average performance while greatly reducing the run-time complexity. Based on a simple design-time distortion model and a low-complexity run-time algorithm, the provided solution can dynamically tune the system's configuration to any bitrate constraint or channel condition. The resulting system outperforms, in terms of peak signal-to-noise ratio (PSNR), a state-of-the-art, fine-tuned equal error protection (EEP) solution by as much as 2 dB.
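
    At its core, the run-time algorithm distributes a parity budget across quality layers so that the expected distortion drops fastest; a heavily simplified greedy allocation in the same spirit (all numbers invented; this is not the paper's distortion model):

```python
# Toy UEP bit-budget allocation: give each extra parity packet to the layer
# where it buys the largest drop in expected distortion.

gains = [30.0, 10.0, 3.0]  # distortion removed if layer i decodes (base first)
p0 = 0.20                  # residual loss probability with no parity (toy)
r = 0.5                    # each parity packet halves the residual loss (toy)
budget = 12                # parity packets to distribute

parity = [0] * len(gains)
for _ in range(budget):
    # Marginal benefit of one more parity packet on each layer.
    benefit = [g * p0 * (r ** k) * (1.0 - r) for g, k in zip(gains, parity)]
    parity[benefit.index(max(benefit))] += 1

print(parity)  # [6, 4, 2]: the base layer gets the strongest protection
```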

  7. A Low-Complexity UEP Methodology Demonstrated on a Turbo-Encoded Wavelet Image Satellite Downlink

    Directory of Open Access Journals (Sweden)

    Eric Salemi

    2008-01-01

    Full Text Available Realizing high-quality digital image transmission via a satellite link, while optimizing resource distribution and minimizing battery consumption, is a challenging task. This paper describes a methodology to optimize a turbo-encoded wavelet-based satellite downlink progressive image transmission system with unequal error protection (UEP) techniques. To achieve that goal, we instantiate a generic UEP methodology onto the system and demonstrate that the proposed solution has little impact on the average performance while greatly reducing the run-time complexity. Based on a simple design-time distortion model and a low-complexity run-time algorithm, the provided solution can dynamically tune the system's configuration to any bitrate constraint or channel condition. The resulting system outperforms, in terms of peak signal-to-noise ratio (PSNR), a state-of-the-art, fine-tuned equal error protection (EEP) solution by as much as 2 dB.

  8. CLIM : A cross-level workload-aware timing error prediction model for functional units

    NARCIS (Netherlands)

    Jiao, Xun; Rahimi, Abbas; Jiang, Yu; Wang, Jianguo; Fatemi, Hamed; De Gyvez, Jose Pineda; Gupta, Rajesh K.

    2018-01-01

    Timing errors, caused by timing violations on sensitized circuit paths, have emerged as an important threat to the reliability of synchronous digital circuits. To protect circuits from these timing errors, designers typically use a conservative timing margin, which leads to operational …

  9. Pharyngitis – fatal infectious disease or medical error?

    Directory of Open Access Journals (Sweden)

    Marta Rorat

    2015-08-01

    Full Text Available Reporting on adverse events is essential to creating a culture of safety, which focuses on protecting doctors and patients from medical errors. We present a fatal case of Streptococcus C pharyngitis in a 56-year-old man. The clinical course and the results of additional diagnostics and the autopsy showed that sepsis followed by multiple organ failure was the ultimate cause of death. The course proved fatal due to a chain of adverse events, including errors made by the physicians caring for the patient over 10 days.

  10. Design Of Photovoltaic Powered Cathodic Protection System

    Directory of Open Access Journals (Sweden)

    Golina Samir Adly

    2017-07-01

    Full Text Available Corrosion is caused by a chemical reaction between metallic structures and surrounding media such as soil or water. A cathodic protection (CP) system is used to protect metallic structures against corrosion: it minimizes corrosion by utilizing an external source of electrical current which forces the entire structure to become a cathode. There are two types of cathodic protection system, galvanic (sacrificial anode) and impressed current, in which a DC power supply drives the protective current. In a galvanic system, a sacrificial anode is connected to the protected structure (the cathode), a current passes from the sacrificial anode to the protected structure, and the anode corrodes rather than the protected structure. The protected structure requires a constant current to stop the corrosion, determined by the structure's area and metal and by the surrounding medium. Rain and humidity decrease soil resistivity and increase the DC current, and the corrosion and overprotection resulting from an increased DC current are harmful to the metallic structure. In a conventional cathodic protection system this problem is handled by periodic manual adjustment of the DC voltage to obtain a constant current; such adjustment depends on the experience of the technician and on the accuracy of the measuring equipment, and measurement errors may come from either. The structure may also corrode when the interval between two successive adjustments is long. An automatically regulated cathodic protection system overcomes these problems: it adjusts the DC voltage automatically when it senses variations in the resistivity of the surrounding medium, so that the DC current stays constant at the required level.
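
    The automatic regulation described here is, at heart, a feedback loop that holds the protection current at a set point; a minimal proportional-control sketch (names and values are our own illustration):

```python
def regulate_voltage(v: float, i_measured: float, i_target: float,
                     gain: float = 0.5) -> float:
    """One step of automatic DC-voltage regulation for a CP rectifier.

    If soil resistivity drops (current rises), lower the voltage;
    if resistivity rises (current falls), raise it.
    """
    return v + gain * (i_target - i_measured)

v = 12.0                        # volts (illustrative)
for i_meas in (1.4, 1.7, 2.3):  # amperes measured as soil conditions change
    v = regulate_voltage(v, i_meas, i_target=2.0)
    print(round(v, 2))          # 12.3, 12.45, 12.3
```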

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management, an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances, one being the development of a mind-set of acceptance of human error.

  12. Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    2012-01-01

    Full Text Available A reliable video communication system is proposed, based on the data partitioning feature of H.264/AVC, used to create a layered stream, and on LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution to varying channel conditions. Results for the proposed scheme are also compared with those for slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. They also demonstrate that high visual quality video transmission can be maintained despite the adverse effects of varying channel conditions and that the number of decoding failures can be reduced.

  13. Novel UEP LT Coding Scheme with Feedback Based on Different Degree Distributions

    Directory of Open Access Journals (Sweden)

    Li Ya-Fang

    2016-01-01

    Full Text Available Traditional unequal error protection (UEP) schemes have some limitations and problems, such as poor UEP performance for high-priority data and serious sacrifice of the decoding performance of low-priority data. Based on a reasoned application of different degree distributions in LT codes, this paper puts forward a novel UEP LT coding scheme with a simple feedback that encodes these classes of data packets separately. Simulation results show that the proposed scheme can effectively protect high-priority data and improve the transmission efficiency of low-priority data from 2.9% to 22.3%. Furthermore, the scheme is well suited to multicast and broadcast environments, since only a simple feedback is introduced.
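
    The degree distributions in question are typically drawn from the soliton family; a sketch of the standard robust soliton construction (our implementation of the textbook formulas, not the authors' modified distributions):

```python
import math

def robust_soliton(k: int, c: float = 0.1, delta: float = 0.5) -> list[float]:
    """Robust soliton degree distribution over degrees 1..k."""
    s = c * math.log(k / delta) * math.sqrt(k)   # expected ripple size
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = []
    for d in range(1, k + 1):
        if d < int(k / s):
            tau.append(s / (k * d))
        elif d == int(k / s):
            tau.append(s * math.log(s / delta) / k)
        else:
            tau.append(0.0)
    z = sum(rho) + sum(tau)                      # normalization constant
    return [(r + t) / z for r, t in zip(rho, tau)]

dist = robust_soliton(k=100)
print(dist[0], dist[1])  # probabilities of sampling degree 1 and degree 2
```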

  14. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
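
    The distinction can be seen already for a single qubit: coherent over-rotations add amplitudes, so n repetitions fail with probability sin²(nε) ≈ n²ε², while a Pauli (stochastic) model of the same error adds probabilities, giving ≈ nε². A toy comparison (ours; the paper's analysis concerns the repetition code):

```python
import math

eps, n = 0.01, 50  # per-step rotation angle and number of repetitions

# Coherent error: rotation angles add, so failure grows quadratically in n.
p_coherent = math.sin(n * eps) ** 2

# Pauli approximation: each step flips independently with p = sin(eps)**2;
# the net flip probability is the chance of an odd number of flips.
p_step = math.sin(eps) ** 2
p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p_step) ** n)

print(p_coherent, p_pauli)  # ~0.23 vs ~0.005: the Pauli model is optimistic
```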

  15. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices; error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes, and a detailed study of this parameter allows the behavior of an error-correcting code to be analyzed when errors are injected into the encoding device. The complexity of the encoding function also plays an important role: encoding functions with low computational complexity and a low masking probability are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the distribution of the error masking probability. It is shown that a more complex encoding function reduces the maximum of the error masking probability and changes its distribution; in particular, increasing the computational complexity decreases the difference between the maximum and the average value of the error masking probability. Our results show that functions of greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, with a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking probability.
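
    For a plain linear code the masking behavior is easy to compute by brute force: an injected error e is masked for codeword x exactly when x XOR e is again a codeword, so Q(e) is 1 when e is a codeword and 0 otherwise. Security-oriented codes are designed precisely to avoid this all-or-nothing behavior. A toy check (our illustration):

```python
from itertools import product

# Toy [4, 2] linear code over GF(2), given by two generator rows.
G = [(1, 0, 1, 1), (0, 1, 0, 1)]

def encode(msg):
    """Codeword = GF(2) linear combination of the generator rows."""
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = {encode(m) for m in product((0, 1), repeat=2)}

def masking_probability(e):
    """Fraction of codewords x for which x XOR e is again a codeword."""
    masked = sum(tuple(a ^ b for a, b in zip(x, e)) in codewords
                 for x in codewords)
    return masked / len(codewords)

print(masking_probability((1, 0, 1, 1)))  # e is itself a codeword -> 1.0
print(masking_probability((1, 0, 0, 0)))  # e is not a codeword   -> 0.0
```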

  16. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test, aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  18. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    Science.gov (United States)

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. © 2013 APA, all rights reserved.

  19. [Maternal death: unequal risks].

    Science.gov (United States)

    Defossez, A C; Fassin, D

    1989-01-01

    rates include political, geographic, and economic mechanisms of exclusion which affect the vast majority of the population in developing countries. Political power is concentrated in the hands of relatively small groups whose decisions about such expenditures as health care are usually more favorable to the privileged. A consequence of the very unequal regional development in most Third World countries is that health, educational, and most other resources are concentrated in large cities and perhaps 1 or 2 strategic regions, leaving most of the population underserved. The low social position of women leaves them doubly vulnerable. The social factors adding to risks of maternal mortality should be considered in programs of prevention if the causes and not just the consequences are to be addressed.

  20. Agricultural injuries in Korea and errors in systems of safety

    Directory of Open Access Journals (Sweden)

    Hyocher Kim

    2016-07-01

    It was found that most agricultural injuries were caused by a complex layer of root causes, which were classified as errors in the systems of safety. This result indicates that not only training and personal protective equipment, but also regulation of safety design, mitigation devices, inspection and maintenance of workplaces, and other factors play an important role in preventing agricultural injuries. The identification of these errors will help farmers to easily implement an effective prevention programme.

  1. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.

    Science.gov (United States)

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-06-17

    Fog (from core to edge) computing is a newly emerging computing platform which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiencies of the LM scheme with respect to network lifetime and storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for nodes according to their distances to the sink: a large marking probability is assigned to nodes in remote areas (at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. Experimental simulation shows that, for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency of other schemes.
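
    A minimal sketch of the distance-dependent marking rule: the linear form and the parameter values below are our own illustration, not the assignment derived in the paper:

```python
def marking_probability(hops_to_sink: int, max_hops: int,
                        p_min: float = 0.05, p_max: float = 0.6) -> float:
    """Marking probability that grows with distance from the sink.

    Nodes far from the sink (remote areas) mark with high probability,
    nearby nodes with low probability, following the UPLM idea.
    """
    frac = hops_to_sink / max_hops  # 0 near the sink, 1 at the network edge
    return p_min + (p_max - p_min) * frac

for h in (1, 5, 10):
    print(h, round(marking_probability(h, max_hops=10), 3))
# 1 -> 0.105 (near sink), 5 -> 0.325, 10 -> 0.6 (remote)
```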

  2. Burnout, engagement and resident physicians' self-reported errors.

    Science.gov (United States)

    Prins, J T; van der Heijden, F M M A; Hoekstra-Weebers, J E H M; Bakker, A B; van de Wiel, H B M; Jacobs, B; Gazendam-Donofrio, S M

    2009-12-01

    Burnout is a work-related syndrome that may negatively affect more than just the resident physician. Engagement, on the other hand, has been shown to protect employees and may also positively affect the patient care that residents provide. Little is known about the relationship between residents' self-reported errors and burnout and engagement. In our national study, which included all residents and physicians in The Netherlands, 2115 questionnaires were returned (response rate 41.1%). The residents reported on burnout (Maslach Burnout Inventory-Health and Social Services), engagement (Utrecht Work Engagement Scale) and self-assessed patient care practices (six items, two factors: errors in action/judgment, errors due to lack of time). Ninety-four percent of the residents reported making one or more mistakes without negative consequences for the patient during their training. Seventy-one percent reported performing procedures for which they did not feel properly trained. More than half (56%) of the residents stated they had made a mistake with a negative consequence. Seventy-six percent felt they had fallen short in the quality of care they provided on at least one occasion. Men reported more errors in action/judgment than women. Significant effects of specialty and clinical setting were found on both types of errors. Residents with burnout reported significantly more errors, whereas engaged residents reported fewer errors. These findings underline the importance of preventing burnout and of keeping residents engaged in their work.

  3. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g., hospital, education, and law-and-order buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  4. Graceful Degradation in 3GPP MBMS Mobile TV Services Using H.264/AVC Temporal Scalability

    Directory of Open Access Journals (Sweden)

    Thomas Wiegand

    2009-01-01

    Full Text Available These days, there is an increasing interest in Mobile TV broadcast services among customers as well as service providers. One general problem of Mobile TV broadcast services is to maximize the coverage of users receiving an acceptable service quality, which is mainly influenced by the user's position and mobility within the cell. In this paper, graceful degradation is considered as an approach for improved service availability and coverage. We present a layered transmission approach for 3GPP's Release 6 Multimedia Broadcast/Multicast Service (MBMS) based on temporal scalability using the H.264/AVC Baseline Profile. A differentiation in robustness between temporal quality layers is achieved by an unequal error protection approach based either on application-layer Forward Error Correction (FEC), on unequal transmit power for the layers, or even on a combination of both. We discuss the corresponding MBMS service and network settings and define measures for evaluating the number of users reached with a certain mobile terminal play-out quality while considering the network cell capacity usage. Using simulated 3GPP Rel. 6 network conditions, we show that if the service and network settings are chosen carefully, a noticeable extension of the coverage of the MBMS service can be achieved.
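
    The effect of unequal FEC strength per layer can be quantified with a binomial tail: an (n, k) erasure code fails only when more than n - k of its n packets are lost. A sketch with invented parameters (not the paper's simulation settings):

```python
from math import comb

def residual_loss(n: int, k: int, p: float) -> float:
    """Probability that an (n, k) FEC block is unrecoverable at loss rate p.

    Decoding succeeds whenever at most n - k of the n packets are lost.
    """
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

p = 0.1                          # channel packet-loss rate (illustrative)
print(residual_loss(24, 16, p))  # base layer, strong protection  (~3e-4)
print(residual_loss(20, 16, p))  # enhancement layer, weaker FEC  (~4e-2)
```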

  5. Radiation protection: A correction

    International Nuclear Information System (INIS)

    1972-01-01

    An error in translation inadvertently distorted the sense of a paragraph in the article entitled 'Ecological Aspects of Radiation Protection', by Dr. P. Recht, which appeared in the Bulletin, Volume 14, No. 2 earlier this year. In the English text the error appears on Page 28, second paragraph, which reads, as published: 'An instance familiar to radiation protection specialists, which has since come to be regarded as a classic illustration of this approach, is the accidental release at the Windscale nuclear centre in the north of England.' In the French original of this text no reference was made, or intended, to the accidental release which took place in 1957; the reference was to the study of the critical population group exposed to routine releases from the centre, as the footnote made clear. A more correct translation of the relevant sentence reads: 'A classic example of this approach, well-known to radiation protection specialists, is that of releases from the Windscale nuclear centre, in the north of England.' A second error appeared in the footnote already referred to. In all languages, the critical population group studied in respect of the Windscale releases is named as that of Cornwall; the reference should be, of course, to that part of the population of Wales who eat laver bread. (author)

  6. Efficient detection of dangling pointer error for C/C++ programs

    Science.gov (United States)

    Zhang, Wenzhe

    2017-08-01

    Dangling pointer error is pervasive in C/C++ programs and is very hard to detect. This paper introduces an efficient detector for dangling pointer errors in C/C++ programs. By selectively leaving some memory accesses unmonitored, our method reduces the memory monitoring overhead and thus achieves better performance than previous methods. Experiments show that our method achieves an average speedup of 9% over a previous method based on compiler instrumentation and of more than 50% over a previous method based on page protection.

  7. Binary palmprint representation for feature template protection

    NARCIS (Netherlands)

    Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2012-01-01

    The major challenge of biometric template protection comes from the intra-class variation of biometric data. The helper data scheme aims to solve this problem by employing Error Correction Codes (ECC). However, many reported biometric binary features from the same user reach a bit error rate (BER) …
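
    The helper data scheme can be sketched as a fuzzy commitment: the enrollment template is XOR-ed with a random ECC codeword, and the code's error-correcting capacity absorbs the intra-class bit errors at verification. A toy version with a 3-fold repetition code (our own; deployed schemes use far stronger codes precisely because biometric BERs are high):

```python
import secrets

R = 3  # repetition factor of the toy ECC

def ecc_encode(bits: list[int]) -> list[int]:
    """Repeat every key bit R times."""
    return [b for b in bits for _ in range(R)]

def ecc_decode(bits: list[int]) -> list[int]:
    """Majority vote per R-bit group: corrects up to 1 flip per group."""
    return [int(sum(bits[i:i + R]) > R // 2) for i in range(0, len(bits), R)]

def enroll(template: list[int]) -> tuple[list[int], list[int]]:
    key = [secrets.randbelow(2) for _ in range(len(template) // R)]
    helper = [t ^ c for t, c in zip(template, ecc_encode(key))]
    return helper, key  # helper is stored; key is hashed and compared later

def verify(noisy_template: list[int], helper: list[int]) -> list[int]:
    return ecc_decode([t ^ h for t, h in zip(noisy_template, helper)])

template = [1, 1, 0, 0, 1, 1, 0, 1, 0]  # 9-bit toy biometric template
helper, key = enroll(template)
noisy = template[:]
noisy[4] ^= 1                           # one intra-class bit error
assert verify(noisy, helper) == key     # key recovered despite the error
```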

  8. Multi-bits error detection and fast recovery in RISC cores

    International Nuclear Information System (INIS)

    Wang Jing; Yang Xing; Zhang Weigong; Shen Jiao; Qiu Keni; Zhao Yuanfu

    2015-01-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever increasing due to the rapidly shrinking feature size of the ICs on a chip. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery with low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect particle-induced soft errors up to 100% and recover nearly 95% of them; the other 5% enter a specific trap. (paper)

  9. Multi-bits error detection and fast recovery in RISC cores

    Science.gov (United States)

    Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu

    2015-11-01

    Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever increasing due to the rapidly shrinking feature size of the ICs on a chip. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery with low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that the SRDP can detect particle-induced soft errors up to 100% and recover nearly 95% of them; the other 5% enter a specific trap.

  10. Automated reactor protection testing saves time and avoids errors

    International Nuclear Information System (INIS)

    Raimondo, E.

    1990-01-01

    When the Pressurized Water Reactor units in the French 900MWe series were designed, the instrumentation and control systems were equipped for manual periodic testing. Manual reactor protection system testing has since been successfully replaced by an automatic system, which is also applicable to other instrumentation testing. A study on the complete automation of process instrumentation testing has been carried out. (author)

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. The protection of the Cyrillic alphabet in telecommunications: Taxation aspects

    Directory of Open Access Journals (Sweden)

    Marilović Đorđe

    2016-01-01

    Full Text Available The use of Cyrillic and other specific alphabets is discouraged in some telecommunication services. In this paper, the author focuses on the unequal treatment of the Cyrillic alphabet in telecommunications (in SMS messages), which is incompatible with the interest of a multilingual society in cherishing its linguistic heritage and diversity. Referring to the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (2005), the author suggests introducing measures which would lead to removing the discriminatory pricing of Cyrillic SMS messages, and introducing tax measures which would support mobile network operators and prevent possible market inequalities stemming from these measures. The suggested solution is applicable to any multicultural society facing the same problem, regardless of the languages in question.

  13. Recognition Memory zROC Slopes for Items with Correct versus Incorrect Source Decisions Discriminate the Dual Process and Unequal Variance Signal Detection Models

    Science.gov (United States)

    Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.

    2014-01-01

    We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…

  14. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants. • Previous focus on quantifying and accounting for exposure error in single-pollutant models. • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  15. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be reduced significantly, by a factor increasing with the code distance.
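
    As a rough illustration of the underlying idea (Gaussian-process regression over past error-correction data to estimate and predict a time-varying error rate), here is a minimal numpy sketch. The kernel choice, hyperparameters, and synthetic data are assumptions made for the example, not the authors' protocol settings.

        import numpy as np

        def rbf_kernel(t1, t2, length_scale=20.0, variance=2.5e-7):
            """Squared-exponential covariance between two vectors of times."""
            d = t1[:, None] - t2[None, :]
            return variance * np.exp(-0.5 * (d / length_scale) ** 2)

        # Synthetic per-round error-rate estimates: a slowly drifting physical
        # error rate observed through noisy syndrome statistics (assumed data).
        rng = np.random.default_rng(0)
        t_obs = np.arange(0.0, 200.0, 5.0)
        true_rate = 1e-3 * (1.0 + 0.5 * np.sin(t_obs / 40.0))
        y_obs = true_rate + rng.normal(0.0, 1e-4, t_obs.size)

        # Standard GP regression: posterior mean and variance at new times.
        t_pred = np.arange(0.0, 240.0, 1.0)
        noise_var = (1e-4) ** 2
        K = rbf_kernel(t_obs, t_obs) + noise_var * np.eye(t_obs.size)
        K_s = rbf_kernel(t_pred, t_obs)
        mean = y_obs.mean() + K_s @ np.linalg.solve(K, y_obs - y_obs.mean())
        var = rbf_kernel(t_pred, t_pred).diagonal() - np.einsum(
            "ij,ji->i", K_s, np.linalg.solve(K, K_s.T))

        # Predicted (extrapolated) error rate a few rounds ahead of the data.
        print("rate at t=220:", mean[220], "+/-", np.sqrt(max(var[220], 0.0)))

    A decoder could then weight syndrome information by such predicted rates rather than by a fixed calibration value.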

  16. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the treatment-effect estimator under equal cluster sizes to that under unequal cluster sizes. We discuss a correlation structure commonly used in CRTs, the exchangeable structure, and derive simpler formulas for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to account for the efficiency loss. Additionally, we propose an optimal sample size estimation based on GEE models under a fixed budget, for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
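
    For the continuous-outcome case with an exchangeable working correlation, the variance of the treatment-effect estimator is driven by cluster-level information terms of the form n_i / (1 + (n_i - 1)ρ), so the RE reduces to a ratio of two such sums. The sketch below computes this quantity for assumed cluster-size distributions; the formula is the standard exchangeable-correlation result and the size distributions are invented, so this is an illustration of the idea rather than the paper's derivation.

        import numpy as np

        def information(sizes, rho):
            """Sum of per-cluster information terms under an exchangeable
            working correlation (continuous outcome)."""
            sizes = np.asarray(sizes, dtype=float)
            return np.sum(sizes / (1.0 + (sizes - 1.0) * rho))

        def relative_efficiency(sizes, rho):
            """RE = Var(equal sizes) / Var(unequal sizes) for the same number
            of clusters and the same mean cluster size (so RE <= 1)."""
            sizes = np.asarray(sizes, dtype=float)
            equal = np.full(sizes.size, sizes.mean())
            return information(sizes, rho) / information(equal, rho)

        rng = np.random.default_rng(1)
        mild = rng.poisson(30, size=20) + 1                     # mild variation
        skewed = rng.gamma(1.0, 30.0, size=20).astype(int) + 1  # heavy variation

        for rho in (0.01, 0.05, 0.2):
            print(rho, relative_efficiency(mild, rho), relative_efficiency(skewed, rho))

    An adjusted number of clusters in the spirit of the authors' proposal can then be read off as k / RE to compensate for the efficiency loss.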

  17. Mains protection. 4. ed.; Netzschutztechnik

    Energy Technology Data Exchange (ETDEWEB)

    Schossig, Thomas [Omicron Electronics GmbH, Klaus (Austria); Schossig, Walter

    2013-06-01

    Besides describing the function of line protection equipment, transformer protection functions and protection devices, selective earth-fault detection, voltage regulation, and the control of detuning, information is given on selection, commissioning, and operational management. Special emphasis is put on general setting standards and audit recommendations. Transducers, auxiliary power supply, and switching fault detection, as well as classifications for equipment and circuit documents, are also covered. The updates particularly concern fault-clearing times, pickup reliability, protection of SF6 switchgear, power-direction and frequency load shedding, protection of decentralized power plants, selective earth-fault detection, and communication in switchboards.

  18. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  19. Autonomous Quantum Error Correction with Application to Quantum Metrology

    Science.gov (United States)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  20. Boys Go Fishing, Girls Work at Home: Gender Roles, Poverty and Unequal School Access among Semi-Nomadic Fishing Communities in South Western Madagascar

    Science.gov (United States)

    Nascimento Moreira, Catarina; Rabenevanana, Man Wai; Picard, David

    2017-01-01

    Drawing from data gathered in South Western Madagascar in 2011, the work explores the combination of poverty and traditional gender roles as a critical factor in determining unequal school access among young people from semi-nomadic fishing communities. It demonstrates that from the age of early puberty, most boys go fishing with their fathers and…

  1. Relay coordination in the protection of radially-connected power

    African Journals Online (AJOL)

    ... PROTECTION OF RADIALLY-CONNECTED POWER SYSTEM NETWORK ... Protective relays detect intolerable or unwanted conditions within an assigned area, and then trip or open one ... time, and current transformer ratio errors.

  2. Unequal subfamily proportions among honey bee queen and worker brood

    Science.gov (United States)

    Tilley; Oldroyd

    1997-12-01

    Queens from three colonies of feral honey bees, Apis mellifera, were removed and placed in separate nucleus colonies. For each colony, eggs and larvae were taken from the nucleus and placed in the main hive on each of 3-4 consecutive weeks. Workers in the queenless parts selected young larvae to rear as queens. Queen pupae, together with the surrounding worker pupae, were removed from each colony and analysed at two to three microsatellite loci to determine their paternity. In all three colonies, the paternity of larvae chosen by the bees to rear as queens was not a random sample of the paternities in the worker brood, with certain subfamilies being over-represented in queens. These results support an important prediction of kin selection theory: when colonies are queenless, unequal relatedness within colonies could lead to the evolution of reproductive competition, that is, some subfamilies achieving greater reproductive success than others. The mechanism by which such dominance is achieved could be through a system of kin recognition and nepotism, but we conclude that genetically based differential attractiveness of larvae for rearing as queens is more likely. Copyright 1997 The Association for the Study of Animal Behaviour.

  3. A Hybrid Fuzzy Multi-hop Unequal Clustering Algorithm for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shawkat K. Guirguis

    2017-01-01

    Full Text Available Clustering is carried out to explore and solve the power dissipation problem in wireless sensor networks (WSNs). Hierarchical network architecture, based on clustering, can reduce energy consumption, balance traffic load, improve scalability, and prolong network lifetime. However, clustering faces two main challenges: the hotspot problem and the search for effective techniques to perform clustering. This paper introduces a fuzzy unequal clustering technique for heterogeneous dense WSNs that determines both the final cluster heads and their radii. The proposed fuzzy system blends three effective parameters: the distance to the base station, the density of the cluster, and the deviation of the node's residual energy from the average network energy. Our objectives are gains in network lifetime, energy distribution, and energy consumption. To evaluate the proposed algorithm, WSN clustering-based routing algorithms are analyzed, simulated, and compared with the obtained results. These protocols are LEACH, SEP, HEED, EEUC, and MOFCA.
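
    As a toy illustration of blending such inputs into an unequal competition radius (a zero-order Sugeno-style system; the membership functions, rule base, and radius range are invented for the example and are not the paper's), consider:

        def tri(x, a, b, c):
            """Triangular membership function on [a, c] with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def grades(x):
            """Fuzzify a normalized input into LOW / MED / HIGH grades."""
            return {"low": tri(x, -0.5, 0.0, 0.5),
                    "med": tri(x, 0.0, 0.5, 1.0),
                    "high": tri(x, 0.5, 1.0, 1.5)}

        def competition_radius(dist_to_bs, density, energy_dev,
                               r_min=20.0, r_max=90.0):
            """All three inputs normalized to [0, 1]; returns a radius in
            metres. Clusters near the base station are kept small because
            they carry the relay (hotspot) burden."""
            d, rho, e = grades(dist_to_bs), grades(density), grades(energy_dev)
            # Each rule fires with strength min(...) and proposes a factor.
            rules = [
                (min(d["low"], rho["high"]), 0.1),  # near BS and dense: tiny
                (min(d["low"], rho["low"]), 0.3),
                (min(d["med"], e["high"]), 0.6),
                (min(d["high"], e["high"]), 1.0),   # far and energy-rich: max
                (min(d["high"], e["low"]), 0.7),
            ]
            total = sum(s for s, _ in rules) or 1e-9
            factor = sum(s * f for s, f in rules) / total
            return r_min + factor * (r_max - r_min)

        print(competition_radius(0.2, 0.8, 0.5))  # near the BS, dense region
        print(competition_radius(0.9, 0.3, 0.9))  # distant node, high energy

    Nodes close to the base station end up with small radii, so the clusters that relay the most traffic stay small, which is the usual way unequal clustering counteracts the hotspot problem.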

  4. High-Performance Region-of-Interest Image Error Concealment with Hiding Technique

    Directory of Open Access Journals (Sweden)

    Shih-Chang Hsia

    2010-01-01

    Full Text Available Region-of-interest (ROI)-based image coding has recently become a popular topic. Since the ROI contains the most important information in an image, it must be protected against erroneous decoding when the bitstream suffers channel loss or an unexpected attack. This paper presents an efficient error concealment method that recovers ROI information with a hiding technique. Based on a progressive transformation, the low-frequency components of the ROI are encoded so as to disperse its information into the high-frequency bands of the original image. Protection is achieved by extracting the ROI coefficients from the damaged image without adding extra information. Simulation results show that the proposed method can efficiently reconstruct the ROI when errors occur in the ROI bit-stream, and in terms of PSNR it outperforms conventional error concealment techniques by 2 to 5 dB.
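
    The paper's codec details are not reproduced in the record; as a sketch of the general mechanism (dispersing a coarse low-frequency description of the ROI into the least-significant bits of the host image's high-frequency DCT coefficients), something like the following illustrates the embed/extract round trip. The block sizes, quantization, and LSB embedding are assumptions for the example.

        import numpy as np
        from scipy.fft import dctn, idctn

        N_COEFF = 64  # an 8x8 block of low-frequency ROI coefficients

        def embed_roi(image, roi):
            """Hide a coarse LF description of the ROI in the LSBs of the
            host image's high-frequency DCT terms."""
            host = dctn(image.astype(float), norm="ortho")
            roi_lf = dctn(roi.astype(float), norm="ortho")[:8, :8]
            payload = np.round(roi_lf).astype(np.int16).ravel()  # coarse quantization
            bits = np.unpackbits(payload.view(np.uint8))
            flat = np.round(host.ravel()).astype(np.int32)
            flat[-bits.size:] = (flat[-bits.size:] & ~1) | bits  # LSB embedding
            return idctn(flat.reshape(host.shape).astype(float), norm="ortho")

        def recover_roi(received, roi_shape):
            """Read the hidden bits back and rebuild a coarse ROI."""
            flat = np.round(dctn(received, norm="ortho").ravel()).astype(np.int32)
            bits = (flat[-N_COEFF * 16:] & 1).astype(np.uint8)
            payload = np.packbits(bits).view(np.int16)
            coeffs = np.zeros(roi_shape)
            coeffs[:8, :8] = payload.reshape(8, 8)
            return idctn(coeffs, norm="ortho")

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, (64, 64)).astype(float)
        roi = img[16:48, 16:48]                  # the region of interest
        stego = embed_roi(img, roi)
        coarse = recover_roi(stego, roi.shape)   # LF-only ROI reconstruction

    If the high-frequency terms survive while the ROI bit-stream is lost, the receiver can still rebuild this low-frequency approximation of the ROI.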

  5. TO THE SOLUTION OF PROBLEMS ABOUT THE RAILWAYS CALCULATION FOR STRENGTH TAKING INTO ACCOUNT UNEQUAL ELASTICITY OF THE SUBRAIL BASE

    Directory of Open Access Journals (Sweden)

    D. M. Kurhan

    2014-11-01

    Full Text Available Purpose. The modulus of elasticity of the subrail base is one of the main characteristics for assessing the stress-strain state of a track. The need to account for unequal elasticity of the subrail base in various cases has been considered repeatedly; however, the published results involve rather complex mathematics, and the solutions obtained do not fit within the standard engineering calculation of railway strength. The purpose of this work is therefore to obtain a solution within that framework. Methodology. It is proposed to model the rail as a beam carrying a distributed load whose profile corresponds to the value of the modulus of elasticity, giving a deflection equivalent to free seating on supports. Findings. A method was obtained for accounting for gradual change of the modulus of elasticity of the subrail base by means of a correcting coefficient in the engineering calculation of track strength. The existing railway strength calculation was extended to account for abrupt change of the modulus of elasticity of the subrail base (for example, at the transition from ballasted track onto a bridge). The variation of the forces acting from the rail on the base, as a function of the distance to the bridge along a ballasted approach section, was obtained. The redistribution of forces after a sudden change in the elastic modulus of the base under the rail explains the formation of vertical irregularities before the bridge. Originality. The engineering method for calculating railway strength was improved to take unequal elasticity of the subrail base into account. Practical value. The obtained results allow engineering calculations to assess the strength of a railway at places of unequal elasticity caused by the track condition or design features. The solution of the inverse task on

  6. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
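
    The LMMSE estimator referred to is the standard form x_hat = C_x (C_x + C_n)^(-1) y. As a small self-contained illustration of the paper's sensitivity question, the sketch below builds a Gauss-Markov (exponential) spatial covariance for DCs along a 1D track, simulates noisy measurements, and sweeps a mismatched correlation distance; all parameter values are invented for the example.

        import numpy as np

        rng = np.random.default_rng(42)

        def exp_cov(positions, corr_dist, sigma2=1.0):
            """Gauss-Markov (exponential) spatial covariance; the decay rate
            is inversely proportional to the correlation distance."""
            d = np.abs(positions[:, None] - positions[None, :])
            return sigma2 * np.exp(-d / corr_dist)

        # Reference stations along a 1D track (km): true DC field + noise.
        pos = np.linspace(0.0, 100.0, 25)
        true_corr_dist = 40.0
        C_x = exp_cov(pos, true_corr_dist)
        x = rng.multivariate_normal(np.zeros(pos.size), C_x)   # true DCs (m)
        noise_var = 0.3 ** 2
        y = x + rng.normal(0.0, np.sqrt(noise_var), pos.size)  # measured DCs

        def lmmse(y, C_x, noise_var):
            C_n = noise_var * np.eye(y.size)
            return C_x @ np.linalg.solve(C_x + C_n, y)

        for assumed in (5.0, 40.0, 200.0):                     # mismatch sweep
            x_hat = lmmse(y, exp_cov(pos, assumed), noise_var)
            print(f"assumed corr dist {assumed:6.1f} km -> "
                  f"RMSE {np.sqrt(np.mean((x_hat - x) ** 2)):.3f} m")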

  7. Cause analysis and preventives for human error events in Daya Bay NPP

    International Nuclear Information System (INIS)

    Huang Weigang; Zhang Li

    1998-01-01

    Daya Bay Nuclear Power Plant was put into commercial operation in 1994. Up to 1996, there were 368 human error events in the operating and maintenance areas, accounting for 39% of total events. These events occurred mainly in the processes of maintenance, testing, equipment isolation, and returning systems to line, in particular during refuelling and maintenance. The authors analyse the root causes of human error events, which are mainly operator omission or error; procedure deficiency; procedure not followed; lack of training; communication failures; and inadequate work management. The protective measures and treatment principles for human error events are also discussed, and several examples of applying them are given. Finally, it is put forward that the key to preventing human error events lies in coordination and management, in the person in charge of the work, and in the good work habits of staff.

  8. Influence of Current Transformer Saturation on Operation of Current Protection

    Directory of Open Access Journals (Sweden)

    F. A. Romaniouk

    2010-01-01

    Full Text Available An analysis of the influence of instrument current transformer errors on the operation of current protection for power supply scheme elements has been carried out in the paper. The paper shows the influence of the aperiodic component of transient current and of the secondary load on current transformer errors. Peculiar operational features of the measuring elements of electromechanical and microprocessor current protections, operating jointly with electromagnetic current transformers, have been analyzed in the paper.

  9. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in a war zone located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using hierarchical task analysis (HTA), and the related possible errors were assessed using ATHEANA (A Technique for Human Event Analysis). Results: The de-mining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions (e.g., outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty has proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error-forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
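
    The letter's forecasting schemes themselves are not described in the record, but the packet combining idea they build on is easy to sketch: XOR two erroneous copies to locate the bit positions where they disagree, then search over those positions until an error-detection check passes. The CRC framing below is an assumption for the example.

        from itertools import product
        import zlib

        def crc_ok(payload_bits, crc):
            data = int("".join(map(str, payload_bits)), 2).to_bytes(
                (len(payload_bits) + 7) // 8, "big")
            return zlib.crc32(data) == crc

        def packet_combine(copy1, copy2, crc):
            """Basic packet combining: bits where the two erroneous copies
            differ are the candidate error positions; try the choices at
            those positions until the CRC check passes."""
            diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
            for choice in product([0, 1], repeat=len(diff)):
                trial = list(copy1)
                for pos, bit in zip(diff, choice):
                    trial[pos] = bit
                if crc_ok(trial, crc):
                    return trial
            return None  # e.g. both copies err in the same bit position

        # Demo: one bit flipped in each copy, at different positions.
        sent = [1, 0, 1, 1, 0, 0, 1, 0]
        crc = zlib.crc32(int("".join(map(str, sent)), 2).to_bytes(1, "big"))
        c1 = sent.copy(); c1[2] ^= 1
        c2 = sent.copy(); c2[5] ^= 1
        print(packet_combine(c1, c2, crc) == sent)  # True

    When both copies err in the same bit position, the XOR shows no difference there, which is exactly failure mode (i) that PRPC addresses.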

  12. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, spectacular events often involve a combination of component failure and human error. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors, as one form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  13. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
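
    As a toy illustration of the two procedures (the linear model, sensitivities, and run counts here are invented, not from the paper), consider an observable depending linearly on three systematic parameters: the unisim estimate sums squared one-at-a-time shifts, while the multisim estimate is the empirical variance over runs with all parameters drawn at random.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy linear model: observable = sum_j c_j * s_j + MC statistical noise.
        c = np.array([0.8, -0.5, 0.3])  # sensitivities to 3 systematic params
        mc_stat = 0.2                    # statistical error of one MC run

        def mc_run(s):
            return c @ s + rng.normal(0.0, mc_stat)

        # Unisim: vary one parameter at a time by one standard deviation.
        baseline = mc_run(np.zeros(3))
        shifts = np.array([mc_run(np.eye(3)[j]) - baseline for j in range(3)])
        var_unisim = np.sum(shifts ** 2)

        # Multisim: every run draws all parameters from their distributions.
        # (The sample variance also absorbs the MC statistical variance.)
        samples = np.array([mc_run(rng.normal(size=3)) for _ in range(500)])
        var_multisim = samples.var(ddof=1)

        print("true systematic variance:", np.sum(c ** 2))
        print("unisim estimate:  ", var_unisim)
        print("multisim estimate:", var_multisim)

    With the per-run statistical error comparable to an individual systematic shift, each unisim difference is visibly contaminated, which is one way to see why the multisim approach fares better in that regime.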

  15. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  16. Unequal Gain of Equal Resources across Racial Groups

    Directory of Open Access Journals (Sweden)

    Shervin Assari

    2018-01-01

    Full Text Available The health effects of economic resources (eg, education, employment, and living place and psychological assets (eg, self-efficacy, perceived control over life, anger control, and emotions are well-known. This article summarizes the results of a growing body of evidence documenting Blacks’ diminished return, defined as a systematically smaller health gain from economic resources and psychological assets for Blacks in comparison to Whites. Due to structural barriers that Blacks face in their daily lives, the very same resources and assets generate smaller health gain for Blacks compared to Whites. Even in the presence of equal access to resources and assets, such unequal health gain constantly generates a racial health gap between Blacks and Whites in the United States. In this paper, a number of public policies are recommended based on these findings. First and foremost, public policies should not merely focus on equalizing access to resources and assets, but also reduce the societal and structural barriers that hinder Blacks. Policy solutions should aim to reduce various manifestations of structural racism including but not limited to differential pay, residential segregation, lower quality of education, and crime in Black and urban communities. As income was not found to follow the same pattern demonstrated for other resources and assets (ie, income generated similar decline in risk of mortality for Whites and Blacks, policies that enforce equal income and increase minimum wage for marginalized populations are essential. Improving quality of education of youth and employability of young adults will enable Blacks to compete for high paying jobs. Policies that reduce racism and discrimination in the labor market are also needed. Without such policies, it will be very difficult, if not impossible, to eliminate the sustained racial health gap in the United States.

  17. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  18. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  19. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the last applying when x ≥ 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
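
    The exact rational functions used by the routine are not given in the record; as an illustration of the method it describes, the sketch below uses the well-known Abramowitz and Stegun 7.1.26 rational approximation (absolute error below about 1.5e-7) together with the stated identities.

        import math

        # Coefficients of the Abramowitz & Stegun 7.1.26 approximation.
        P = 0.3275911
        A = (0.254829592, -0.284496736, 1.421413741,
             -1.453152027, 1.061405429)

        def erf(x):
            """Error function via a rational approximation (|err| < 1.5e-7)."""
            sign = 1.0 if x >= 0.0 else -1.0
            x = abs(x)
            t = 1.0 / (1.0 + P * x)
            poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
            return sign * (1.0 - poly * math.exp(-x * x))

        def erfc(x):
            """Complementary error function via erfc(x) = 1 - erf(x).
            (As the record warns, for large x erfc should be computed
            directly to avoid cancellation; this sketch skips that.)"""
            return 1.0 - erf(x)

        for v in (-2.0, 0.0, 0.5, 2.0):
            print(v, erf(v), math.erf(v))  # compare against the library value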

  20. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  1. Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction

    Directory of Open Access Journals (Sweden)

    Chi-Huang Shih

    2012-01-01

    Full Text Available With the wide deployment of Internet Protocol (IP) infrastructure and the rapid development of digital technologies, Internet Protocol Television (IPTV) has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC) to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure secure and reliable transmission of IPTV multimedia content and services. The proposed secure FEC utilizes characteristics of FEC, including the FEC-encoded redundancy and the limited error-correction capacity, to protect the multimedia packets against malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme obtains similar performance compared with a joint encryption-and-FEC scheme.
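
    The authors' construction intertwines protection with the FEC itself; as background for how FEC-encoded redundancy recovers lost packets at all, here is a minimal single-parity (XOR) erasure code over a block of packets. This is the textbook scheme, not the paper's.

        from functools import reduce

        def xor_bytes(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        def encode_block(packets):
            """Append one parity packet: the XOR of the k data packets.
            Any single lost packet in the block can then be rebuilt."""
            parity = reduce(xor_bytes, packets)
            return packets + [parity]

        def recover_block(received):
            """received: list of k+1 packets, at most one entry set to None."""
            lost = [i for i, p in enumerate(received) if p is None]
            if not lost:
                return received[:-1]
            if len(lost) > 1:
                raise ValueError("single-parity FEC corrects only one erasure")
            rebuilt = reduce(xor_bytes, (p for p in received if p is not None))
            received[lost[0]] = rebuilt
            return received[:-1]

        data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
        block = encode_block(data)
        block[2] = None                      # simulate one packet loss
        print(recover_block(block) == data)  # True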

  2. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  3. Temporal clumping of prey and coexistence of unequal interferers: experiments on social forager groups of brown trout feeding on invertebrate drift

    DEFF Research Database (Denmark)

    Jonsson, Mikael; Skov, Christian; Koed, Anders

    2008-01-01

    Environmental fluctuations have been proposed to enhance the coexistence of competing phenotypes. Evaluations are presented here of the effects of prey density and short-term temporal clumping of prey availability on the relative foraging success of unequal interferers in social forager groups of juvenile brown trout Salmo trutta feeding on drifting invertebrate prey (frozen chironomids). Groups of three trout with established linear dominance hierarchies (dominant, intermediate and subordinate) were subjected to three different total numbers of prey, combined with three different levels

  4. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  5. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  6. Photodouble ionization studies of the Ne(2s{sup 2}) state under unequal energy sharing conditions

    Energy Technology Data Exchange (ETDEWEB)

    Bolognesi, P [CNR-IMIP, Area della Ricerca di Roma 1, Monterotondo Scalo, Rome (Italy); Kheifets, A [Research School of Physical Sciences and Engineering, Australian National University, Canberra (Australia); Otranto, S [Physics Department, University of Missouri-Rolla, Rolla MO (United States); CONICET and Depto. de Fisica, Universidad Nacional del Sur, 8000 Bahia Blanca (Argentina); Coreno, M [CNR-IMIP, Area della Ricerca di Roma 1, Monterotondo Scalo, Rome (Italy); CNR-TASC, Gas Phase Photoemission Beamline at Elettra, Area Science Park, Trieste (Italy); Feyer, V [CNR-IMIP, Area della Ricerca di Roma 1, Monterotondo Scalo, Rome (Italy); Institute of Electron Physics, National Academy of Sciences, Uzhgorod (Ukraine); Colavecchia, F D [CONICET and Centro Atomico Bariloche, 8400 SC de Bariloche (Argentina); Garibotti, C R [CONICET and Centro Atomico Bariloche, 8400 SC de Bariloche (Argentina); Avaldi, L [CNR-IMIP, Area della Ricerca di Roma 1, Monterotondo Scalo, Rome (Italy); CNR-TASC, Gas Phase Photoemission Beamline at Elettra, Area Science Park, Trieste (Italy)

    2006-04-28

    The triple differential cross section (TDCS) of the He²⁺(1s⁻²) and Ne²⁺(2s⁻²) states has been studied under unequal energy sharing conditions and perpendicular geometry, for a ratio of about 3 between the energies of the two ejected electrons. The dynamical quantities which govern the photodouble ionization (PDI) process, i.e. the squared moduli of the gerade and ungerade complex amplitudes and the cosine of their relative phase, have been extracted from the experimental data. The results from the two targets have been compared between themselves as well as with the theoretical predictions of the SC3 and convergent close coupling (CCC) calculations. This work represents a joint experimental and theoretical approach to the investigation of PDI of atomic systems with more than two electrons.

  7. Design of nanophotonic circuits for autonomous subsystem quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2011-05-15

    We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.

  8. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
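
    The contrast between the two update rules is compact in code. The sketch below implements a Rescorla-Wagner-style TER rule next to the LER alternative, in which each cue is trained on its own prediction error; the learning rate and trial structure are invented for the illustration.

        import numpy as np

        def train(cues, outcomes, rule="TER", lr=0.1, n_cues=2):
            """cues: 0/1 vectors (which cues are present on each trial);
            outcomes: outcome values. Returns the learned weights."""
            w = np.zeros(n_cues)
            for x, r in zip(cues, outcomes):
                x = np.asarray(x, dtype=float)
                if rule == "TER":
                    # total error: outcome minus the whole compound's prediction
                    delta = r - w @ x
                    w += lr * delta * x
                else:  # LER
                    # local error: each present cue trained on its own prediction
                    w += lr * (r - w) * x
            return w

        # Overshadowing-style demo: cues A and B always co-occur with reward.
        trials = [([1, 1], 1.0)] * 100
        cues, outcomes = zip(*trials)
        print("TER:", train(cues, outcomes, "TER"))  # weights share the outcome
        print("LER:", train(cues, outcomes, "LER"))  # each weight nears 1.0

    On compound trials the TER weights split the outcome between the cues, while the LER weights each approach the outcome value, which is the kind of divergent prediction the model comparison exploits.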

  9. Visual impairment attributable to uncorrected refractive error and other causes in the Ghanaian youth: The University of Cape Coast Survey.

    Science.gov (United States)

    Abokyi, Samuel; Ilechie, Alex; Nsiah, Peter; Darko-Takyi, Charles; Abu, Emmanuel Kwasi; Osei-Akoto, Yaw Jnr; Youfegan-Baanam, Mathurin

    2016-01-01

    To determine the prevalence of visual impairment attributable to refractive error and other causes in a youthful Ghanaian population. A prospective survey of all consecutive visits by first-year tertiary students to the Optometry clinic between August 2013 and April 2014. Of the 4378 first-year students aged 16-39 years enumerated, 3437 (78.5%) underwent the eye examination. The examination protocol included presenting visual acuity (PVA), ocular motility, and slit-lamp examination of the external eye, anterior segment and media, and non-dilated fundus examination. Pinhole acuity and fundus examination were performed when the PVA was ≤6/12 in one or both eyes to determine the principal cause of the vision loss. The mean age of participants was 21.86 years (95% CI: 21.72-21.99). The prevalence of bilateral visual impairment (BVI; PVA in the better eye ≤6/12) and unilateral visual impairment (UVI; PVA in the worse eye ≤6/12) were 3.08% (95% CI: 2.56-3.72) and 0.79% (95% CI: 0.54-1.14), respectively. Among 106 participants with BVI, refractive error (96.2%) and corneal opacity (3.8%) were the causes. Of the 27 participants with UVI, refractive error (44.4%), maculopathy (18.5%), and retinal disease (14.8%) were the major causes. There was an unequal distribution of BVI across the age groups, with those above 20 years having a lesser burden. Eye screening and provision of affordable spectacle correction to the youth could be timely measures to eliminate visual impairment. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.

  10. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics, despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the error reports retrieved, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  11. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  12. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  13. Organizational factors and reoccurrence protection on the JCO nuclear critical accident

    International Nuclear Information System (INIS)

    Takano, Kenichi

    2000-01-01

    The nuclear criticality accident that occurred at a nuclear fuel conversion plant in Tokai-mura in September 1999 gradually became understood not as a simple human error at the level of the workers, but as an organizational error or accident related to various organizational factors. As a nuclear power facility fundamentally adopts a defence-in-depth system, a serious, dangerous accident would not arise from a single equipment trouble or human error alone, unless several factors overlapped. Reviewing recent serious accidents and troubles, all of them seem to share the keyword 'organizational factor'. In the JCO accident, there are organizational factors such as a climate of deviating from the manual, insufficient and loose checks on changes of procedure, reduction of operators for reasons of profit priority, an attitude of prioritizing working efficiency, and so forth, which are partially common with the Chernobyl accident. Recently, accidents and troubles that cannot be attributed to a simple human error by an individual, and must instead be called organizational errors, have increased. This trend seems to depend not only on the increasing complexity and scale of technological systems but also on gradual changes in the social and management systems operating them. Therefore, it seems necessary to introduce the concept of defence-in-depth (multiple protection) in order to maintain reliability and safety as systems become more complex and larger. (G.K.)

  14. Unequal Gain of Equal Resources across Racial Groups.

    Science.gov (United States)

    Assari, Shervin

    2017-08-05

    The health effects of economic resources (eg, education, employment, and living place) and psychological assets (eg, self-efficacy, perceived control over life, anger control, and emotions) are well-known. This article summarizes the results of a growing body of evidence documenting Blacks' diminished return, defined as a systematically smaller health gain from economic resources and psychological assets for Blacks in comparison to Whites. Due to structural barriers that Blacks face in their daily lives, the very same resources and assets generate smaller health gain for Blacks compared to Whites. Even in the presence of equal access to resources and assets, such unequal health gain constantly generates a racial health gap between Blacks and Whites in the United States. In this paper, a number of public policies are recommended based on these findings. First and foremost, public policies should not merely focus on equalizing access to resources and assets, but also reduce the societal and structural barriers that hinder Blacks. Policy solutions should aim to reduce various manifestations of structural racism including but not limited to differential pay, residential segregation, lower quality of education, and crime in Black and urban communities. As income was not found to follow the same pattern demonstrated for other resources and assets (ie, income generated a similar decline in risk of mortality for Whites and Blacks), policies that enforce equal income and increase minimum wage for marginalized populations are essential. Improving the quality of education of youth and the employability of young adults will enable Blacks to compete for high-paying jobs. Policies that reduce racism and discrimination in the labor market are also needed. Without such policies, it will be very difficult, if not impossible, to eliminate the sustained racial health gap in the United States. © 2018 The Author(s); Published by Kerman University of Medical Sciences.

  15. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  16. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  17. Iterative channel decoding of FEC-based multiple-description codes.

    Science.gov (United States)

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
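
    The parity-allocation question at the end of this record can be made concrete with a short calculation. The sketch below is not the authors' algorithm; it only assumes an (n, k) Reed-Solomon erasure code striped across n packets with independent packet losses, so a description layer is recoverable whenever the number of lost packets does not exceed its parity budget.

```python
from math import comb

def layer_recovery_prob(n_packets: int, n_parity: int, p_loss: float) -> float:
    """P(layer decodable): an (n, k) RS erasure code striped over n packets
    tolerates up to n - k = n_parity packet erasures."""
    return sum(comb(n_packets, e) * p_loss**e * (1 - p_loss)**(n_packets - e)
               for e in range(n_parity + 1))

# Unequal error protection: more parity symbols for the more important layers.
n = 32                       # packets per block (illustrative)
parity = [10, 6, 3]          # layer 0 is the most important
for i, t in enumerate(parity):
    print(f"layer {i}: P(recovered) = {layer_recovery_prob(n, t, 0.1):.4f}")
```

    Shifting parity from less important to more important layers steepens the reliability gradient across layers at a fixed total rate, which is the trade-off an optimal UEP allocation resolves.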

  18. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    We examine factors affecting auditors' willingness to report their own or their peers' self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process.

  19. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.

  20. Investigation of Unequal Planar Wireless Electricity Device for Efficient Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    M. H. Mohd Salleh

    2017-04-01

    Full Text Available This article focuses on the design and investigation of a pair of unequally sized wireless electricity (Witricity) devices that are equipped with integrated planar coil strips. The proposed pair of devices consists of two different square-shaped resonator sizes of 120 mm × 120 mm and 80 mm × 80 mm, acting as a transmitter and receiver, respectively. The devices are designed, simulated and optimized using the CST Microwave Studio software prior to being fabricated and verified using a vector network analyzer (VNA). The surface current results of the coupled devices indicate a high current density at separation distances of 10 mm to 30 mm; that is, the devices' surfaces carry more electric current per unit area, which leads to good performance up to a 30 mm range. Accordingly, the results also reveal good coupling efficiency between the coupled devices, approximately 54.5% at up to a 30 mm distance with both devices axially aligned. In addition, a coupling efficiency of 50% is achieved when a maximum lateral misalignment (LM) of 10 mm and a varied angular misalignment (AM) from 0° to 40° are applied to the proposed devices.

  1. Software for radiation protection

    International Nuclear Information System (INIS)

    Graffunder, H.

    2002-01-01

    The software products presented are universally usable programs for radiation protection. The systems were designed to establish a comprehensive database specific to radiation protection and, on this basis, to model radiation protection subjects in programs. Development initially focused on the creation of the database: each software product was to access the same nuclide-specific data, so that input errors and differences in spelling were excluded from the outset. This makes the products more compatible with each other and allows them to exchange data among each other. The software products are modular in design. Functions recurring in radiation protection are always treated the same way in different programs and are also represented the same way on the program surface. The recognition effect makes it easy for users to become familiar with the products quickly. All software products are written in German and are tailored to the administrative needs and codes and regulations in Germany and Switzerland. (orig.) [de

  2. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
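
    As an illustration of the failure-aware life cycle argued for here, the sketch below maps raw transfer error messages onto retry decisions. The categories, matching patterns, and backoff policy are illustrative assumptions, not the paper's classifier.

```python
# Hypothetical error taxonomy for a transfer scheduler; the categories and
# matching rules below are illustrative assumptions, not the paper's design.
TRANSIENT = ("connection reset", "timeout", "temporarily unavailable")
FATAL = ("no such file", "permission denied", "authentication failed")

def classify(error_message: str) -> str:
    msg = error_message.lower()
    if any(pat in msg for pat in FATAL):
        return "fatal"          # re-plan the transfer; retrying will not help
    if any(pat in msg for pat in TRANSIENT):
        return "transient"      # retry, possibly with backoff
    return "unknown"            # escalate for manual inspection

def retry_delay(error_message: str, attempt: int, max_retries: int = 5):
    """Return seconds to wait before retrying, or None to give up."""
    if classify(error_message) == "transient" and attempt < max_retries:
        return min(2 ** attempt, 60)    # capped exponential backoff
    return None                         # report upstream / reschedule

print(retry_delay("Connection reset by peer", attempt=2))   # -> 4
```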

  3. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval, and a contribution due to unknown systematic errors.

  4. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
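
    The abstract's caveat can be illustrated with the usual large-sample approximation for a noninferiority comparison of two negative binomial rates. The function below is a generic textbook-style approximation, with a common follow-up time per group, rather than the authors' exact formulae; all names are ours.

```python
from math import log
from statistics import NormalDist

def nb_noninferiority_n(r1, r2, k1, k2, t1, t2, margin,
                        alpha=0.025, power=0.9) -> float:
    """Approximate per-group size for a noninferiority test of the rate
    ratio r1/r2 against `margin` on the log scale. V_i = 1/(r_i*t_i) + k_i
    is the usual large-sample variance term for a log NB rate with
    dispersion k_i and a *common* follow-up time t_i per subject. The
    article's point: plugging in the mean follow-up when follow-up varies
    between subjects can underestimate the required size."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    v = (1 / (r1 * t1) + k1) + (1 / (r2 * t2) + k2)
    return z**2 * v / (log(r1 / r2) - log(margin))**2

# Example: equal true rates, dispersion 0.5, 1 year of follow-up, margin 1.25.
print(nb_noninferiority_n(r1=0.8, r2=0.8, k1=0.5, k2=0.5,
                          t1=1.0, t2=1.0, margin=1.25))   # ~739 per group
```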

  5. [Individual prevention of occupational contact dermatitis: protective gloves and skin protection recommendations as part of the patient management scheme by the public statutory employers' liability insurance].

    Science.gov (United States)

    Wilke, A; Skudlik, C; Sonsmann, F K

    2018-05-02

    The dermatologist's procedure is a pivotal tool for early recognition of occupational contact dermatitis (OCD), for reporting OCD cases to the statutory accident insurance and for treating the disease. The employer is in charge of implementing skin protection measures at the workplace. However, as part of an individual prevention approach it may be necessary to propose targeted skin protection recommendations for specific patients. The patient's own skin protection behavior significantly contributes to regenerating and maintaining healthy skin. This behavior includes the use of occupational skin products, and in particular the correct use of appropriately selected protective gloves. Protective gloves are the most important personal protective measure in the prevention of OCD. Prevention services, occupational health and safety specialists, occupational physicians and centers specialized in occupational dermatology can support the identification of suitable protective measures. Nowadays, suitable protective gloves exist for (almost) every occupational activity and exposure. However, improper use in practice (e.g., incorrectly worn gloves) can itself become a risk factor for the skin. Therefore, it is of utmost importance to identify application errors, to educate patients in terms of skin protection and to motivate them to adopt appropriate skin protection behavior. With particular focus on protective gloves, this article gives an overview of various types, materials and potential glove-related allergens, presents strategies for reducing occlusion effects and discusses some typical application errors and solutions.

  6. Income inequality and status seeking: searching for positional goods in unequal U.S. States.

    Science.gov (United States)

    Walasek, Lukasz; Brown, Gordon D A

    2015-04-01

    It is well established that income inequality is associated with lower societal well-being, but the psychosocial causes of this relationship are poorly understood. A social-rank hypothesis predicts that members of unequal societies are likely to devote more of their resources to status-seeking behaviors such as acquiring positional goods. We used Google Correlate to find search terms that correlated with our measure of income inequality, and we controlled for income and other socioeconomic factors. We found that of the 40 search terms used more frequently in states with greater income inequality, more than 70% were classified as referring to status goods (e.g., designer brands, expensive jewelry, and luxury clothing). In contrast, 0% of the 40 search terms used more frequently in states with less income inequality were classified as referring to status goods. Finally, we showed how residual-based analysis offers a new methodology for using Google Correlate to provide insights into societal attitudes and motivations while avoiding confounds and high risks of spurious correlations. © The Author(s) 2015.

  7. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

    The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors and the lack of feedback to the prescribers involved.

  8. Dual-mode nonlinear instability analysis of a confined planar liquid sheet sandwiched between two gas streams of unequal velocities and prediction of droplet size and velocity distribution using maximum entropy formulation

    Science.gov (United States)

    Dasgupta, Debayan; Nath, Sujit; Bhanja, Dipankar

    2018-04-01

    Twin fluid atomizers utilize the kinetic energy of high speed gases to disintegrate a liquid sheet into fine uniform droplets. Quite often, the gas streams are injected at unequal velocities to enhance the aerodynamic interaction between the liquid sheet and surrounding atmosphere. In order to improve the mixing characteristics, practical atomizers confine the gas flows within ducts. Though the liquid sheet coming out of an injector is usually annular in shape, it can be considered to be planar as the mean radius of curvature is much larger than the sheet thickness. There are numerous studies on breakup of the planar liquid sheet, but none of them considered the simultaneous effects of confinement and unequal gas velocities on the spray characteristics. The present study performs a nonlinear temporal analysis of instabilities in the planar liquid sheet, produced by two co-flowing gas streams moving with unequal velocities within two solid walls. The results show that the para-sinuous mode dominates the breakup process at all flow conditions over the para-varicose mode of breakup. The sheet pattern is strongly influenced by gas velocities, particularly for the para-varicose mode. Spray characteristics are influenced by both gas velocity and proximity to the confining wall, but the former has a much more pronounced effect on droplet size. An increase in the difference between gas velocities at two interfaces drastically shifts the droplet size distribution toward finer droplets. Moreover, asymmetry in gas phase velocities affects the droplet velocity distribution more, only at low liquid Weber numbers for the input conditions chosen in the present study.

  9. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.
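
    Once pre-travel has been measured per approach direction, the systematic share can be compensated by simple interpolation. The sketch below is a generic illustration with made-up calibration values, not data from the probes examined in the paper.

```python
import numpy as np

# Illustrative calibration: systematic pre-travel (mm) measured at a few
# approach directions (degrees). The values are made up for this sketch;
# the table wraps around so interpolation is valid for any angle.
cal_angles = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
cal_pretravel = np.array([0.012, 0.007, 0.011, 0.008, 0.012])

def compensate(raw_value_mm: float, approach_angle_deg: float) -> float:
    """Remove the direction-dependent systematic error by subtracting the
    interpolated pre-travel for this approach direction. The random share
    (unidirectional repeatability) remains after compensation."""
    pt = np.interp(approach_angle_deg % 360.0, cal_angles, cal_pretravel)
    return raw_value_mm - pt

print(compensate(25.010, approach_angle_deg=45.0))  # midway between 0 and 90 deg
```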

  10. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ
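
    The reported association is essentially a regression of real-world confusion rates on laboratory error rates. The sketch below shows the shape of such an analysis with invented numbers; the paper's model specification and data are not reproduced here.

```python
import numpy as np

# Invented per-drug-pair error rates from a laboratory battery (visual,
# auditory, memory tests) and corresponding real-world confusion rates.
lab = np.array([[0.12, 0.08, 0.20],
                [0.05, 0.03, 0.09],
                [0.22, 0.15, 0.31],
                [0.09, 0.06, 0.14],
                [0.17, 0.11, 0.25],
                [0.03, 0.02, 0.06]])
real = np.array([3.1, 0.9, 5.2, 1.8, 4.0, 0.5]) * 1e-3

# Ordinary least squares with an intercept, as a stand-in for the paper's
# "best-fitting model" (its actual form is not given in the abstract).
X = np.column_stack([np.ones(len(lab)), lab])
beta, *_ = np.linalg.lstsq(X, real, rcond=None)
pred = X @ beta
r2 = 1 - ((real - pred) ** 2).sum() / ((real - real.mean()) ** 2).sum()
print(f"in-sample R^2 = {r2:.2f}")   # cf. the 37-45% of variance reported
```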

  11. A concatenated coding scheme for biometric template protection

    NARCIS (Netherlands)

    Shao, X.; Xu, H.; Veldhuis, Raymond N.J.; Slump, Cornelis H.

    2012-01-01

    Cryptography may mitigate the privacy problem in biometric recognition systems. However, cryptographic technologies lack error-tolerance, and biometric samples cannot be reproduced exactly, raising the robustness problem. The biometric template protection system therefore needs a good feature extraction method.

  12. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  13. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  14. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown, and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm.

  15. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred, in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development.

  16. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.

  17. Fatigue proofing: The role of protective behaviours in mediating fatigue-related risk in a defence aviation environment.

    Science.gov (United States)

    Dawson, Drew; Cleggett, Courtney; Thompson, Kirrilly; Thomas, Matthew J W

    2017-02-01

    In the military or emergency services, operational requirements and/or community expectations often preclude formal prescriptive working time arrangements as a practical means of reducing fatigue-related risk. In these environments, workers sometimes employ adaptive or protective behaviours informally to reduce the risk (i.e. likelihood or consequence) associated with a fatigue-related error. These informal behaviours enable employees to reduce risk while continuing to work when fatigued. In this study, we documented the use of informal protective behaviours in a group of defence aviation personnel including flight crews. Semi-structured interviews were conducted to determine whether and which protective behaviours were used to mitigate fatigue-related error. The 18 participants were from aviation-specific trades and included aircrew (pilots and air-crewmen) and aviation maintenance personnel (aeronautical engineers and maintenance personnel). Participants identified 147 ways in which they and/or others act to reduce the likelihood or consequence of a fatigue-related error. These formed seven categories of fatigue-reduction strategies. The two most novel categories are discussed in this paper: task-related and behaviour-based strategies. Broadly speaking, these results indicate that fatigued military flight and maintenance crews use protective 'fatigue-proofing' behaviours to reduce the likelihood and/or consequence of fatigue-related error and were aware of the potential benefits. It is also important to note that these behaviours are not typically part of the formal safety management system. Rather, they have evolved spontaneously as part of the culture around protecting team performance under adverse operating conditions. When compared with previous similar studies, aviation personnel were more readily able to understand the idea of fatigue proofing than those from a fire-fighting background. These differences were thought to reflect different cultural contexts.

  18. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.
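
    The error taxonomy used here lends itself to a simple scoring function. The sketch below classifies a whole-report trial (three-letter stimulus, three-letter response) under assumed scoring rules; the experiment's exact criteria may differ.

```python
def classify_error(stimulus: str, response: str) -> str:
    """Classify the report of the middle (target) letter of a three-letter
    string, following the taxonomy in the abstract. Scoring rules are
    illustrative assumptions."""
    target, flankers = stimulus[1], {stimulus[0], stimulus[2]}
    if response[1] == target:
        return "correct"
    if target in (response[0], response[2]):
        return "target misplacement"    # target recognized, wrong location
    if response[1] in flankers:
        return "flanker substitution"   # a flanker reported as the target
    return "recognition error"          # target letter not recognized at all

print(classify_error("ABC", "BAC"))     # -> target misplacement
print(classify_error("ABC", "AAC"))     # -> flanker substitution
```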

  19. Margin benefit assessment of the YGN 3 cycle 1 fxy error files for COLSS and CPC overall uncertainty analyses

    International Nuclear Information System (INIS)

    Yoon, Rae Young; In, Wang Kee; Auh, Geun Sun; Kim, Hee Cheol; Lee, Sang Keun

    1994-01-01

    Margin benefits are quantitatively assessed for the Yonggwang Unit 3 (YGN 3) Cycle 1 planar radial peaking factor (Fxy) error files for each time-in-life, i.e., BOC, IOC, MOC and EOC. The generic Fxy error file (FXYMEQO) is presently used for the Yonggwang Unit 3 Cycle 1 COLSS (Core Operating Limit Supervisory System) and CPC (Core Protection Calculator) Overall Uncertainty Analyses (OUA). However, because this file is more conservative than the plant/cycle-specific Fxy error files, the COLSS and CPC thermal margins (DNB-OPM) for the generic Fxy error file are smaller than those of the plant/cycle-specific Fxy error files. Therefore, the YGN 3 Cycle 1 Fxy error files were generated and analyzed by the modified codes for Yonggwang plants. The YGN 3 Cycle 1 Fxy error files increased the thermal margin by about 1% for both COLSS and CPC.

  20. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  1. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  2. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  3. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
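
    Data hiding of this kind is commonly done in the least-significant bits of quantized DCT coefficients. The sketch below is a generic illustration in that spirit, not the paper's MPEG-2 embedding scheme; it skips coefficients of magnitude 0 or 1 so that embedding never creates or destroys a coefficient the extractor relies on.

```python
import numpy as np

def embed_bits(qdct: np.ndarray, bits) -> np.ndarray:
    """Hide payload bits in the magnitude LSBs of quantized AC coefficients
    with |coef| > 1. `qdct` is an 8x8 block of integer quantized DCT values."""
    out = qdct.copy().ravel()
    slots = [i for i in range(1, 64) if abs(int(out[i])) > 1]   # index 0 = DC
    for i, bit in zip(slots, bits):
        m = (abs(int(out[i])) & ~1) | bit        # overwrite magnitude LSB
        out[i] = m if out[i] > 0 else -m         # keep the sign unchanged
    return out.reshape(8, 8)

def extract_bits(qdct: np.ndarray, n_bits: int):
    coefs = qdct.ravel()
    slots = [i for i in range(1, 64) if abs(int(coefs[i])) > 1]
    return [abs(int(coefs[i])) & 1 for i in slots[:n_bits]]

block = np.zeros((8, 8), dtype=int)
block[0, 1], block[1, 0], block[1, 1] = 7, -4, 2
stego = embed_bits(block, [1, 0, 1])
assert extract_bits(stego, 3) == [1, 0, 1]
```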

  4. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  5. Unequal-thickness billet optimization in transitional region during isothermal local loading forming of Ti-alloy rib-web component using response surface method

    Directory of Open Access Journals (Sweden)

    Ke WEI

    2018-04-01

    Full Text Available Avoiding the folding defect and improving the die filling capability in the transitional region are desired in isothermal local loading forming of a large-scale Ti-alloy rib-web component (LTRC. To achieve a high-precision LTRC, the folding evolution and die filling process in the transitional region were investigated by 3D finite element simulation and experiment using an equal-thickness billet (ETB. It is found that the initial volume distribution in the second-loading region can greatly affect the amount of material transferred into the first-loading region during the second-loading step, and thus lead to the folding defect. Besides, an improper initial volume distribution results in non-concurrent die filling in the cavities of ribs after the second-loading step, and then causes die underfilling. To this end, an unequal-thickness billet (UTB was employed with the initial volume distribution optimized by the response surface method (RSM. For a certain eigenstructure, the critical value of the percentage of transferred material determined by the ETB was taken as a constraint condition for avoiding the folding defect in the UTB optimization process, and the die underfilling rate was considered as the optimization objective. Then, based on the RSM models of the percentage of transferred material and the die underfilling rate, non-folding parameter combinations and optimum die filling were achieved. Lastly, an optimized UTB was obtained and verified by the simulation and experiment. Keywords: Die filling, Folding defect, Isothermal local loading forming, Transitional region, Unequal-thickness billet optimization
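
    Response surface optimization of the billet follows a standard recipe: fit a second-order polynomial to a small set of simulated cases, then search the fitted surface subject to the folding constraint. The sketch below uses invented design points and a placeholder constraint; the paper's actual RSM models for the transferred-material percentage and the die-underfilling rate are not reproduced.

```python
import numpy as np
from itertools import combinations_with_replacement

def quad_features(X: np.ndarray) -> np.ndarray:
    """Second-order RSM design matrix: 1, x_i, and all x_i * x_j terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Invented design: two billet-thickness variables -> underfilling rate (%).
X = np.array([[0.8, 1.0], [1.0, 1.0], [1.2, 1.0], [0.8, 1.2], [1.0, 1.2],
              [1.2, 1.2], [0.8, 1.4], [1.0, 1.4], [1.2, 1.4]])
y = np.array([4.1, 2.9, 3.5, 2.7, 1.6, 2.2, 3.3, 2.4, 3.0])
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def folding_ok(x) -> bool:
    """Placeholder for the constraint that the transferred-material
    percentage stays below its critical value (a second RSM model)."""
    return True

grid = [(a, b) for a in np.linspace(0.8, 1.2, 41)
               for b in np.linspace(1.0, 1.4, 41) if folding_ok((a, b))]
best = min(grid, key=lambda x: (quad_features(np.array([x])) @ beta).item())
print("optimum thickness variables:", best)
```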

  6. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  7. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
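
    For several independent measured values, first-order propagation gives var(f) ≈ Σᵢ (∂f/∂xᵢ)² σᵢ², which covers the single-value, several-value, and materials-balance cases alike. Below is a minimal numerical sketch of our own, with an invented materials-balance example; it is not the chapter's worked example.

```python
import numpy as np

def propagate(f, x, sigma, eps=1e-6) -> float:
    """First-order error propagation for independent measured values:
    var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2, with the partial derivatives
    estimated by central differences."""
    x = np.asarray(x, dtype=float)
    grads = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return float(np.sqrt(np.sum((grads * np.asarray(sigma)) ** 2)))

# Invented materials balance: B = receipts - shipments - ending inventory.
balance = lambda v: v[0] - v[1] - v[2]
print(propagate(balance, x=[100.0, 60.0, 35.0], sigma=[0.5, 0.4, 0.3]))
# -> ~0.707, i.e. sqrt(0.5^2 + 0.4^2 + 0.3^2)
```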

  8. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...
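
    A minimal sketch of the two prediction stages, assuming a simple mean-of-neighbours predictor (the paper's predictor and its sorting technique are not reproduced): the first pass turns pixels into prediction errors (PEs), the second predicts each PE from neighbouring PEs, and the residual is the PPE whose sharply peaked histogram carries the payload.

```python
import numpy as np

def neighbour_prediction(a: np.ndarray) -> np.ndarray:
    """Predict every entry as the mean of its top and left neighbours."""
    return (np.roll(a, 1, axis=0).astype(int)
            + np.roll(a, 1, axis=1).astype(int)) // 2

def ppe(img: np.ndarray) -> np.ndarray:
    pe = img.astype(int) - neighbour_prediction(img)   # stage 1: PE
    out = pe - neighbour_prediction(pe)                # stage 2: PPE
    out[0, :] = 0                                      # first row/column have
    out[:, 0] = 0                                      # no causal neighbours
    return out

# A smooth test image gives the sharply peaked PPE histogram that reversible
# embedding (histogram shifting, omitted here) exploits.
img = (np.add.outer(np.arange(64), np.arange(64)) % 256).astype(np.uint8)
print("PPE values within +/-2 of zero:", int((np.abs(ppe(img)) <= 2).sum()))
```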

  9. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  10. Coherence protection by random coding

    International Nuclear Information System (INIS)

    Brion, E; Akulin, V M; Dumer, I; Harel, G; Kurizki, G

    2005-01-01

    We show that the multidimensional Zeno effect combined with non-holonomic control allows one to efficiently protect quantum systems from decoherence by a method similar to classical random coding. The method is applicable to arbitrary error-inducing Hamiltonians and general quantum systems. The quantum encoding approaches the Hamming upper bound for large dimension increases. Applicability of the method is demonstrated with a seven-qubit toy computer

  11. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
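
    The "three-bit repetition error-correction sequences" referred to here are the simplest classical code: each bit is sent three times and a majority vote corrects any single bit flip per block. A minimal sketch:

```python
def encode(bits, n=3):
    """n-fold repetition code; n = 3 tolerates one bit flip per block."""
    return [b for b in bits for _ in range(n)]

def decode(coded, n=3):
    """Majority vote over each n-bit block."""
    return [1 if sum(coded[i:i + n]) > n // 2 else 0
            for i in range(0, len(coded), n)]

msg = [1, 0, 1]
sent = encode(msg)            # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                   # a single bit flip in the middle block
assert decode(sent) == msg    # the flip is corrected by majority vote
```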

  12. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  13. Deviating measurements in radiation protection. Legal assessment of deviations in radiation protection measurements

    International Nuclear Information System (INIS)

    Hoegl, A.

    1996-01-01

    This study investigates how, from a legal point of view, deviations in radiation protection measurements should be treated in comparisons between measured results and limits stipulated by nuclear legislation or goods transport regulations. A case-by-case distinction is proposed which is based on the legal consequences of the respective measurement. Commentaries on nuclear law contain no references to the legal assessment of deviating measurements in radiation protection. Legal commentaries on civil and criminal proceedings describe how errors in measurements for speed control and in determinations of blood alcohol content are to be taken into account; these examples, together with a commentary on ozone legislation, are examined for analogies with radiation protection measurements. Leading cases in the nuclear field are evaluated in the light of the requirements applying in case of deviations in measurements. The final section summarizes the most important findings and conclusions. (orig.) [de

  14. Flexible Macroblock Ordering for Context-Aware Ultrasound Video Transmission over Mobile WiMAX

    Science.gov (United States)

    Martini, Maria G.; Hewage, Chaminda T. E. R.

    2010-01-01

    The most recent network technologies are enabling a variety of new applications, thanks to the provision of increased bandwidth and better management of Quality of Service. Nevertheless, telemedical services involving multimedia data are still lagging behind, due to the concern of the end users, that is, clinicians and also patients, about the low quality provided. Indeed, emerging network technologies should be appropriately exploited by designing the transmission strategy with a focus on quality provision for end users. Stemming from this principle, we propose here a context-aware transmission strategy for medical video transmission over WiMAX systems. Context, in terms of regions of interest (ROI) in a specific session, is taken into account for the identification of multiple ROIs, and compression/transmission strategies are tailored to such context information. We present a methodology based on H.264 medical video compression and Flexible Macroblock Ordering (FMO) for ROI identification. Two different unequal error protection methodologies, providing higher protection to the most diagnostically relevant data, are presented. PMID:20827292
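
    In H.264, FMO lets the encoder assign each macroblock to a slice group through an explicit map, so ROI macroblocks can be collected into one group and given stronger protection. The sketch below builds such a map for a single rectangular ROI; the frame size, ROI coordinates, and two-group layout are illustrative assumptions, not the paper's configuration.

```python
MB = 16  # macroblock size in pixels

def fmo_map(width: int, height: int, roi) -> list[int]:
    """Explicit macroblock-to-slice-group map in raster order: group 0 for
    macroblocks whose centre lies inside the ROI rectangle (x0, y0, x1, y1),
    group 1 for the background."""
    groups = []
    for my in range(height // MB):
        for mx in range(width // MB):
            cx, cy = mx * MB + MB // 2, my * MB + MB // 2
            inside = roi[0] <= cx < roi[2] and roi[1] <= cy < roi[3]
            groups.append(0 if inside else 1)
    return groups

m = fmo_map(640, 480, roi=(160, 120, 480, 360))
print(sum(g == 0 for g in m), "ROI macroblocks out of", len(m))
```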

  15. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify part of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards on patient health care and some measures and recommendations to minimize or to eliminate these errors. Recording the encountered laboratory errors during May 2008 and their statistical evaluation (using simple percent distribution) have been done in the department of laboratory of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implication on patient health. Data obtained out of 1,600 testing procedures revealed that the total number of encountered errors is 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication on patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  16. Binary gabor statistical features for palmprint template protection

    NARCIS (Netherlands)

    Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2012-01-01

    The biometric template protection system requires a high-quality biometric channel and a well-designed error correction code (ECC). Due to the intra-class variations of biometric data, an efficient fixed-length binary feature extractor is required to provide a high-quality biometric channel so that

  17. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  18. Critiques of World-Systems Analysis and Alternatives: Unequal Exchange and Three Forms of Class and Struggle in the Japan–US Silk Network, 1880–1890

    Directory of Open Access Journals (Sweden)

    Elson E. Boles

    2015-08-01

    Full Text Available Sympathetic critics of world-system analysis contend that its systemic level of abstraction results in one-sided generalizations of systemic change. Unequal exchange theory and commodity chain analysis similarly reduce distinct and historical forms of labor and their interrelationships to common functional and ahistorical essences. This paper applies an incorporated comparisons method to give historical content to an understanding of unequal exchange and global inequality through a study of the Japan–US silk network’s formation and change during the mid-1880s–1890s. Analysis of unequal exchange processes requires, in this case, an examination of the mutual integration and transformation of distinct labor and value forms—peasant sericulture, filature wage-labor, and industrial silk factory wage-labor—and the infundibular market forces they structured. These relations were decisively conditioned by new landlordism and debt-peonage, class-patriarchy, state mediations, migration, and by peasant and worker struggles against deteriorating conditions. Indeed, the transitional nature of the silk network’s formation, which concluded the Tokugawa system and decisively contributed to Japan’s emergence as a nation-state of the capitalist world-economy, was signified by the very last millenarian and quasi-modern peasant uprising in 1884 among indebted sericulturists, the very first recorded factory strikes in 1885–86, by women raw silk reelers in Kōfu, and by strikes among unionizing workers in patriarchal and mechanized silk factories in Paterson, New Jersey, 1885–86 (Boles 1996, 1998). The “local” conditions of each conflict were molded by the interdependence of those conditions that constituted a formative part of the world-system and its development. In the face of struggles and intensifying world-market competition, Japanese and US manufacturers took opposite spatial strategies of regional expansion to overcome the structural constraints of

  19. 45 CFR 61.6 - Reporting errors, omissions, revisions or whether an action is on appeal.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Reporting errors, omissions, revisions or whether an action is on appeal. 61.6 Section 61.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION HEALTHCARE INTEGRITY AND PROTECTION DATA BANK FOR FINAL ADVERSE INFORMATION ON...

  20. Analysis of covariance with pre-treatment measurements in randomized trials under the cases that covariances and post-treatment variances differ between groups.

    Science.gov (United States)

    Funatogawa, Takashi; Funatogawa, Ikuko; Shyr, Yu

    2011-05-01

    When primary endpoints of randomized trials are continuous variables, the analysis of covariance (ANCOVA) with pre-treatment measurements as a covariate is often used to compare two treatment groups. In the ANCOVA, equal slopes (coefficients of pre-treatment measurements) and equal residual variances are commonly assumed. However, random allocation guarantees only equal variances of pre-treatment measurements. Unequal covariances and variances of post-treatment measurements indicate unequal slopes and, usually, unequal residual variances. For non-normal data with unequal covariances and variances of post-treatment measurements, it is known that the ANCOVA with equal slopes and equal variances using an ordinary least-squares method provides an asymptotically normal estimator for the treatment effect. However, the asymptotic variance of the estimator differs from the variance estimated from a standard formula, and its property is unclear. Furthermore, the asymptotic properties of the ANCOVA with equal slopes and unequal variances using a generalized least-squares method are unclear. In this paper, we consider non-normal data with unequal covariances and variances of post-treatment measurements, and examine the asymptotic properties of the ANCOVA with equal slopes using the variance estimated from a standard formula. Analytically, we show that the actual type I error rate, and thus the coverage, of the ANCOVA with equal variances is asymptotically at a nominal level under equal sample sizes. That of the ANCOVA with unequal variances using a generalized least-squares method is asymptotically at a nominal level, even under unequal sample sizes. In conclusion, the ANCOVA with equal slopes can be asymptotically justified under random allocation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
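
A minimal sketch of the setting described above, on simulated data with unequal slopes and unequal post-treatment variances: a common-slope ANCOVA is fitted by OLS, and the standard-formula standard error is contrasted with a heteroscedasticity-robust (sandwich) one. The variable names and parameter values are illustrative, not from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200                                   # equal sample sizes per group
pre = rng.normal(50.0, 10.0, 2 * n)       # pre-treatment measurements
group = np.repeat([0, 1], n)              # randomized allocation
# Unequal slopes and unequal residual variances between groups (assumed values)
post = 5.0 + (0.8 + 0.3 * group) * pre + 2.0 * group \
       + rng.normal(0.0, np.where(group == 1, 12.0, 6.0))
df = pd.DataFrame({"pre": pre, "group": group, "post": post})

# Common-slope ANCOVA fitted by ordinary least squares
fit_ols = smf.ols("post ~ pre + group", data=df).fit()
fit_rob = smf.ols("post ~ pre + group", data=df).fit(cov_type="HC1")

print("treatment effect estimate:", round(fit_ols.params["group"], 3))
print("standard-formula SE:      ", round(fit_ols.bse["group"], 3))
print("robust (sandwich) SE:     ", round(fit_rob.bse["group"], 3))
```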

  1. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  2. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
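
A toy delta-rule learner capturing the two mechanisms described above, with an assumed functional form rather than the authors' fitted model: the prediction error is divided by the known standard deviation before each update, and the learning rate decays across trials. Absolute accuracy then grows much more slowly than the reward variability itself.

```python
import numpy as np

def adaptive_learner(rewards, sd, alpha0=0.8, decay=0.01):
    """Delta-rule estimate of a distribution's mean with SD-scaled errors."""
    v = float(rewards[0])                   # initial prediction
    for t, r in enumerate(rewards):
        alpha = alpha0 / (1.0 + decay * t)  # learning-rate decay over trials
        v += alpha * (r - v) / sd           # prediction error scaled by SD
    return v

rng = np.random.default_rng(1)
for sd in (5, 15, 35):
    errs = [abs(adaptive_learner(rng.normal(100.0, sd, 200), sd) - 100.0)
            for _ in range(500)]
    # final accuracy degrades far less than proportionally to sd
    print(f"sd={sd:2d}  mean |final prediction - true mean| = {np.mean(errs):.2f}")
```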

  3. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration based on a systematic survey conducted to reflect oncology nurses experience. This study was conducted in 18 adult chemotherapy units with volunteer participation of 206 nurses. A survey developed by primary investigators and medication errors (MAEs) defined preventable errors during prescription of medication, ordering, preparation or administration. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible from ME. The survey was conducted by face to face interview and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for a comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more than one error during chemotherapy preparation and administration. Prescribing or ordering wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient number of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, the most common involving prescribing and ordering errors. Further studies must address the strategies to minimize medication error in chemotherapy receiving patients, determine sufficient protective measures

  4. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
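
A small numerical sketch of the bias-variance point above, with illustrative values rather than the study's imperviousness data: MSE decomposes exactly into squared bias plus variance, mean absolute error admits no such additive split, and RMSE always dominates MAE for the same residuals.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 10.0
# Simulated predictions of one evaluation point across many model realizations:
# systematic bias of 1.5 plus random spread ("variance") of 2.0
preds = truth + 1.5 + rng.normal(0.0, 2.0, 100_000)
errors = preds - truth

mse = np.mean(errors ** 2)
bias, var = errors.mean(), errors.var()
print(f"MSE  = {mse:.3f}   bias^2 + variance = {bias**2 + var:.3f}")  # identical

mae = np.mean(np.abs(errors))
print(f"MAE  = {mae:.3f}   (no additive bias/variance decomposition)")
print(f"RMSE = {np.sqrt(mse):.3f}  -- always >= MAE for the same errors")
```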

  5. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  6. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  7. Awareness of technology-induced errors and processes for identifying and preventing such errors.

    Science.gov (United States)

    Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W

    2015-01-01

    There is a need to determine if organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants identified there was a lack of well-defined procedures in place for reporting technology-induced errors, addressing them when they arise, and preventing them.

  8. “Green” Technology and Ecologically Unequal Exchange: The Environmental and Social Consequences of Ecological Modernization in the World-System

    Directory of Open Access Journals (Sweden)

    Eric Bonds

    2015-08-01

    Full Text Available This paper contributes to understandings of ecologically unequal exchange within the world-systems perspective by offering a series of case studies of ecological modernization in the automobile industry. The case studies demonstrate that “green” technologies developed and instituted in core nations often require specific raw materials that are extracted from the periphery and semi-periphery. Extraction of such natural resources causes significant environmental degradation and often displaces entire communities from their land. Moreover, because states often use violence and repression to facilitate raw material extraction, the widespread commercialization of “green” technologies can result in serious human rights violations. These findings challenge ecological modernization theory, which rests on the assumption that the development and commercialization of more ecologically-efficient technologies is universally beneficial.

  9. The sigh of the oppressed: The palliative effects of ideology are stronger for people living in highly unequal neighbourhoods.

    Science.gov (United States)

    Sengupta, Nikhil K; Greaves, Lara M; Osborne, Danny; Sibley, Chris G

    2017-09-01

    Ideologies that legitimize status hierarchies are associated with increased well-being. However, which ideologies have 'palliative effects', why they have these effects, and whether these effects extend to low-status groups remain unresolved issues. This study aimed to address these issues by testing the effects of the ideology of Symbolic Prejudice on well-being among low- and high-status ethnic groups (4,519 Europeans and 1,091 Māori) nested within 1,437 regions in New Zealand. Results showed that Symbolic Prejudice predicted increased well-being for both groups, but that this relationship was stronger for those living in highly unequal neighbourhoods. This suggests that it is precisely those who have the strongest need to justify inequality that accrue the most psychological benefit from subscribing to legitimizing ideologies. © 2017 The British Psychological Society.

  10. Error and Congestion Resilient Video Streaming over Broadband Wireless

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2015-04-01

    Full Text Available In this paper, error resilience is achieved by adaptive, application-layer rateless channel coding, which is used to protect H.264/Advanced Video Coding (AVC) codec data-partitioned videos. A packetization strategy is an effective tool to control error rates and, in the paper, source-coded data partitioning serves to allocate smaller packets to more important compressed video data. The scheme for doing this is applied to real-time streaming across a broadband wireless link. The advantages of rateless code rate adaptivity are then demonstrated in the paper. Because the data partitions of a video slice are each assigned to different network packets, in congestion-prone wireless networks the increased number of packets per slice and their size disparity may increase the packet loss rate from buffer overflows. As a form of congestion resilience, this paper recommends packet-size dependent scheduling as a relatively simple way of alleviating the buffer-overflow problem arising from data-partitioned packets. The paper also contributes an analysis of data partitioning and packet sizes as a prelude to considering scheduling regimes. The combination of adaptive channel coding and prioritized packetization for error resilience with packet-size dependent packet scheduling results in a robust streaming scheme specialized for broadband wireless and real-time streaming applications such as video conferencing, video telephony, and telemedicine.
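
A toy sketch of packet-size dependent scheduling as motivated above, using an assumed greedy policy rather than the paper's exact algorithm: within the available buffer budget, more important data partitions go first, and smaller packets go first within a partition.

```python
from collections import namedtuple

# priority 0 = data partition A (headers/motion data), 2 = partition C
Packet = namedtuple("Packet", "priority size")

def schedule(packets, buffer_bytes):
    """Greedy drain: most important partition first, smaller packets first
    within a priority, skipping anything that would overflow the buffer."""
    sent = []
    for pkt in sorted(packets, key=lambda p: (p.priority, p.size)):
        if pkt.size <= buffer_bytes:
            buffer_bytes -= pkt.size
            sent.append(pkt)
    return sent

queue = [Packet(2, 1200), Packet(0, 300), Packet(1, 700), Packet(2, 900)]
print(schedule(queue, 1500))   # partitions A and B fit; C is deferred
```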

  11. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  12. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Unidosis carts that were not checked showed 0.9% medication errors (264) versus 0.6% (154) in carts previously revised. In carts not revised, 70.83% of the errors arose when setting up the unidosis carts; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be rechecked, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  13. Occupational inequalities in health expectancies in France in the early 2000s: Unequal chances of reaching and living retirement in good health

    Directory of Open Access Journals (Sweden)

    Emmanuelle Cambois

    2011-08-01

    Full Text Available Increasing life expectancy (LE) raises expectations for social participation at later ages. We computed health expectancies (HE) to assess the (unequal) chances of social/work participation after age 50 in the context of France in 2003. We considered five HEs, covering various health situations which can jeopardize participation, and focused on both older ages and the pre-retirement period. HEs reveal large inequalities for both sexes in the chances of remaining healthy after retirement, and also of reaching retirement age in good health and without disability, especially in low-qualified occupations. These results challenge the policy expectation of an overall increase in social participation at later ages.

  14. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated.

  15. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  16. The Influence of Mounting Errors in Rod-Toothed Transmissions

    Directory of Open Access Journals (Sweden)

    M. Yu. Sachkov

    2015-01-01

    Full Text Available In the paper we consider an approximate transmission. The work is aimed at the development of a gear-powered transmission on parallel axes, which is RF patent-protected. The paper justifies the relevance of synthesizing new kinds of engagement with simplified geometry of the contacting condition. A typical solution for powered mechanisms obtained by F. L. Livinin and his disciples is characterized. The paper describes the arrangement of the coordinate systems used to obtain the position function of the gear-powered transmission consisting of two wheels with fifteen leads. For them, the coordinates of the contact points are also obtained, and the errors of the position function at tooth changeover are calculated. To obtain the position function, a method of matrix transformations and the equality of the radius and unit normal vectors at the contact point were used. This transmission can be used in mechanical and instrumentation engineering, and in other sectors of the economy. Both reducers and multipliers can be made on its basis. It has high manufacturability (no special equipment is required for its production), and its displacement function is close to linear. This article describes the influence of the axle spacing error on the quality of the transmission characteristics. The paper presents graphical relationships and tabular estimates for the nominal axle spacing and for offsets within 0.2 mm. Such an axle spacing error is significant for gearing. From the results of this work we can say that the transmission is almost insensitive to errors of axle spacing: engagement occurs without the contact point exiting onto the lead edge. To solve the obtained system of equations, the numerical methods of the MathCAD software package have been applied. In the future, the authors expect to consider other possible manufacturing and mounting errors of the gear-powered transmission (such as step error, misalignment, etc.) to assess their impact on the quality

  17. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
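
A tiny sketch of the reported design rule; the amplitude convention (20·log10 for amplitude ratios) and the example values are assumptions, not the paper's numbers.

```python
import math

def signal_to_ac_bias_db(signal_amplitude: float, ac_bias_amplitude: float) -> float:
    """Signal-to-AC-bias ratio expressed in decibels (amplitude convention)."""
    return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

ratio = signal_to_ac_bias_db(1.0, 5e-5)          # hypothetical amplitudes
print(f"{ratio:.1f} dB -> meets the 80 dB criterion: {ratio >= 80.0}")
```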

  18. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary…

  19. Professional installation of overvoltage protection devices. The most common installation errors; Fachgerechte Installation von Ueberspannungsschutzgeraeten. Die haeufigsten Installationsfehler

    Energy Technology Data Exchange (ETDEWEB)

    Gmelch, L. [Dehn und Soehne GmbH und Co. KG, Neumarkt (Germany)

    2007-07-01

    Increasingly sensitive electronic equipment and high demands on plant and system availability necessitate effective protection against lightning and overvoltage. Apart from the measures that must be taken already in the planning phase in order to minimize the risk of interferences, disturbances and destruction of plants and systems, there are also some basic principles that must be observed when installing overvoltage protection equipment. If these are neglected, the protective function of overvoltage protection systems may be seriously impaired. (orig.)

  20. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protections decisions are supported by defensible analysis.

  1. Brewing Unequal Exchanges in Coffee: A Qualitative Investigation into the Consequences of the Java Trade in Rural Uganda

    Directory of Open Access Journals (Sweden)

    Kelly F. Austin

    2017-08-01

    Full Text Available This study represents a qualitative case study examining the broad impacts of coffee cultivation from a rural region in Eastern Uganda, the Bududa District. Over 20 interviews with coffee cultivators provide insights into how the coffee economy impacts gender relations, physical health, deforestation, and economic conditions. While there are some material benefits from cultivating and selling coffee beans, a lack of long-term economic stability for households and the consequences for the status of women, the health of the community, and the local environment calls into question the efficacy of coffee production as a viable development scheme that significantly enhances overall community well-being. This research hopes to bring attention to the mechanisms that enable broader unequal exchange relationships by focusing on the perspectives and experiences of growers in Bududa, Uganda, where a considerable amount of world coffee is grown and supplied to consumers in core nations.

  2. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  3. Error-information in tutorial documentation: Supporting users' errors to facilitate initial skill learning

    NARCIS (Netherlands)

    Lazonder, Adrianus W.; van der Meij, Hans

    1995-01-01

    Novice users make many errors when they first try to learn how to work with a computer program like a spreadsheet or wordprocessor. No matter how user-friendly the software or the training manual, errors can and will occur. The current view on errors is that they can be helpful or disruptive,

  4. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the relevant subsequent damages, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. The direct observation technique was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 provided complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. One-third of administrations of medications, on average, were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages with regard to medication errors. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  5. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    Science.gov (United States)

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.

  6. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  7. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  8. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  9. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions causing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  10. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)

  11. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  12. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  13. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    Science.gov (United States)

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  14. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side, A′) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is determined by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. For the compensation process, the following steps have been employed: detection of the present tool path and analysis of the geometric errors of the RTTTR five-axis CNC machine tool, translation of current component positions to compensated positions using the kinematics error model, conversion of the newly created components to new tool paths using the compensation algorithms, and finally editing of the old G-codes using a G-code generator algorithm.
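
A minimal sketch of the homogeneous-transformation idea used here, with an invented two-link chain and assumed error values rather than the paper's full 43-component model: small geometric errors are folded into 4×4 matrices and composed along the kinematic chain to obtain the TCP deviation.

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_small(ax, ay, az):
    """First-order (small-angle) rotation error matrix, angles in radians."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0.0, -az,  ay],
                           [ az, 0.0, -ax],
                           [-ay,  ax, 0.0]])
    return T

# Nominal two-link chain: base -> head pivot -> tool tip (lengths in mm)
nominal = trans(0, 0, 400) @ trans(0, 0, 150)
# Actual chain with an assumed angular error at the pivot plus linear errors
actual = trans(0, 0, 400) @ rot_small(20e-6, 5e-6, 0) @ trans(0.01, 0, 150.02)

tcp = np.array([0.0, 0.0, 0.0, 1.0])          # tool center point, homogeneous
deviation = actual @ tcp - nominal @ tcp
print("TCP deviation (mm):", deviation[:3])
```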

  15. Errorful and errorless learning: The impact of cue-target constraint in learning from errors.

    Science.gov (United States)

    Bridger, Emma K; Mecklinger, Axel

    2014-08-01

    The benefits of testing on learning are well described, and attention has recently turned to what happens when errors are elicited during learning: Is testing nonetheless beneficial, or can errors hinder learning? Whilst recent findings have indicated that tests boost learning even if errors are made on every trial, other reports, emphasizing the benefits of errorless learning, have indicated that errors lead to poorer later memory performance. The possibility that this discrepancy is a function of the materials that must be learned-in particular, the relationship between the cues and targets-was addressed here. Cued recall after either a study-only errorless condition or an errorful learning condition was contrasted across cue-target associations, for which the extent to which the target was constrained by the cue was either high or low. Experiment 1 showed that whereas errorful learning led to greater recall for low-constraint stimuli, it led to a significant decrease in recall for high-constraint stimuli. This interaction is thought to reflect the extent to which retrieval is constrained by the cue-target association, as well as by the presence of preexisting semantic associations. The advantage of errorful retrieval for low-constraint stimuli was replicated in Experiment 2, and the interaction with stimulus type was replicated in Experiment 3, even when guesses were randomly designated as being either correct or incorrect. This pattern provides support for inferences derived from reports in which participants made errors on all learning trials, whilst highlighting the impact of material characteristics on the benefits and disadvantages that accrue from errorful learning in episodic memory.

  16. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
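
A brief sketch of the syndrome computation such direct decoders start from, for byte-oriented RS codes over GF(2^8); the field tables are standard, but the toy block and check count are illustrative rather than the paper's exact DBEC-TBED construction.

```python
# GF(2^8) exp/log tables over the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # wrap-around so products need no modulo

def gf_mul(a: int, b: int) -> int:
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def syndromes(received: list[int], num_checks: int) -> list[int]:
    """S_j = r(alpha^j) for j = 1..num_checks, evaluated by Horner's rule."""
    out = []
    for j in range(1, num_checks + 1):
        s = 0
        for byte in received:
            s = gf_mul(s, EXP[j]) ^ byte
        out.append(s)
    return out

# For a valid RS codeword all syndromes are zero; this toy block is not one,
# so the nonzero values are exactly what a direct decoder would work from.
print(syndromes([0x12, 0x34, 0x56, 0x00, 0x00], 5))
```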

  17. A Degree Distribution Optimization Algorithm for Image Transmission

    Science.gov (United States)

    Jiang, Wei; Yang, Junjie

    2016-09-01

    Luby Transform (LT) code is the first practical implementation of digital fountain code. The coding behavior of LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions are suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT code. First, a selection scheme of sparse degrees for LT codes is introduced. Then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause the loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission situation. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising. Compared with LT code with robust soliton distribution, the proposed algorithm noticeably improves the final quality of recovered images at the same overhead.
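
For reference, a sketch of the robust soliton distribution the proposed optimization is measured against (standard Luby construction; the k, c, and delta values are illustrative).

```python
import math

def robust_soliton(k: int, c: float = 0.1, delta: float = 0.5) -> list[float]:
    """Return mu[d] = P(degree = d) for d = 0..k (mu[0] is always 0)."""
    S = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Robust correction tau, spiked at d = k/S
    tau = [0.0] * (k + 1)
    pivot = int(round(k / S))
    for d in range(1, min(pivot, k + 1)):
        tau[d] = S / (d * k)
    if 1 <= pivot <= k:
        tau[pivot] = S * math.log(S / delta) / k
    Z = sum(rho) + sum(tau)                    # normalization constant
    return [(rho[d] + tau[d]) / Z for d in range(k + 1)]

mu = robust_soliton(1000)
print("P(degree = 1) =", round(mu[1], 4), "  total =", round(sum(mu), 6))
```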

  18. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  19. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. What counts, however, is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; it is therefore important to accept them, learn from them, discover why they are made, improve, and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). The aim of this paper is thus to analyze errors in the process of second language acquisition and the ways in which we teachers can use mistakes to help students improve while giving them proper feedback.

  20. Australians are not equally protected from industrial air pollution

    International Nuclear Information System (INIS)

    Dobbie, B; Green, D

    2015-01-01

    Australian air pollution standards are set at national and state levels for a number of chemicals harmful to human health. However, these standards do not need to be met when ad hoc pollution licences are issued by state environment agencies. This situation results in a highly unequal distribution of air pollution between towns and cities, and across the country. This paper examines these pollution regulations through two case studies, specifically considering the ability of the regulatory regime to protect human health from lead and sulphur dioxide pollution in communities located around smelters. It also considers how the proposed National Clean Air Agreement, once enacted, might serve to reduce this pollution equity problem. Through the case studies we show that there are at least three discrete concerns with the current licensing system: non-onerous emission thresholds for polluting industry; temporal averaging thresholds that mask emission spikes; and ineffective penalties for breaching licence agreements. In conclusion, we propose that a set of new, legally binding national minimum standards for industrial air pollutants be developed and enforced, modifiable only by more (not less) stringent state licence arrangements. (letter)

  1. An improved energy aware distributed unequal clustering protocol for heterogeneous wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Vrinda Gupta

    2016-06-01

    Full Text Available In this paper, an improved version of the energy aware distributed unequal clustering protocol (EADUC) is proposed. The EADUC protocol is commonly used for solving the energy hole problem in multi-hop wireless sensor networks. In the EADUC, the location of the base station and residual energy are given importance as clustering parameters, and different competition radii are assigned to nodes based on these parameters. Herein, a new approach is proposed to improve on the EADUC by electing cluster heads with consideration of the number of nodes in the neighborhood in addition to the above two parameters. Including the neighborhood information in the computation of the competition radii balances energy better than the existing approach. Furthermore, for the selection of the next-hop node, the relay metric is defined directly in terms of energy expense instead of only the distance information used in the EADUC, and the data transmission phase is extended in every round by performing data collection several times through the use of major slots and mini-slots. Retaining the same clusters for a few rounds is effective in reducing the clustering overhead. The performance of the proposed protocol has been evaluated under three different scenarios and compared with existing protocols through simulations. The results show that the proposed scheme outperforms the existing protocols in terms of network lifetime in all scenarios.
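
    A minimal sketch of an unequal-clustering competition radius of the kind described is given below: nodes close to the base station, with little residual energy, or with many neighbors receive smaller radii. The weights and the exact functional form are illustrative assumptions, not the EADUC formulas themselves.

```python
def competition_radius(d_bs, d_min, d_max, e_res, e_max, n_nbr, n_max,
                       r_max=90.0, alpha=1/3, beta=1/3, gamma=1/3):
    """Illustrative unequal-clustering radius (weights alpha/beta/gamma are
    assumed values, not from the paper):
      - nodes near the base station get smaller radii (smaller clusters
        there leave energy for relaying traffic),
      - low residual energy shrinks the radius,
      - a dense neighborhood shrinks the radius (the neighborhood term is
        the improvement the abstract describes)."""
    term_d = (d_max - d_bs) / (d_max - d_min)   # closeness to BS
    term_e = 1.0 - e_res / e_max                # depleted energy fraction
    term_n = n_nbr / n_max                      # neighborhood density
    return r_max * (1.0 - alpha * term_d - beta * term_e - gamma * term_n)

# Example: a far-away, fully charged node with few neighbors gets a large radius
print(competition_radius(d_bs=180, d_min=20, d_max=200, e_res=1.0, e_max=1.0,
                         n_nbr=2, n_max=20))
```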

  2. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
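
    The interaction the abstract describes can be demonstrated with a small simulation: when the display grid is coarse relative to the weighing noise, the rounding error stops behaving like independent uniform noise and becomes correlated with the weighing error. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mass = 100.0
sigma_w = 0.02                        # weighing (scale) standard deviation

for grid in [0.001, 0.01, 0.05]:      # display resolution of the scale
    reading = true_mass + rng.normal(0.0, sigma_w, 100_000)
    displayed = np.round(reading / grid) * grid
    rounding = displayed - reading                      # rounding error
    weighing = reading - true_mass                      # weighing error
    corr = np.corrcoef(rounding, weighing)[0, 1]
    print(f"grid={grid}: sd(rounding)={np.std(rounding):.5f}, corr={corr:.3f}")

# With a fine grid the correlation is ~0 and the rounding error is ~uniform;
# with a coarse grid (few cells) it becomes strongly correlated with the
# weighing error, which is when the moment estimation method is needed.
```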

  4. The role of radiologic technologist in radiation protection and quality assurance programs

    International Nuclear Information System (INIS)

    Djurovic, B.; Spasci-Jokic, V.; Misovic, M.

    2001-01-01

    The most important sources of ionizing radiation exposure for the general public are medical sources. Good working protocols and radiological protection measures have provided a significant reduction in patient and professional doses. Radiological technologists are among the principal medical users of ionizing radiation. The purpose of this paper is to point out several facts and errors in radiation protection educational programs for radiological technologists. The Medical College educational program covers the main specific topics in radiation protection, but there are some omissions in the training process. Radiological technologists must be actively involved in radiation protection; by following ethical standards they will reach higher standards than the law requires

  5. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems; the expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper

  6. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising downloading speed. In this article, an improved video transmission system is presented that dynamically adjusts video quality based on a user's current network state and repairs errors from data lost in transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an Unequal Forward Error Correction (FEC) algorithm. SVC provides an efficient method for offering different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, a two-dimensional FEC algorithm, taken from the Pro MPEG code of practice #3 release 2, was added to compensate for data lost in transmission. Several bit error scenarios (step function, cosine wave) were simulated with different bandwidth sizes and error values. The suggested scheme, which combines SVC video encoding with three layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths, and different bit error rates. The results indicate an improvement in video quality, in terms of PSNR, over previous transmission schemes.
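
    The layer-adaptation loop can be sketched with a textbook discrete PID controller driving the number of SVC layers toward the measured bandwidth. The gains, the per-layer rates, and the switching thresholds below are illustrative assumptions, not values from the paper.

```python
class PID:
    """Textbook discrete PID controller; gains are illustrative assumptions."""
    def __init__(self, kp=0.6, ki=0.1, kd=0.05, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical cumulative stream rates (Mbit/s) for 1, 2, or 3 SVC layers:
LAYER_RATE = {1: 1.0, 2: 2.5, 3: 5.0}

pid = PID()
layers = 3
for bw in [5.0, 4.2, 2.5, 1.1, 3.0]:      # measured available-bandwidth samples
    u = pid.update(setpoint=bw, measured=LAYER_RATE[layers])
    if u < -0.5 and layers > 1:            # stream rate exceeds bandwidth: drop
        layers -= 1
    elif u > 1.0 and layers < 3:           # enough headroom: add a layer
        layers += 1
    print(f"bandwidth {bw} Mbit/s -> {layers} layers")
```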

  7. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and the associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  8. Hydraulic behaviour of a partially uncovered core

    International Nuclear Information System (INIS)

    Fischer, K.; Hafner, W.

    1989-10-01

    A critical review of experimental data and theoretical models relevant to the thermohydraulic processes in a partially uncovered core has been performed. Presently available optimized thermohydraulic codes should be able to predict swell level elevations within an error band of ± 0.5 m. Rod temperature rise rates could be predicted within an error bandwidth of ± 10%, provided the correct rod heat capacity is given. A general statement about the accuracy of predicted rod temperatures is not possible because the errors increase with simulation time; the largest errors are expected for long transients with low heating rates and low steam velocities. As a result, three areas for additional research are suggested: a high-pressure test at 120 bar to complete the void correlation data base; a low steam flow, low power experiment to improve heat transfer correlations; and a numerical investigation of three-dimensional effects in the reactor core with unequally heated rod bundles. For the present state of one-dimensional experiments and models, suggestions for satisfactory modelling have been derived. The suggested further work could considerably improve the modelling capabilities and the code reliability for some limiting cases such as high-pressure boil-off, low-power long-term steam cooling, and unequal heating of neighbouring bundles

  9. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  10. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework for learning-based super-resolution is proposed that employs a process of learning from estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain: sparsity means that most estimation errors are small, while uncertainty means that the locations of pixels with large estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share information among different super-resolution estimates and to remove the sparse estimation errors arising from different learning algorithms or training samples. The experimental results show the effectiveness and efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
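
    A generic sketch of the low-rank-plus-sparse idea is shown below: the flattened SR estimates are stacked as columns, and alternating SVD truncation with soft-thresholding separates a shared low-rank consensus from sparse per-estimate errors. This is a robust-PCA-style stand-in, not the authors' exact solver; the rank, threshold, and iteration count are assumptions.

```python
import numpy as np

def fuse_sr_estimates(estimates, rank=1, lam=0.1, n_iter=30):
    """Split a stack of SR estimates into a shared low-rank part L and a
    sparse error part S by alternating SVD truncation and soft-thresholding.
    Generic robust-PCA-style sketch; parameters are illustrative."""
    X = np.stack([e.ravel() for e in estimates], axis=1)   # pixels x estimators
    S = np.zeros_like(X)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]          # low-rank consensus
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse errors
    fused = L.mean(axis=1).reshape(estimates[0].shape)
    return fused, S

# Usage with stand-ins for the outputs of several SR algorithms:
ests = [np.random.rand(32, 32) for _ in range(5)]
fused, sparse_err = fuse_sr_estimates(ests)
```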

  11. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' that systematizes activities for human error prevention and fosters continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an overall picture of the error management process that facilitates the four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (e.g., developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and example items for identifying gaps between current and desired levels of execution and output of each activity; and stages for introducing and establishing the proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  12. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
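
    The error-detecting code of IEEE 802.3 is applied as the 32-bit frame check sequence (CRC-32). A minimal bitwise sketch of that CRC (reflected polynomial 0xEDB88320, initial value and final XOR 0xFFFFFFFF) follows; the check against zlib uses the well-known "123456789" test vector.

```python
import zlib

def crc32_8023(data: bytes) -> int:
    """Bitwise CRC-32 as used for the IEEE 802.3 frame check sequence:
    reflected polynomial 0xEDB88320, init and final XOR both 0xFFFFFFFF."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right; XOR the polynomial whenever the low bit was set
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

assert crc32_8023(b"123456789") == zlib.crc32(b"123456789")  # 0xCBF43926
```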

  13. Estimation of reliability on digital plant protection system in nuclear power plants using fault simulation with self-checking

    International Nuclear Information System (INIS)

    Lee, Jun Seok; Kim, Suk Joon; Seong, Poong Hyun

    2004-01-01

    Safety-critical digital systems in nuclear power plants require high design reliability; reliable software design and accurate methods for predicting system reliability are therefore important problems. In reliability analysis, the error detection coverage of the system is one of the crucial factors, but it is difficult to evaluate for the digital instrumentation and control systems of nuclear power plants because of their complexity. To evaluate the error detection coverage efficiently and at low cost, simulation-based fault injection with self-checking is needed. The target system is the local coincidence logic in a digital plant protection system, and a simplified software model of this target system is used in this work. A C++-based hardware description of a microcomputer simulator system is used to evaluate the error detection coverage. The simulation results show that it is possible to estimate the error detection coverage of a digital plant protection system in a nuclear power plant using simulation-based fault injection with self-checking. (author)
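
    The idea of estimating error detection coverage by injecting faults into a simplified coincidence-logic model can be sketched as follows. The 2-out-of-4 logic, the duplicate-and-compare self-check, and the fault model (single input bit flips) are illustrative assumptions, not the paper's actual simulator.

```python
import random

def coincidence_2oo4(channels):
    """Local coincidence logic: trip when at least 2 of 4 channels trip."""
    return sum(channels) >= 2

def run_with_self_check(channels, flip_bit):
    """Duplicate-and-compare self-checking: the logic is evaluated twice and a
    mismatch flags a detected error; flip_bit injects a fault into one copy."""
    good = coincidence_2oo4(channels)
    faulty = list(channels)
    faulty[flip_bit] = not faulty[flip_bit]
    bad = coincidence_2oo4(faulty)
    return good, good != bad          # (output, error detected?)

random.seed(0)
detected = trials = 10_000, 0
detected, trials = 0, 10_000
for _ in range(trials):
    ch = [random.random() < 0.5 for _ in range(4)]
    _, flag = run_with_self_check(ch, flip_bit=random.randrange(4))
    detected += flag
print("estimated error detection coverage:", detected / trials)
```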

  14. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  15. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  16. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  17. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  18. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across the sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), for evaluating program performance relative to quality control (QC) criteria and for evaluating the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory performance from method performance. Isolation of error sources is necessary to identify effective options for establishing performance expectations and improving procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit). This information is necessary to evaluate data quality, to understand overall quality, and to provide the appropriate, cost-effective information required to support a specific decision

  20. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Full Text Available Background: Although health services are designed and implemented to improve human health, errors in health services are a very common phenomenon and are sometimes even fatal. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives: The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods: This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation and interviews of nine nurses in this section over a period of four months. Human error classification was based on the Rose and Rose and the Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after determining the type of each error. Results: In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), and performing tasks later than the deadline was second (136). Management causes, with a frequency of 451, ranked first among the identified error causes. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions: Finally, in order to prevent the occurrence and reduce the consequences of the identified errors, the following suggestions were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring professional workforce, and equipping the work space with appropriate facilities and equipment.

  1. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. The impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on the error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates, both on a per-measurement-unit basis and on a per-interquartile-range (IQR) basis, in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per-unit-of-measurement basis ranging from 18% to 92% and on an IQR basis from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
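
    The contrast between the two error types can be reproduced in a toy simulation: under classical error the regressor is a noisy copy of the truth and the fitted Poisson slope attenuates, while under Berkson error the truth is a noisy copy of the regressor and the per-unit slope stays nearly unbiased in this simplified additive-on-the-log-scale setup. Parameter values are illustrative; the paper's multiplicative setting produces the more nuanced biases it reports.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta, sigma = 2000, 0.4, 0.3

# Classical error: observed = true + noise, noise independent of the truth
z_true = rng.normal(0.0, 0.5, n)
y_c = rng.poisson(np.exp(0.5 + beta * z_true))
w_classical = z_true + rng.normal(0.0, sigma, n)

# Berkson error: true = observed + noise, noise independent of the observation
w_berkson = rng.normal(0.0, 0.5, n)
z_b = w_berkson + rng.normal(0.0, sigma, n)
y_b = rng.poisson(np.exp(0.5 + beta * z_b))

for name, w, y in [("classical", w_classical, y_c), ("Berkson", w_berkson, y_b)]:
    fit = sm.GLM(y, sm.add_constant(w), family=sm.families.Poisson()).fit()
    print(name, "estimated slope per unit:", round(float(fit.params[1]), 3))
# classical-type error attenuates the slope toward 0; Berkson-type error leaves
# the per-unit slope ~unbiased here (it inflates the intercept instead)
```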

  2. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005 (general requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors originating from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As scientific principles and methods may not undergo professional review by specialists in a given field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  3. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    … clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. … participation in ward rounds and adverse drug …

  4. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  5. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    Science.gov (United States)

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  6. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction is less than the number used for calibration. This is a new way of using measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error; however, the error seems to influence the predictions of the two reference measures in the same way.

  7. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  8. Error threshold ghosts in a simple hypercycle with error prone self-replication

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2008-01-01

    A delayed transition due to mutation processes is shown to occur in a simple hypercycle composed of two indistinguishable molecular species with error-prone self-replication. The appearance of a ghost near the hypercycle error threshold causes a delay in the extinction, and thus in the loss of information, of the mutually catalytic replicators, acting as a kind of information memory. The extinction time, τ, scales near the bifurcation threshold according to the universal square-root scaling law, i.e. τ ∼ (Q_hc − Q)^(−1/2), typical of dynamical systems close to a saddle-node bifurcation. Here Q_hc represents the bifurcation point, named the hypercycle error threshold, involved in the change between the asymptotic stability phase and the so-called Random Replication State (RRS) of the hypercycle, and the parameter Q is the replication quality factor. The ghost involves a longer transient towards extinction once the saddle-node bifurcation has occurred, and this transient becomes extremely long near the bifurcation threshold. The role of this dynamical effect is expected to be relevant in fluctuating environments. Such a phenomenon should also be found in larger hypercycles when the hypercycle species compete with their error tail. The implications of the ghost for the survival and evolution of error-prone self-replicating molecules with hypercyclic organization are discussed
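
    The square-root scaling of the ghost-induced delay can be illustrated with the generic saddle-node normal form rather than the paper's hypercycle model; here eps plays the role of Q_hc − Q, and the passage time through the bottleneck grows like π/√eps.

```python
def passage_time(eps, x0=-2.0, x_end=2.0, dt=1e-3):
    """Integrate dx/dt = eps + x**2, the saddle-node normal form. For eps > 0
    the fixed points have annihilated, but a 'ghost' bottleneck near x = 0
    delays the escape of the trajectory (simple Euler stepping)."""
    x, t = x0, 0.0
    while x < x_end:
        x += (eps + x * x) * dt
        t += dt
    return t

for eps in [1e-2, 1e-3, 1e-4]:
    # passage times grow roughly like pi / sqrt(eps), the square-root law
    print(eps, round(passage_time(eps), 1))
```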

  9. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  10. On-Error Training (Book Excerpt).

    Science.gov (United States)

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  11. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  12. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  13. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision making process. The study focused on the research question: what human errors can potentially cause decision failure during the evaluation of alternatives in the process of decision making? Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the evaluation-of-alternatives step. The results o...

  14. Lightning protection system analysis at Multipurpose Reactor G A. Siwabessy building

    International Nuclear Information System (INIS)

    Teguh-Sulistyo

    2003-01-01

    An analysis of part of the lightning protection system at the Multi Purpose Reactor GA Siwabessy (RSG-GAS) has been performed. Observations examined the damage to some parts of the earthing system caused by human error related to the chemical system. The analysis used several assumptions and simulations of the points of lightning strokes. It was found that the reactor building does not have vertical finials that can effectively protect the whole reactor building and auxiliary building. Installing new finials at several places is needed so that the reactor building and auxiliary building are well protected from lightning strokes

  15. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  16. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  17. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  18. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  19. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  20. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  1. EPA Contribution to Manuscript "Evaluation and Error Apportionment of an Ensemble of Atmospheric Chemistry Transport Modelling Systems: Multi-variable Temporal and Spatial Breakdown"

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data contributed by EPA/ORD/NERL/CED researchers to the manuscript "Evaluation and Error Apportionment of an Ensemble of Atmospheric...

  2. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of second language acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2, or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in the written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method and was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequent than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce language. In addition, it is advantageous for teachers to know which errors students frequently make in their learning, so that they can offer solutions for better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205

  3. Can conditional cash transfer programs generate equality of opportunity in highly unequal societies? Evidence from Brazil

    Directory of Open Access Journals (Sweden)

    Simone Bohn

    2014-09-01

    Full Text Available This article examines whether the state, through conditional cash transfer (CCT) programs, can reduce poverty and extreme poverty in societies marred by high levels of income concentration. We focus on one of the most unequal countries in the globe, Brazil, and analyze the extent to which this country's CCT program - Bolsa Família (BF, Family Grant program) - is able to improve the life chances of extremely poor beneficiaries through the three major goals of the PBF: first, to immediately end hunger; second, to create basic social rights related to healthcare and education; and finally, considering also complementary policies, to integrate adults into the job market. The analysis relies on a quantitative survey of 4,000 beneficiaries and a qualitative survey comprising in-depth interviews with 38 of the program's participants from all regions of the country in 2008, meaning that this study covers the first five years of the PBF. To answer the research questions, we ran four probit analyses related to: (a) the determinants of the realization of prenatal care; (b) the determinants of food security among BF beneficiaries; (c) the determinants of adult BF recipients returning to school; and (d) the determinants of a BF beneficiary obtaining a job. Important results of the study are the following. First, those who were at the margins before their participation in the PBF have now been able to access healthcare services on a more regular basis. Thus, the women at the margins who were systematically excluded - black women, poorly educated and from the North - now, after their participation in the CCT program, have more access to prenatal care and can count on greater availability of the public healthcare network. Second, before entering the Bolsa Família program, 50.3% of the participants faced severe food insecurity. This number went down to 36.8% in just five years. Men are more likely than women; non-blacks more likely than blacks; and South and Centre

  4. Multiple implementation of a reactor protection code in PHI2, PASCAL, and IFTRAN on the SIEMENS-330 computer

    International Nuclear Information System (INIS)

    Gmeiner, L.; Lemperle, W.; Voges, U.

    1978-01-01

    In safety-related computer applications, as in the case of the reactor protection system considered here, multi-computer systems are usually necessary for reasons of reliability and availability. The hardware structure of the protection system and the software requirements derived from it are explained. In order to study the effects of diversified programming of the three computers, the protection codes were implemented in the languages IFTRAN, PASCAL, and PHI2. According to the experience gained, diversified programming seems to be a proper means to prevent identical programming errors in all three computers on the one hand, and to detect ambiguities in the specification on the other. Throughout the experiment the errors that occurred were recorded in detail and are currently being evaluated. (orig./WB) [de

  5. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
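
    As a concrete illustration of the idea, a minimal sketch of subset-based error approximation inside a constrained conjugate gradient loop follows. The least-squares objective, the nonnegativity constraint, the Fletcher-Reeves update and all names are illustrative assumptions, not the patented method.

    ```python
    import numpy as np

    def approx_error_cg(A, b, x0, n_subset=100, n_iters=50, seed=0):
        """Minimize ||Ax - b||^2 subject to x >= 0, evaluating the
        error (residual) on a random subset of rays (rows of A) only."""
        rng = np.random.default_rng(seed)
        x = x0.astype(float).copy()
        d = g_prev = None
        for _ in range(n_iters):
            rows = rng.choice(A.shape[0], size=min(n_subset, A.shape[0]),
                              replace=False)
            As, bs = A[rows], b[rows]
            r = As @ x - bs                 # approximate error (residual)
            g = As.T @ r                    # approximate gradient
            if d is None:
                d = -g
            else:
                beta = (g @ g) / (g_prev @ g_prev)   # Fletcher-Reeves
                d = -g + beta * d
            Ad = As @ d
            denom = Ad @ Ad
            if denom == 0.0:
                break
            alpha = -(r @ Ad) / denom       # exact line search along d
            x = np.maximum(x + alpha * d, 0.0)   # crude projection step
            g_prev = g
        return x

    # Example: recover a nonnegative x from subsampled ray measurements.
    rng = np.random.default_rng(1)
    A = rng.random((500, 50))
    x_true = np.abs(rng.standard_normal(50))
    print(np.linalg.norm(approx_error_cg(A, A @ x_true, np.zeros(50)) - x_true))
    ```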

  6. Automated testing of reactor protection instrumentation made easy

    International Nuclear Information System (INIS)

    Iborra, A.; De Marcos, F.; Pastor, J.A.; Alvarez, B.; Jimenez, A.; Mesa, E.; Alsonso, L.; Regidor, J.J.

    1997-01-01

    Maintenance and testing of reactor protection systems are an important cause of unplanned reactor trips. Automated testing is the answer because it minimises test times and reduces human error. The GAMA I system, developed and implemented at Vandellos II in Spain, has the added advantage that it uses visual programming, which means that changing the software does not require specialist programming skills. (author)

  7. Application of derivative spectrophotometry under orthogonal polynomial at unequal intervals: determination of metronidazole and nystatin in their pharmaceutical mixture.

    Science.gov (United States)

    Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I

    2015-05-15

    This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. A new approach was developed by convoluting the first-derivative (D1) curve, instead of the absorbance curve, with the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After derivative treatment of the absorption data, many maxima and minima points appear, giving a characteristic shape for each drug and allowing the selection of a different number of points for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.
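
    For readers who want to experiment with the idea, a minimal numerical sketch is given below: it builds a basis of polynomials orthonormal over an unequally spaced wavelength grid via QR factorization and projects a first-derivative (D1) curve onto it. The grid, the synthetic absorption band and the polynomial degree are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def opui_coefficients(wavelengths, signal, degree=6):
        """Project a signal onto polynomials orthonormal over an
        unequally spaced wavelength grid (an OPUI-style convolution)."""
        # Scale the grid for numerical stability, then QR-factorize the
        # Vandermonde matrix to get a discrete orthonormal polynomial basis.
        t = (wavelengths - wavelengths.mean()) / wavelengths.std()
        V = np.vander(t, N=degree + 1, increasing=True)
        Q, _ = np.linalg.qr(V)
        return Q.T @ signal              # coefficients p_j = <P_j, signal>

    # First-derivative (D1) pretreatment on an unequal grid, then OPUI.
    wl = np.array([250., 252., 255., 259., 264., 270., 277., 285.])
    absorbance = np.exp(-0.5 * ((wl - 265.) / 8.) ** 2)   # synthetic band
    d1 = np.gradient(absorbance, wl)                      # D1 curve
    print(opui_coefficients(wl, d1))
    ```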

  8. Responses to Error: Sentence-Level Error and the Teacher of Basic Writing

    Science.gov (United States)

    Foltz-Gray, Dan

    2012-01-01

    In this article, the author talks about sentence-level error, error in grammar, mechanics, punctuation, usage, and the teacher of basic writing. He states that communities are crawling with teachers and administrators and parents and state legislators and school board members who are engaged in sometimes rancorous debate over what to do about…

  9. Dynamically protected cat-qubits: a new paradigm for universal quantum computation

    International Nuclear Information System (INIS)

    Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V; Touzard, Steven; Schoelkopf, Robert J; Jiang, Liang; Devoret, Michel H

    2014-01-01

    We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner

  11. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
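
    A minimal sketch of the scaling step is shown below, using Johnstone's centering and scaling constants for the largest eigenvalue of a white Wishart matrix. The approximate critical value, the dimensions, and the use of a plain sample covariance (rather than a REML genetic covariance estimate) are illustrative assumptions.

    ```python
    import numpy as np

    def tw_statistic(lam_max, n, p):
        """Center and scale the leading sample-covariance eigenvalue
        (Johnstone's normalization) for comparison with Tracy-Widom TW1."""
        l1 = (n - 1) * lam_max           # back to the X'X eigenvalue scale
        mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
        sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (
            1.0 / np.sqrt(n - 1) + 1.0 / np.sqrt(p)) ** (1.0 / 3.0)
        return (l1 - mu) / sigma

    TW1_95 = 0.98        # approximate TW1 95% critical value (assumed)
    n, p = 500, 10       # individuals, traits (illustrative)
    X = np.random.default_rng(1).standard_normal((n, p))
    lam_max = np.linalg.eigvalsh(np.cov(X, rowvar=False)).max()
    # If the statistic exceeds the critical value, the eigenvalue is
    # larger than sampling error alone would produce.
    print(tw_statistic(lam_max, n, p) > TW1_95)
    ```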

  12. Thermocouple Errors when Mounted on Cylindrical Surfaces in Abnormal Thermal Environments.

    Energy Technology Data Exchange (ETDEWEB)

    Nakos, James T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Suo-Anttila, Jill M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zepper, Ethan T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Koenig, Jerry J [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Valdez, Vincent A. [ECI Inc., Albuquerque, NM (United States)

    2017-05-01

    Mineral-insulated, metal-sheathed, Type-K thermocouples are used to measure the temperature of various items in high-temperature environments, often exceeding 1000 °C (1273 K). The thermocouple wires (chromel and alumel) are protected from the harsh environments by an Inconel sheath and magnesium oxide (MgO) insulation. The sheath and insulation are required for reliable measurements. Due to the sheath and MgO insulation, the temperature registered by the thermocouple is not the temperature of the surface of interest. In some cases, the error incurred is large enough to be of concern because these data are used for model validation, and thus the uncertainties of the data need to be well documented. This report documents the error using 0.062" and 0.040" diameter Inconel-sheathed, Type-K thermocouples mounted on cylindrical surfaces (inside of a shroud, outside and inside of a mock test unit). After an initial transient, the thermocouple bias errors typically range only about ±1-2% of the reading in K. After all of the uncertainty sources have been included, the total uncertainty to 95% confidence, for shroud or test unit TCs in abnormal thermal environments, is about ±2% of the reading in K, lower than the ±3% typically used for flat shrouds. Recommendations are provided in Section 6 to facilitate interpretation and use of the results.

  13. A multibiometric face recognition fusion framework with template protection

    Science.gov (United States)

    Chindaro, S.; Deravi, F.; Zhou, Z.; Ng, M. W. R.; Castro Neves, M.; Zhou, X.; Kelkboom, E.

    2010-04-01

    In this work we present a multibiometric face recognition framework based on combining information from 2D with 3D facial features. The 3D biometrics channel is protected by a privacy enhancing technology, which uses error correcting codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while at the same time enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric systems. The results show that the proposed hybrid system incorporating template protection matches, and in some cases exceeds, the performance of the corresponding unprotected equivalents, in addition to offering privacy protection.

  14. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  15. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by the imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by a MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ~ TOL). We quantify fluence errors for two cases: (i) Δx₀ >> σ (unrestricted normal distribution) and (ii) Δx₀ << σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where
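
    A minimal simulation sketch of the RLP error model is given below: leaf-positional errors are drawn from a truncated normal distribution and the average relative fluence error is estimated as mean|Δx|/ALPO. The numerical values, and the reading of ALPO as an average leaf-pair opening, are illustrative assumptions rather than the paper's parameters.

    ```python
    import numpy as np

    def truncated_normal(sigma, cutoff, size, rng):
        """Draw leaf-positional errors from N(0, sigma^2) truncated at
        +/- cutoff (the leaf tolerance TOL): out-of-range values are redrawn."""
        x = rng.normal(0.0, sigma, size)
        while True:
            bad = np.abs(x) > cutoff
            if not bad.any():
                return x
            x[bad] = rng.normal(0.0, sigma, bad.sum())

    rng = np.random.default_rng(0)
    sigma, tol = 0.5, 2.0          # mm; illustrative RLP parameters
    alpo = 20.0                    # average leaf-pair opening, mm (assumed)
    dx = truncated_normal(sigma, tol, 100_000, rng)
    # The relative fluence error of a leaf pair scales with the positional
    # error divided by the leaf-pair opening, hence the sigma/ALPO scaling.
    print(np.abs(dx).mean() / alpo)
    ```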

  16. Expert systems for protective monitoring of facilities

    International Nuclear Information System (INIS)

    Carr, K.R.

    1987-01-01

    In complex plants, the possibility of serious operator error always exists to some extent, but, this can be especially true during an experiment or some other unusual exercise. Possible contributing factors to operational error include personnel fatigue, misunderstanding in communication, mistakes in executing orders, uncertainty about the delegated authority, pressure to meet a demanding schedule, and a lack of understanding of the possible consequences of deliberate violations of the facility's established operating procedures. Authoritative reports indicate that most of these factors were involved in the disastrous Russian Chernobyl-4 nuclear reactor accident in April 1986, which, ironically, occurred when a safety experiment was being conducted. Given the computer hardware and software now available for implementing expert systems together with integrated signal monitoring and communications, plant protection could be enhanced by an expert system with extended features to monitor the plant. The system could require information from the operators on a rigidly enforced schedule and automatically log in and report on a scheduled time basis to authorities at a central remote site during periods of safe operation. Additionally, the system could warn an operator or automatically shut down the plant in case of dangerous conditions, while simultaneously notifying independent, responsible, off-site personnel of the action taken. This approach would provide protection beyond that provided by typical facility scram circuits. This paper presents such an approach to implementing an expert system for plant protection, together with specific hardware and software configurations. The Chernobyl accident is used as the basis of discussion

  17. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶-10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹-10² (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻²-10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  18. Random Vibration of Space Shuttle Weather Protection Systems

    Directory of Open Access Journals (Sweden)

    Isaac Elishakoff

    1995-01-01

    Full Text Available The article deals with random vibrations of the space shuttle weather protection systems. The excitation model represents a fit to the measured experimental data. The cross-spectral density is given as a convex combination of three exponential functions. It is shown that for the type of loading considered, the Bernoulli-Euler theory cannot be used as a simplified approach, and the structure will be more properly modeled as a Timoshenko beam. Use of the simple Bernoulli-Euler theory may result in an error of about 50% in determining the mean-square value of the bending moment in the weather protection system.

  19. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and types of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate

  20. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  1. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

    The rotary axis is the reference component of rotational motion. Angular position error is the most critical factor impairing machining precision among the six degree-of-freedom (DOF) geometric errors of a rotary axis. In this paper, a method for measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched; the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change of the spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy over a large measurement range.
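
    The role of the 3×3 transformation matrices can be illustrated with a small sketch: compose the commanded and actual rotations about the axis and extract the residual angular position error. The angle values are illustrative assumptions, not measurements from the paper.

    ```python
    import numpy as np

    def rot_z(theta):
        """3x3 transformation matrix for a rotation about the rotary (z) axis."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    commanded = np.deg2rad(30.0)
    actual = commanded + np.deg2rad(0.001)   # assumed 3.6 arcsec error
    # The residual transform isolates the angular position error.
    residual = rot_z(actual) @ rot_z(commanded).T
    err = np.arctan2(residual[1, 0], residual[0, 0])
    print(np.rad2deg(err) * 3600.0, "arcsec")
    ```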

  2. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention for, and improvement of, the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  3. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
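
    The record's own correction is distribution-free; as a point of comparison, the classical binormal correction (which does assume normality) can be sketched in a few lines. The reliability value and example AUC below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def corrected_auc(auc_obs, reliability):
        """Correct an observed AUC for classical measurement error under a
        binormal, equal-variance model; reliability = var(true biomarker) /
        var(observed biomarker), e.g. an ICC from replicate measurements."""
        delta_obs = np.sqrt(2.0) * norm.ppf(auc_obs)    # observed effect size
        delta_true = delta_obs / np.sqrt(reliability)   # de-attenuated
        return norm.cdf(delta_true / np.sqrt(2.0))

    print(corrected_auc(0.65, reliability=0.6))   # 0.65 rises to about 0.69
    ```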

  4. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  5. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  6. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target's location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  7. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  8. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over study of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  9. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  10. Common Errors in Ecological Data Sharing

    Directory of Open Access Journals (Sweden)

    Robert B. Cook

    2013-04-01

    Full Text Available Objectives: (1) to identify common errors in data organization and metadata completeness that would preclude a “reader” from being able to interpret and re-use the data for a new purpose; and (2) to develop a set of best practices derived from these common errors that would guide researchers in creating more usable data products that could be readily shared, interpreted, and used. Methods: We used directed qualitative content analysis to assess and categorize data and metadata errors identified by peer reviewers of data papers published in the Ecological Society of America’s (ESA) Ecological Archives. Descriptive statistics provided the relative frequency of the errors identified during the peer review process. Results: There were seven overarching error categories: Collection & Organization, Assure, Description, Preserve, Discover, Integrate, and Analyze/Visualize. These categories represent errors researchers regularly make at each stage of the Data Life Cycle. Collection & Organization and Description errors were some of the most common errors, both of which occurred in over 90% of the papers. Conclusions: Publishing data for sharing and reuse is error prone, and each stage of the Data Life Cycle presents opportunities for mistakes. The most common errors occurred when the researcher did not provide adequate metadata to enable others to interpret and potentially re-use the data. Fortunately, there are ways to minimize these mistakes through carefully recording all details about study context, data collection, QA/QC, and analytical procedures from the beginning of a research project and then including this descriptive information in the metadata.

  11. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
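
    A minimal sketch of the first element of that methodology, variance approximation by Taylor series expansion, is given below for a single item; in the exercise, item variances from uncorrelated primary error sources would then be cumulated over the material balance area. All numerical values are illustrative assumptions.

    ```python
    import numpy as np

    # First-order Taylor-series variance propagation for the 235U mass of
    # one item: m = w * c * e (weight x uranium concentration x enrichment).
    w, c, e = 100.0, 0.85, 0.93                        # illustrative values
    var_w, var_c, var_e = 0.01**2, 0.002**2, 0.001**2  # measurement variances

    m = w * c * e
    dw, dc, de = c * e, w * e, w * c                   # partial derivatives
    var_m = dw**2 * var_w + dc**2 * var_c + de**2 * var_e
    # Item variances from uncorrelated error sources would then be summed
    # over all transactions in the material balance area to obtain a LEID.
    print(m, np.sqrt(var_m))                           # mass and 1-sigma error
    ```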

  12. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveal a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allow for correct measure of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  13. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  14. Focusing errors in radiography - how they can be recognized and avoided. 2. rev. ed.

    International Nuclear Information System (INIS)

    Zimmer, E.A.; Zimmer-Brossy, M.

    1979-01-01

    The importance for daily practice of the problem of recognizing and judging focusing errors has caused the authors to give this systematic and abundantly illustrated account of the most frequent focusing errors, for as yet no such book has been published either in Germany or abroad. To keep it as concise and handy as possible, the authors have restricted themselves to the most important standard pictures and have omitted errors such as pictures blurred by breathing or movement, as well as under- and overexposed pictures, which are easy to recognize and avoid. By contrast, they describe in detail those characteristic points and lines of orientation that must be checked to verify the technical quality of an X-ray picture. Knowledge of the typical aspects of a correctly focused X-ray picture is a precondition for understanding incorrectly focused pictures, which have characteristic properties of their own. Proper interpretation of an incorrectly focused picture then permits detection of the cause of the focusing error, be it false centring or false positioning. Thus quick and targeted correction becomes possible. To avoid unnecessary repeat X-rays, which are self-prohibitive for reasons of radiation protection alone, each chapter contains at the end a remark stating in which cases the medical indication requires the repetition of an unserviceable X-ray. (orig./ORU) [de

  15. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  16. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.

  17. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with a computable leading order term. © de Gruyter 2011.
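
    To make the object of study concrete, the sketch below estimates the global weak error E[X_tau(T)] - E[X(T)] for a birth-death process by comparing tau-leap and exact (SSA) sample means. It is a brute-force Monte Carlo estimate, not the paper's computable a posteriori expansion, and all rates and sample sizes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def tau_leap(x0, t_end, tau, birth, death):
        """Tau-leap simulation of a birth-death pure jump process: event
        counts per step are Poisson with mean propensity * tau."""
        x, t = x0, 0.0
        while t < t_end:
            x += rng.poisson(birth * x * tau) - rng.poisson(death * x * tau)
            x = max(x, 0)
            t += tau
        return x

    def ssa(x0, t_end, birth, death):
        """Exact simulation (Gillespie / Stochastic Simulation Algorithm)."""
        x, t = x0, 0.0
        while x > 0:
            a1, a2 = birth * x, death * x
            t += rng.exponential(1.0 / (a1 + a2))
            if t > t_end:
                break
            x += 1 if rng.uniform() < a1 / (a1 + a2) else -1
        return x

    # Monte Carlo estimate of the global weak error E[X_tau(T)] - E[X(T)].
    n = 10_000
    leap = np.mean([tau_leap(10, 1.0, 0.1, 2.0, 1.0) for _ in range(n)])
    exact = np.mean([ssa(10, 1.0, 2.0, 1.0) for _ in range(n)])
    print(leap - exact)
    ```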

  18. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  20. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and systems is incorporated. Often drug errors that occur cannot be reversed. The best way to ‘treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, under-dosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and ‘just' culture in order to prevent medication errors. The newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  1. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  2. Different grades MEMS accelerometers error characteristics

    Science.gov (United States)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents the calibration of two MEMS accelerometers of different price and quality grades and discusses the different types of accelerometer errors. Calibration for error determination is performed using reference centrifuge measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different, and that in both cases it is not possible to use the simple calibration methods presented in the literature.
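
    A minimal sketch of a reference-centrifuge calibration is given below: the applied acceleration at each angular rate is omega²·r, and a linear fit of measured against reference acceleration yields the scale-factor error and bias. All numbers are illustrative assumptions, not the paper's measurements.

    ```python
    import numpy as np

    # Reference-centrifuge calibration: the applied acceleration at angular
    # rate omega and arm radius r is a_ref = omega**2 * r.
    r = 0.5                                           # arm radius, m (assumed)
    omega = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # rad/s set points
    a_ref = omega**2 * r
    rng = np.random.default_rng(3)
    # Simulated sensor output with 2% scale-factor error and 0.15 m/s^2 bias.
    a_meas = 1.02 * a_ref + 0.15 + rng.normal(0.0, 0.05, a_ref.size)

    # Fit a_meas = k * a_ref + b: k - 1 is the scale-factor error, b the bias.
    k, b = np.polyfit(a_ref, a_meas, 1)
    print(k - 1.0, b)
    ```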

  3. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  4. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  5. Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving

    Science.gov (United States)

    Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2012-01-01

    People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…

  6. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that looks to physicalist indeterminism for the basis of the possibili...

  7. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    Full Text Available This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulation experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed and, to satisfy the accuracy requirement for laser control points, a design index for each error source is put forward.
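
    As a rough sketch of how such an error propagation equation is exercised (the 1-sigma budgets below are assumed for illustration and are not the paper's values), near-nadir angular errors map to horizontal spot error approximately as range times angle, while the range error maps mainly to the vertical component:

        import numpy as np

        R = 600e3           # platform-to-spot range, m (assumed)
        sigma_pos = 0.05    # platform positioning error, m (assumed)
        sigma_att = 1.5e-5  # attitude measurement error, rad (assumed)
        sigma_pnt = 1.0e-5  # pointing angle measurement error, rad (assumed)
        sigma_rng = 0.10    # range measurement error, m (assumed)

        # Independent 1-sigma contributions combine in quadrature
        horizontal = np.sqrt(sigma_pos**2 + (R * sigma_att)**2 + (R * sigma_pnt)**2)
        vertical = np.sqrt(sigma_pos**2 + sigma_rng**2)
        print(f"horizontal ~ {horizontal:.2f} m, vertical ~ {vertical:.2f} m")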

  8. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  9. Improving Type Error Messages in OCaml

    OpenAIRE

    Charguéraud , Arthur

    2015-01-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise ...

  10. Spectrum of diagnostic errors in radiology

    OpenAIRE

    Pinto, Antonio; Brunese, Luca

    2010-01-01

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff’s complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors ...

  11. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  12. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as the sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
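
    A hedged sketch of the current-trend conditional power quantity through which such criteria are usually expressed (a one-sided, one-sample z-test is assumed here; this is not the paper's exact formulation):

        import numpy as np
        from scipy.stats import norm

        def conditional_power(z1, n1, n, alpha=0.025):
            """Conditional power at the interim under the current trend.

            z1    : interim z-statistic from the first n1 observations
            n1, n : interim and planned final sample sizes
            Assumes normal observations and carries the interim estimate
            forward as the true effect.
            """
            za = norm.ppf(1 - alpha)
            drift = z1 * (n - n1) / np.sqrt(n1)   # expected remaining increment
            return 1 - norm.cdf((za * np.sqrt(n) - z1 * np.sqrt(n1) - drift)
                                / np.sqrt(n - n1))

        # A promising interim (conditional power > 50%), the regime in which
        # raising the sample size is known not to inflate the type I error rate
        print(f"CP = {conditional_power(z1=1.5, n1=50, n=100):.2f}")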

  13. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

    Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether electronic media create additional barriers to error reporting and, if so, what practical steps all hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  14. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
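
    The bias that the method corrects is easy to reproduce in simulation. The sketch below (invented data-generating values; statsmodels' off-the-shelf QuantReg, not the authors' corrected estimator) shows the naive median-regression slope attenuated toward zero when the covariate is observed with error:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 5000
        x = rng.normal(0, 1, n)            # true covariate
        w = x + rng.normal(0, 0.8, n)      # covariate observed with error
        y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

        def median_slope(cov):
            # Median (q = 0.5) regression of y on an intercept and cov
            res = sm.QuantReg(y, sm.add_constant(cov)).fit(q=0.5)
            return res.params[1]

        print(f"slope with true x: {median_slope(x):.3f}")   # ~2.0
        print(f"slope with noisy w: {median_slope(w):.3f}")  # attenuated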

  15. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million, and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  16. Human errors, countermeasures for their prevention and evaluation

    International Nuclear Information System (INIS)

    Kohda, Takehisa; Inoue, Koichi

    1992-01-01

    Accidents originating in human error have continued to occur, as in the recent large accidents at TMI and Chernobyl. The proportion of accidents originating in human error is unexpectedly high; hardware reliability and safety keep improving, whereas comparable improvement in human reliability cannot be expected. Human errors arise from the difference between the function required of operators and the function they actually accomplish, and the results exert adverse effects on systems. Human errors are classified into design errors, manufacturing errors, operation errors, maintenance errors, checkup errors and general handling errors. In terms of behavior, they are classified into forgetting to act, failing to act, doing what must not be done, acting in the wrong order and acting at the improper time. The factors in human error occurrence are circumstantial factors, personal factors and stress factors. As methods for analyzing and evaluating human errors, system engineering methods such as probabilistic risk assessment are used, along with the technique for human error rate prediction (THERP), the method for human cognitive reliability, the confusion matrix and SLIM-MAUD. (K.I.)

  17. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessment of change detection accuracy: (1) single date error matrices and (2) binary change/no change error matrices. The third, less common form of reporting, is the transition error matrix. This paper discusses the relation between these matrices.

  18. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be evaluated periodically and checked against the ITV for consistency, as the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method is developed with a focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance and (4) confirming the evaluation method by simulation. In addition, the method is demonstrated by applying it to real data. (author)
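
    One conventional way to separate the two components, offered here only as an illustrative stand-in for the paper's model, is a components-of-variance decomposition of operator-inspector paired differences grouped by inspection period (all values invented; the clamp at zero is a crude substitute for the paper's always-positive construction):

        import numpy as np

        # Paired differences (%) grouped by inspection period: within-period
        # scatter reflects random error; shifts of the period means reflect a
        # systematic component such as a calibration change.
        periods = [np.array([0.12, 0.05, 0.09, 0.14]),
                   np.array([-0.21, -0.15, -0.18]),
                   np.array([0.02, -0.03, 0.06, 0.01, 0.04])]

        # Random error variance: pooled within-period variance
        dof = sum(len(p) - 1 for p in periods)
        var_random = sum(((p - p.mean())**2).sum() for p in periods) / dof

        # Systematic error variance: variance of period means in excess of the
        # random contribution (one-way ANOVA variance-components estimate)
        means = np.array([p.mean() for p in periods])
        n_bar = np.mean([len(p) for p in periods])
        var_systematic = max(means.var(ddof=1) - var_random / n_bar, 0.0)
        print(f"random ~ {var_random:.5f}, systematic ~ {var_systematic:.5f}")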

  19. A definitive criterion for cathodic protection

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, Roger [Cathodic Protection Network International Ltd., Reading (United Kingdom)

    2009-07-01

    The corrosion reaction is defined using the Pourbaix Diagram and includes consideration of the pH, temperature, pressure, nobility of the metal and conductivity of the electrolyte. The passive zone can be established in a laboratory by creating a closed circuit condition in which the voltages can be measured. Natural corrosion cells occurring in simple conditions can be evaluated for the purpose of monitoring the performance of cathodic protection. Metal pipelines are complex networks of conductors submerged in electrolyte of infinitely variable qualities. The present method used to ascertain the effectiveness of cathodic protection has many inherent errors and results in costly and unpredictable corrosion failures. An electrode has been devised to define the exact electrical status of the corrosion reaction at its location. The design allows a closed circuit measurement of the corrosion current that can determine whether or not corrosion has been stopped by cathodic protection. This has allowed the development of software that can calculate the condition and corrosion status throughout a network of pipelines, using electrical circuit analysis common in the electronics industry. (author)

  20. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English and the system converts it to computer languages. It is employed by several large corporations.

  1. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
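
    For intuition, the classical correction in this setting divides the naive OLS slope by the reliability ratio lambda = var(x) / (var(x) + var(u)); a small simulation sketch (true effect and error variance assumed known for illustration):

        import numpy as np

        rng = np.random.default_rng(7)
        n = 20_000
        x = rng.normal(0, 1, n)          # true regressor
        w = x + rng.normal(0, 0.5, n)    # observed regressor, var(u) = 0.25
        y = 3.0 + 1.5 * x + rng.normal(0, 1, n)

        # Naive slope on the error-prone regressor is attenuated by lambda
        slope_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
        lam = (np.var(w, ddof=1) - 0.25) / np.var(w, ddof=1)  # var(u) known
        print(f"naive = {slope_naive:.3f}, corrected = {slope_naive / lam:.3f}")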

  2. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    … on the basis of reliability profiles for bridges without human errors are extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated for reinforced concrete bridges.

  3. Interplay of Coulomb interactions and disorder in three-dimensional quadratic band crossings without time-reversal symmetry and with unequal masses for conduction and valence bands

    Science.gov (United States)

    Mandal, Ipsita; Nandkishore, Rahul M.

    2018-03-01

    Coulomb interactions famously drive three-dimensional quadratic band crossing semimetals into a non-Fermi liquid phase of matter. In a previous work [Nandkishore and Parameswaran, Phys. Rev. B 95, 205106 (2017), 10.1103/PhysRevB.95.205106], the effect of disorder on this non-Fermi liquid phase was investigated, assuming that the band structure was isotropic, assuming that the conduction and valence bands had the same band mass, and assuming that the disorder preserved exact time-reversal symmetry and statistical isotropy. It was shown that the non-Fermi liquid fixed point is unstable to disorder and that a runaway flow to strong disorder occurs. In this paper, we extend that analysis by relaxing the assumption of time-reversal symmetry and allowing the electron and hole masses to differ (but continuing to assume isotropy of the low energy band structure). We first incorporate time-reversal symmetry breaking disorder and demonstrate that no new fixed points appear. Moreover, while the system continues to flow to strong disorder, time-reversal-symmetry-breaking disorder grows asymptotically more slowly than time-reversal-symmetry-preserving disorder, which we therefore expect should dominate the strong-coupling phase. We then allow for unequal electron and hole masses. We show that whereas asymmetry in the two masses is irrelevant in the clean system, it is relevant in the presence of disorder, such that the 'effective masses' of the conduction and valence bands should become sharply distinct in the low-energy limit. We calculate the RG flow equations for the disordered interacting system with unequal band masses and demonstrate that the problem exhibits a runaway flow to strong disorder. Along the runaway flow, time-reversal-symmetry-preserving disorder grows asymptotically more rapidly than both time-reversal-symmetry-breaking disorder and the Coulomb interaction.

  4. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  5. Nuclear emergencies and protective actions

    International Nuclear Information System (INIS)

    Sjoeblom, Klaus

    1995-01-01

    Although technical improvements have increased the safety of new and old nuclear power plants, multiple simultaneous component failures and/or human errors remain improbable but possible. Both the plant (on-site) and the nearby area (off-site) have emergency plans; rescue service authorities are responsible for the off-site plans. The main protective actions are sheltering, evacuation and iodine ingestion. The Loviisa off-site emergency plan assumes that the majority of the population takes care of its own protective actions; rescue service authorities can then concentrate on coordination and on those people who need help. To carry out the protective actions in a timely and effective manner, people should have information on radiation risk and emergency planning. In case of a potential accident, the local population should follow the rescue service information and know how to shelter and how to evacuate themselves. Though there are many stockpiles of iodine pellets in the area, the rescue service authorities recommend that each household purchase iodine pellets for its own needs. The utility and the rescue service authorities have distributed information brochures to all homes within 30 km of the Loviisa NPP since 1990. This brochure gives information on radiation and on protective actions in case of an accident. Because the brochures might not stay available, the local telephone book also contains this information.

  6. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.), as typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error
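
    For large particles the diffraction shortcut reduces to the encircled-energy formula of the Airy pattern; a minimal sketch (size parameter and truncation angle chosen only for illustration) estimates the fraction of diffracted light lost below the truncation angle:

        import numpy as np
        from scipy.special import j0, j1

        def forward_truncation_fraction(x, theta_t_deg):
            """Fraction of diffracted light scattered below the truncation angle.

            Uses the Airy encircled-energy formula E(v) = 1 - J0(v)^2 - J1(v)^2
            with v = x * theta, where x is the size parameter (pi * d / lambda).
            """
            v = x * np.radians(theta_t_deg)
            return 1.0 - j0(v)**2 - j1(v)**2

        # A 5-micron particle at 550 nm with a 7-degree truncation angle
        x = np.pi * 5.0 / 0.55
        print(f"lost diffraction fraction: {forward_truncation_fraction(x, 7.0):.2f}")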

  7. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  8. Addis Ababa University

    African Journals Online (AJOL)

    Over the past six decades, the theory of uniformly ... synthesis theory. Probably the first work on unequally spaced arrays was carried out by Unz [8], who developed a matrix formulation to obtain the current distribution necessary to generate a prescribed radiation ... minimize the mean-squared error between the.
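
    The least-squares idea behind such a matrix formulation can be sketched as follows (element positions, beam width and angular sampling are invented; this is an illustration, not the paper's method): solve for the element currents that minimize the mean-squared error between the achieved and a prescribed pattern.

        import numpy as np

        k = 2 * np.pi                                       # wavenumber, lambda = 1
        d = np.array([0.0, 0.37, 0.91, 1.60, 2.18, 3.05])   # unequal element positions

        theta = np.linspace(-np.pi / 2, np.pi / 2, 181)     # pattern sample angles
        A = np.exp(1j * k * np.outer(np.sin(theta), d))     # steering matrix

        # Prescribed pattern: a narrow broadside beam, zero elsewhere
        p = np.where(np.abs(theta) < np.radians(6), 1.0, 0.0).astype(complex)

        # Currents minimizing the mean-squared pattern error ||A cur - p||^2
        cur, *_ = np.linalg.lstsq(A, p, rcond=None)
        achieved = np.abs(A @ cur)
        sidelobe = achieved[np.abs(theta) > np.radians(10)].max() / achieved.max()
        print(f"peak sidelobe level: {20 * np.log10(sidelobe):.1f} dB")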

  9. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when related to a game's core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  10. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, thus a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit-swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
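
    Claim (i) can be checked numerically in a few lines. The sketch below (illustrative, not the paper's code) builds the nine operators X^a Z^b from the ternary shift and phase matrices and expands the bit-swap error exchanging |0> and |1> in that basis:

        import numpy as np

        w = np.exp(2j * np.pi / 3)                 # cube root of unity
        X = np.roll(np.eye(3), 1, axis=0)          # shift: |k> -> |k+1 mod 3>
        Z = np.diag([1, w, w**2])                  # phase: |k> -> w^k |k>

        # The nine products X^a Z^b form a basis of all 3x3 matrices
        basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                 for a in range(3) for b in range(3)]

        # Pairwise bit-swap error exchanging |0> and |1>, leaving |2> fixed
        S01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)

        # Solve S01 = sum_ab c_ab X^a Z^b for the coefficients c_ab
        M = np.column_stack([B.reshape(-1) for B in basis])
        c = np.linalg.solve(M, S01.reshape(-1))
        assert np.allclose(sum(ci * B for ci, B in zip(c, basis)), S01)
        print(np.round(c, 3))                      # mixture of shift and phase terms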

  11. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user's position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational, and two more global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user's position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement and the satellite geometry. The user's position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. This error is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and the DOP coefficients, and of the changes in the DOP coefficients on different days, are presented in this paper.
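
    The geometry half of that decomposition can be sketched directly (the satellite coordinates below are invented; the linearized geometry matrix with a clock-bias column is the standard construction):

        import numpy as np

        def dop(sats, user=np.zeros(3)):
            """GDOP, PDOP and TDOP from satellite geometry (N >= 4 satellites)."""
            los = sats - user
            unit = los / np.linalg.norm(los, axis=1, keepdims=True)
            G = np.hstack([unit, np.ones((len(unit), 1))])   # clock-bias column
            Q = np.linalg.inv(G.T @ G)
            return (np.sqrt(np.trace(Q)),            # GDOP
                    np.sqrt(np.trace(Q[:3, :3])),    # PDOP
                    np.sqrt(Q[3, 3]))                # TDOP

        sats = np.array([[15e6, 10e6, 12e6], [-14e6, 9e6, 13e6],
                         [2e6, -16e6, 11e6], [1e6, 2e6, 20e6]])
        gdop, pdop, tdop = dop(sats)
        # 1-sigma position error ~ UERE * PDOP (here with an assumed 5 m UERE)
        print(f"PDOP = {pdop:.2f}, position error ~ {5.0 * pdop:.1f} m")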

  12. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process as well as extending the discussion on this theme. Key words: references, periodicals, research, bibliometrics.

  13. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  14. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field-line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors.

  15. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
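
    The trade-off is easy to reproduce. The sketch below (single precision is chosen deliberately so rounding becomes visible) integrates y' = y on [0, 1]; the error first falls roughly as O(h), then rises again once accumulated rounding dominates:

        import numpy as np

        def euler_float32(n_steps):
            """Forward Euler for y' = y, y(0) = 1, carried out in float32."""
            h = np.float32(1.0 / n_steps)
            y = np.float32(1.0)
            for _ in range(n_steps):
                y = y + h * y       # each step rounded to single precision
            return float(y)

        exact = float(np.exp(1.0))
        for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
            print(f"n = {n:>9}: |error| = {abs(euler_float32(n) - exact):.2e}")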

  16. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewar, Campbell, and Crano. (Author)

  17. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  18. Design guides for cell atmosphere controls, utilities and fire protection

    International Nuclear Information System (INIS)

    Hill, A.J. Jr.; Peishel, F.L.; Slattery, E.F.

    1981-01-01

    Facilities for handling radioactive and toxic materials must be designed not only for efficient operation, but also for protection of the operating personnel and the public. The ventilation system is of primary importance in maintaining containment of any airborne radioactivity. The type, number, and location of in-cell services must be adequate for planned operations, but also must allow flexibility to accommodate expansion in the scope of operations or changes in programs. Fire protection systems and operational controls are mandatory to maintain containment of radioactivity in the event of an operating error or process accident that may result in a fire

  19. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  20. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also have an impact on the severity of a disturbance, e.g. if they disable safety-related equipment. Especially common cause and other dependent failures of safety systems may contribute significantly to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports covering two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states, inter alia as regards the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected; and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  1. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested.

  2. Learning from Errors: Effects of Teachers Training on Students' Attitudes towards and Their Individual Use of Errors

    Science.gov (United States)

    Rach, Stefanie; Ufer, Stefan; Heinze, Aiso

    2013-01-01

    Constructive error handling is considered an important factor for individual learning processes. In a quasi-experimental study with Grades 6 to 9 students, we investigate effects on students' attitudes towards errors as learning opportunities in two conditions: an error-tolerant classroom culture, and the first condition along with additional…

  3. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
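
    A minimal numerical rendering of the quadrature rule and its consequence (two invented measurements of the same quantity sharing a fully correlated normalization error):

        import numpy as np

        y = np.array([10.1, 9.7])        # two measurements (invented)
        stat = np.array([0.20, 0.30])    # uncorrelated statistical errors
        syst = 0.25                      # common systematic error

        # Common and uncorrelated errors add in quadrature on the diagonal;
        # the common error alone fills the off-diagonal elements.
        V = np.diag(stat**2) + syst**2 * np.ones((2, 2))

        # Best linear unbiased average using the full covariance matrix
        w = np.linalg.solve(V, np.ones(2))
        w /= w.sum()
        mean, sigma = w @ y, np.sqrt(w @ V @ w)
        print(f"combined: {mean:.3f} +/- {sigma:.3f}")
        # sigma can never fall below the common error (0.25): repetition
        # reduces the statistical part only, as stated above.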

  4. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
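
    A sketch of the standard two-state oscillator error model that such formulations build on (phase and fractional-frequency states; the noise spectral densities below are illustrative, not taken from the paper):

        import numpy as np

        tau, n = 1.0, 10_000             # sampling interval (s), number of steps
        q1, q2 = 1e-22, 1e-26            # white-FM and random-walk-FM densities (assumed)
        F = np.array([[1.0, tau], [0.0, 1.0]])
        # Discrete process-noise covariance for the two-state clock model
        Q = np.array([[q1 * tau + q2 * tau**3 / 3, q2 * tau**2 / 2],
                      [q2 * tau**2 / 2,            q2 * tau]])
        L = np.linalg.cholesky(Q)

        rng = np.random.default_rng(42)
        x = np.zeros(2)                  # [phase error (s), frequency error]
        for _ in range(n):
            x = F @ x + L @ rng.standard_normal(2)   # propagate, add clock noise
        print(f"phase error after {n} s: {x[0]:.3e} s")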

  5. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  6. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  7. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  8. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors due to different causes may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately these types of errors are not rare, and they have preventable causes, including a lack of suitable and sufficient training and practice for medical students and failures in the medical students' educational curriculum. Conclusion: Some important causes are discussed here because they can be tremendous, and we found that most of them are easily preventable. If prescribers are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  9. Implications of spatial data variations for protected areas management: an example from East Africa.

    Science.gov (United States)

    Dowhaniuk, Nicholas; Hartter, Joel; Ryan, Sadie J

    2014-09-01

    Geographic information systems and remote sensing technologies have become an important tool for visualizing conservation management and developing solutions to problems associated with conservation. When multiple organizations separately develop spatial data representations of protected areas, implicit error arises due to variation between data sets. We used boundary data produced by three conservation organizations (International Union for the Conservation of Nature, World Resource Institute, and Uganda Wildlife Authority) for seven Ugandan parks to study variation in the size represented and the location of boundaries. We found variation in the extent of overlapping total area encompassed by the three data sources, ranging from minuscule (0.4%) differences to quite large ones (9.0%). To underscore how protected area boundary discrepancies may have implications for protected area management, we used a landcover classification defining crop, shrub, forest, savanna, and grassland. The total area in the different landcover classes varied most in smaller protected areas (those less than 329 km²), with forest and cropland area estimates varying by up to 65%. The discrepancies introduced by boundary errors could, in this hypothetical case, generate erroneous findings and could have a significant impact on conservation, such as local-scale management for encroachment and larger-scale assessments of deforestation.

  10. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  11. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that looks to physicalist indeterminism for the basis of the possibility of human freedom cannot necessarily be accused of incurring them.

  12. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...
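
    A toy rendering of the book's theme (not taken from the book): gradient descent on a strongly convex quadratic, with a bounded error injected into every gradient evaluation, converges to a neighbourhood of the minimizer whose radius tracks the error bound:

        import numpy as np

        rng = np.random.default_rng(3)

        def noisy_grad(x, delta):
            """Exact gradient of f(x) = ||x||^2 / 2 plus an error of norm delta."""
            e = rng.standard_normal(x.size)
            return x + delta * e / np.linalg.norm(e)

        for delta in [1e-1, 1e-3, 1e-6]:
            x = np.ones(10)
            for _ in range(500):
                x = x - 0.5 * noisy_grad(x, delta)   # fixed stepsize
            print(f"delta = {delta:.0e}: ||x - x*|| = {np.linalg.norm(x):.2e}")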

  13. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Full Text Available Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails costing one's own resources to bring others resources) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  14. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  15. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an...

  16. Breastfeeding Promotion, Support and Protection: Review of Six Country Programmes

    Directory of Open Access Journals (Sweden)

    Christiane Rudert

    2012-08-01

    Full Text Available Reviews of programmes in Bangladesh, Benin, the Philippines, Sri Lanka, Uganda, and Uzbekistan sought to identify health policy and programmatic factors that influenced breastfeeding practices during a 10 to 15 year period. Exclusive breastfeeding rates and trends were analysed in six countries in general and from an equity perspective in two of them. Success factors and challenges were identified in countries with improved and stagnated rates respectively. The disaggregated data analysis showed that progress may be unequal in population subgroups, but if appropriately designed and implemented, a programme can become a “health equalizer” and eliminate discrepancies among different subgroups. Success requires commitment, supportive policies, and comprehensiveness of programmes for breastfeeding promotion, protection and support. Community-based promotion and support was identified as a particularly important component. Although health workers’ training on infant feeding support and counselling was prioritized, further improvement of interpersonal counselling and problem solving skills is needed. More attention is advised for pre-service education, including a stronger focus on clinical practice, to ensure knowledge and skills among all health workers. Large-scale communication activities played a significant role, but essential steps were often underemphasized, including identifying social norms and influencing factors, ensuring community participation, and testing of approaches and messages.

  17. Breastfeeding promotion, support and protection: review of six country programmes.

    Science.gov (United States)

    Mangasaryan, Nune; Martin, Luann; Brownlee, Ann; Ogunlade, Adebayo; Rudert, Christiane; Cai, Xiaodong

    2012-08-01

    Reviews of programmes in Bangladesh, Benin, the Philippines, Sri Lanka, Uganda, and Uzbekistan sought to identify health policy and programmatic factors that influenced breastfeeding practices during a 10 to 15 year period. Exclusive breastfeeding rates and trends were analysed in six countries in general and from an equity perspective in two of them. Success factors and challenges were identified in countries with improved and stagnated rates respectively. The disaggregated data analysis showed that progress may be unequal in population subgroups, but if appropriately designed and implemented, a programme can become a "health equalizer" and eliminate discrepancies among different subgroups. Success requires commitment, supportive policies, and comprehensiveness of programmes for breastfeeding promotion, protection and support. Community-based promotion and support was identified as a particularly important component. Although health workers' training on infant feeding support and counselling was prioritized, further improvement of interpersonal counselling and problem solving skills is needed. More attention is advised for pre-service education, including a stronger focus on clinical practice, to ensure knowledge and skills among all health workers. Large-scale communication activities played a significant role, but essential steps were often underemphasized, including identifying social norms and influencing factors, ensuring community participation, and testing of approaches and messages.

  18. Breastfeeding Promotion, Support and Protection: Review of Six Country Programmes

    Science.gov (United States)

    Mangasaryan, Nune; Martin, Luann; Brownlee, Ann; Ogunlade, Adebayo; Rudert, Christiane; Cai, Xiaodong

    2012-01-01

    Reviews of programmes in Bangladesh, Benin, the Philippines, Sri Lanka, Uganda, and Uzbekistan sought to identify health policy and programmatic factors that influenced breastfeeding practices during a 10 to 15 year period. Exclusive breastfeeding rates and trends were analysed in six countries in general and from an equity perspective in two of them. Success factors and challenges were identified in countries with improved and stagnated rates respectively. The disaggregated data analysis showed that progress may be unequal in population subgroups, but if appropriately designed and implemented, a programme can become a “health equalizer” and eliminate discrepancies among different subgroups. Success requires commitment, supportive policies, and comprehensiveness of programmes for breastfeeding promotion, protection and support. Community-based promotion and support was identified as a particularly important component. Although health workers’ training on infant feeding support and counselling was prioritized, further improvement of interpersonal counselling and problem solving skills is needed. More attention is advised for pre-service education, including a stronger focus on clinical practice, to ensure knowledge and skills among all health workers. Large-scale communication activities played a significant role, but essential steps were often underemphasized, including identifying social norms and influencing factors, ensuring community participation, and testing of approaches and messages. PMID:23016128

  19. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  20. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Positive Beliefs about Errors as an Important Element of Adaptive Individual Dealing with Errors during Academic Learning

    Science.gov (United States)

    Tulis, Maria; Steuer, Gabriele; Dresel, Markus

    2018-01-01

    Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…

  2. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    ... dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both polar and Cartesian coordinate systems. The established error model contains the error sources of actuation errors/backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw ...

  3. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay dependent (EA), or asynchronous, error and delay independent (ES), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave-number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
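    To make the decomposition concrete, here is a schematic statement of the setting described above (the notation is ours, not taken from the abstract):

    ```latex
    % Model problem: one-dimensional convection--diffusion,
    \[
      \frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x}
        = \nu\,\frac{\partial^{2} u}{\partial x^{2}} .
    \]
    % The truncation error of the asynchronous scheme at wavenumber k splits into
    % a delay-independent (synchronous) part and a delay-dependent (asynchronous) part:
    \[
      E(k) = E_{S}(k) + E_{A}(k) .
    \]
    % Mitigation requires the asynchronous part to stay below the synchronous part;
    % the proxy-equation analysis identifies the critical wavenumber \lambda_c
    % below which this holds:
    \[
      E_{A}(k) \le E_{S}(k) \qquad \text{for } k \le \lambda_{c} .
    \]
    ```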

  4. The Unequal Structure of the German Education System: Structural Reasons for Educational Failures of Turkish Youth in Germany.

    Science.gov (United States)

    Fernandez-Kelly, Patricia

    The paper examines the educational experiences of Turkish youth in Germany, with special reference to statistical data from the Educational Report and the PISA surveys. The educational statistics for Germany show that, more than group characteristics such as social and cultural capital, structural and institutional factors (the multi-track school system with its selective mechanisms, education policy, Germany's negative context of reception, institutional discrimination, and the lack of an intercultural curriculum) may play a decisive role in hampering the educational and labor-market integration and social mobility of Turkish youth. This can be explained by a mix of factors: an education system that does not foster the educational progress of children from disadvantaged families; the high importance of school degrees for access to the vocational training system and the labor market; and direct and indirect institutional discrimination in education in Germany. This work thus suggests that the German education system remains deeply "unequal," "hierarchical" and "exclusive." It also demonstrates that the marginalized position of Turkish children in Germany means that the country of origin or an immigrant background is still a barrier to accessing education and the labor market in Germany.

  5. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: Using numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
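    Sensitivity and specificity here are the usual one-vs-rest quantities, computed per haplotype from the misclassification matrix. A minimal sketch with hypothetical data (not the KORA study or the authors' tooling):

    ```python
    import numpy as np

    def per_haplotype_sens_spec(true_haps, inferred_haps):
        """Treat each haplotype as a one-vs-rest classification problem and
        return {haplotype: (sensitivity, specificity)}."""
        true_haps = np.asarray(true_haps)
        inferred_haps = np.asarray(inferred_haps)
        out = {}
        for h in np.unique(true_haps):
            tp = np.sum((true_haps == h) & (inferred_haps == h))   # correctly kept
            fn = np.sum((true_haps == h) & (inferred_haps != h))   # missed
            tn = np.sum((true_haps != h) & (inferred_haps != h))   # correctly rejected
            fp = np.sum((true_haps != h) & (inferred_haps == h))   # falsely assigned
            out[h] = (tp / (tp + fn), tn / (tn + fp))
        return out

    # Hypothetical true vs. statistically reconstructed haplotype assignments:
    true_h     = ["AC", "AC", "AT", "GT", "GT", "GT", "AT", "AC"]
    inferred_h = ["AC", "AT", "AT", "GT", "GT", "AC", "AT", "AC"]
    for h, (sens, spec) in per_haplotype_sens_spec(true_h, inferred_h).items():
        print(f"haplotype {h}: sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```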

  6. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks

  7. Error in negligent offenses

    Directory of Open Access Journals (Sweden)

    Miguel Angel Muñoz García

    2011-12-01

    Full Text Available The theory of error in negligent offenses is a thorny and controversial topic in criminal law doctrine: there are in fact very few references to it, and no reasonable consensus has been reached. Starting from an analysis of the dogmatic structure of the negligent offense, in which the objective duty of care stands out as the element of the offense definition upon which error falls, and from the different doctrinal positions defending the applicability of mistake of fact (error de tipo) and mistake of law (error de prohibición), the viability of the latter is put forward on dogmatic and criminal-policy grounds, with the breach of the objective duty of care, as a consequence of the error, remaining an issue to be analyzed at the level of culpability.

  8. Targeting global protected area expansion for imperiled biodiversity.

    Science.gov (United States)

    Venter, Oscar; Fuller, Richard A; Segan, Daniel B; Carwardine, Josie; Brooks, Thomas; Butchart, Stuart H M; Di Marco, Moreno; Iwamura, Takuya; Joseph, Liana; O'Grady, Damien; Possingham, Hugh P; Rondinini, Carlo; Smith, Robert J; Venter, Michelle; Watson, James E M

    2014-06-01

    Governments have agreed to expand the global protected area network from 13% to 17% of the world's land surface by 2020 (Aichi target 11) and to prevent the further loss of known threatened species (Aichi target 12). These targets are interdependent, as protected areas can stem biodiversity loss when strategically located and effectively managed. However, the global protected area estate is currently biased toward locations that are cheap to protect and away from important areas for biodiversity. Here we use data on the distribution of protected areas and threatened terrestrial birds, mammals, and amphibians to assess current and possible future coverage of these species under the convention. We discover that 17% of the 4,118 threatened vertebrates are not found in a single protected area and that fully 85% are not adequately covered (i.e., to a level consistent with their likely persistence). Using systematic conservation planning, we show that expanding protected areas to reach 17% coverage by protecting the cheapest land, even if ecoregionally representative, would increase the number of threatened vertebrates covered by only 6%. However, the nonlinear relationship between the cost of acquiring land and species coverage means that fivefold more threatened vertebrates could be adequately covered for only 1.5 times the cost of the cheapest solution, if cost efficiency and threatened vertebrates are both incorporated into protected area decision making. These results are robust to known errors in the vertebrate range maps. The Convention on Biological Diversity targets may stimulate major expansion of the global protected area estate. If this expansion is to secure a future for imperiled species, new protected areas must be sited more strategically than is presently the case.

  9. Characteristics of medication errors with parenteral cytotoxic drugs

    OpenAIRE

    Fyhr, A; Akselsson, R

    2012-01-01

    Errors involving cytotoxic drugs have the potential of being fatal and should therefore be prevented. The objective of this article is to identify the characteristics of medication errors involving parenteral cytotoxic drugs in Sweden. A total of 60 cases reported to the national error reporting systems from 1996 to 2008 were reviewed. Classification was made to identify cytotoxic drugs involved, type of error, where the error occurred, error detection mechanism, and consequences for the patient...

  10. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β^{*}). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β^{*} values and proved fundamental to the understanding of emittance evolution during the energy ramp.

  11. Optics measurement algorithms and error analysis for the proton energy frontier

    Science.gov (United States)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to the understanding of emittance evolution during the energy ramp.

  12. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
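    The "memory" idea, describing the local truncation error as time-correlated random variables, can be sketched with an AR(1) stand-in. This is our simplification: the paper estimates the statistics from high-resolution near-initial information, which is omitted here, and the adjoint weights below are placeholders:

    ```python
    import numpy as np

    def correlated_truncation_error(n_steps, sigma, rho, rng):
        """Model the local truncation error series as an AR(1) process,
        eps_{n+1} = rho * eps_n + sqrt(1 - rho^2) * sigma * xi_n,
        a simple way to give the estimator 'memory' (rho = 0 recovers the
        memory-less, white-noise case)."""
        eps = np.empty(n_steps)
        eps[0] = sigma * rng.standard_normal()
        for n in range(1, n_steps):
            eps[n] = rho * eps[n - 1] + np.sqrt(1.0 - rho**2) * sigma * rng.standard_normal()
        return eps

    rng = np.random.default_rng(1)
    eps = correlated_truncation_error(1000, sigma=1e-4, rho=0.9, rng=rng)

    # A dual-weighted estimate of the error in a goal functional accumulates the
    # truncation errors weighted by adjoint sensitivities w_n (placeholder here):
    w = np.ones(1000)
    print("estimated goal error:", np.sum(w * eps))
    ```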

  13. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  14. Taming of the few-The unequal distribution of greenhouse gas emissions from personal travel in the UK

    International Nuclear Information System (INIS)

    Brand, Christian; Boardman, Brenda

    2008-01-01

    Greenhouse gas emissions from personal transport have risen steadily in the UK. Yet, surprisingly little is known about who exactly is contributing to the problem and the extent to which different groups of the population will be affected by any policy responses. This paper describes an innovative methodology and evaluation tool for profiling annual greenhouse gas emissions from personal travel across all modes of travel. A case study application of the methodology involving a major survey of UK residents provides an improved understanding of the extent to which individual and household travel activity patterns, choice of transport mode, geographical location, socio-economic and other factors impact on greenhouse gas emissions. Air and car travel dominate overall emissions. Conversely, land-based public transport accounts for a very small proportion of emissions on average. There is a highly unequal distribution of emissions amongst the population, independent of the mode of travel, location and unit of analysis. The top 10% of emitters are responsible for 43% of emissions and the bottom 10% for only 1%. Income, economic activity, age, household structure and car availability significantly influence emissions levels. Key policy implications of the results are discussed. The paper concludes by suggesting potential applications of the methodology and evaluation tool

  15. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and then the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.
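    A stripped-down sketch of the forecasting recipe: nearest-neighbour prediction in a delay embedding (standing in for the adaptive-clustering step; the neighbour count k is the quantity the paper tunes with particle swarm optimization), plus a running error-correction term (a crude stand-in for the fuzzy linear regression near the operating point). All names and data are invented:

    ```python
    import numpy as np

    def knn_predict(history, t, m, k):
        """Predict history[t] from the k past states most similar to the
        embedding history[t-m:t]."""
        query = history[t - m:t]
        times = np.arange(m, t)                          # past times with full embeddings
        states = np.array([history[i - m:i] for i in times])
        nearest = times[np.argsort(np.linalg.norm(states - query, axis=1))[:k]]
        return history[nearest].mean()

    def corrected_forecasts(history, m=5, k=10, mem=5):
        """Shift each raw k-NN prediction by the mean of the last `mem`
        forecast residuals (a simple dynamic error correction)."""
        raw, corrected, residuals = [], [], []
        for t in range(2 * m + k, len(history)):
            y_hat = knn_predict(history, t, m, k)
            bias = np.mean(residuals[-mem:]) if residuals else 0.0
            raw.append(y_hat)
            corrected.append(y_hat + bias)
            residuals.append(history[t] - y_hat)
        return np.array(raw), np.array(corrected)

    # Hypothetical chlorophyll-like series: seasonal cycle plus noise.
    rng = np.random.default_rng(2)
    series = np.sin(np.linspace(0, 20 * np.pi, 600)) + 0.1 * rng.standard_normal(600)
    raw, corr = corrected_forecasts(series)
    truth = series[2 * 5 + 10:]                          # aligns with m=5, k=10 defaults
    print("raw MAE:      ", np.mean(np.abs(truth - raw)))
    print("corrected MAE:", np.mean(np.abs(truth - corr)))
    ```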

  16. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  17. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  18. Robustness of a Neural Network Model for Power Peak Factor Estimation in Protection Systems

    International Nuclear Information System (INIS)

    Souza, Rose Mary G.P.; Moreira, Joao M.L.

    2006-01-01

    This work presents results of a robustness verification of artificial neural network correlations that improve the real-time prediction of the power peak factor for reactor protection systems. The input variables considered in the correlation are those available in the reactor protection systems, namely, the axial power differences obtained from measured ex-core detectors, and the position of control rods. The correlations, based on radial basis function (RBF) and multilayer perceptron (MLP) neural networks, estimate the power peak factor, without faulty signals, with average errors of 0.13%, 0.19% and 0.15%, and a maximum relative error of 2.35%. The robustness verification was performed for three different neural network correlations. The results show that they are robust against signal degradation, producing results with faulty signals with a maximum error of 6.90%. The average error associated with faulty signals for the MLP network is about half that of the RBF network, and the maximum error is about 1% smaller. These results demonstrate that the MLP neural network correlation is more robust than the RBF neural network correlation. The results also show that the input variables present redundant information. The axial power difference signals compensate for a faulty signal for the position of a given control rod, and improve the results by about 10%. The results show that the errors in the power peak factor estimation by these neural network correlations, even in faulty conditions, are smaller than those of current PWR schemes, which may have uncertainties as high as 8%. Considering the maximum relative error of 2.35%, these neural network correlations would allow decreasing the power peak factor safety margin by about 5%. Such a reduction could be used for operating the reactor with a higher power level or with more flexibility. The neural network correlation has to meet the requirements of high-integrity software that performs safety-grade actions. It is shown that the ...

  19. Errors generated with the use of rectangular collimation

    International Nuclear Information System (INIS)

    Parks, E.T.

    1991-01-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques

  20. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the number of errors found by different methods is significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
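    The string-matching accounting referred to above is typically Levenshtein (edit) distance computed by dynamic programming. Here is a minimal unit-weight version; the implementation details the paper says are often omitted (suspect markers, non-unit weights) are exactly what this sketch leaves out:

    ```python
    def levenshtein(ref, ocr):
        """Minimum number of single-character insertions, deletions and
        substitutions turning `ref` into `ocr` (unit weights)."""
        prev = list(range(len(ocr) + 1))        # distances for the empty ref prefix
        for i, r in enumerate(ref, 1):
            curr = [i]
            for j, o in enumerate(ocr, 1):
                curr.append(min(prev[j] + 1,              # deletion
                                curr[j - 1] + 1,          # insertion
                                prev[j - 1] + (r != o)))  # substitution (0 if match)
            prev = curr
        return prev[-1]

    # Hypothetical ground truth vs. OCR output:
    ref = "the quick brown fox"
    ocr = "tbe qu1ck brown fax"
    errors = levenshtein(ref, ocr)
    print(f"{errors} errors -> character accuracy {(1 - errors / len(ref)):.1%}")
    ```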

  1. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on the fact, now known and accepted by all but at that time not previously recognized and acknowledged only with great reluctance, that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then, and still disclose today, that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  2. Humanized Upbringing: A Turn to Power Relations and the Adult-Centered Paradigm in Institutions for the Protection of Children and Adolescents in Situation of Violation of Rights

    Directory of Open Access Journals (Sweden)

    Armando Zuluaga-Gómez

    2018-02-01

    Full Text Available This reflection is based on notes recorded in a field journal, and its objective is to systematize the experience acquired as an educator at the Diagnostic and Derivation Center, operated by the University of Antioquia through the Grow with Dignity Project (Zuluaga, 2015-2016), attached to the Unit of Childhood in the City of Medellín, Colombia, whose purpose is the immediate protection of children and adolescents in situations of violation of rights. We analyze the power relations established within the adult-centered paradigm, trace the genesis of child abuse to these relations, and show how practices normalized in the upbringing of children by their families of origin permeate the protection institutions created to carry out processes of restoration of rights. When unequal power relationships are instituted and legitimated within the family, the hegemony of adults over childhood is consolidated and children end up being objectified, thus normalizing their abuse. These relational paradigms can also be reproduced in educational institutions, including those aimed at protecting children in situations of violation of rights. We conclude with a proposal called humanized reeducation, intended for group leadership in protection institutions, a task entrusted to educators.

  3. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners' use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the 'error' in a pedagogically insightful way or to empirically investigate the occurrence of errors in either linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases: (1) Belgian (L1) learners' use of Chinese (L2) comparative sentences in written production ...

  4. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, classified as systematic and random errors. Statistical fundamentals: probability theory, population distributions (Bernoulli, Poisson, Gauss), the t-test distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² table.
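    As a worked example of the Poisson statistics and error propagation listed above (our numbers; σ_N = √N for a count N, and independent errors combined in quadrature):

    ```python
    import math

    def net_count_rate(gross_counts, t_gross, bkg_counts, t_bkg):
        """Net rate and its standard error for a counting measurement, assuming
        Poisson statistics (sigma_N = sqrt(N)) and independent gross and
        background counts, with errors added in quadrature."""
        r_gross = gross_counts / t_gross
        r_bkg = bkg_counts / t_bkg
        var = gross_counts / t_gross**2 + bkg_counts / t_bkg**2
        return r_gross - r_bkg, math.sqrt(var)

    # Hypothetical measurement: 4000 gross counts in 100 s, 900 background in 300 s.
    rate, sigma = net_count_rate(gross_counts=4000, t_gross=100.0,
                                 bkg_counts=900, t_bkg=300.0)
    print(f"net rate = {rate:.2f} +/- {sigma:.2f} counts/s")
    ```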

  5. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety ... and characteristics of verbal communication errors such as handover errors and errors during teamwork. Results Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  6. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  7. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the data collection methods, but the proportion of total accidents attributable to human error spanned the wide range of 20∼85%, averaging 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown, and the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which has not varied much. Consequently, the proportion of the total attributable to human error has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of the research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  8. Two heads are better than one: the association between condom decision-making and condom use errors and problems.

    Science.gov (United States)

    Crosby, R; Milhausen, R; Sanders, S A; Graham, C A; Yarber, W L

    2008-06-01

    This exploratory study compared the frequency of condom use errors and problems between men reporting that condom use for penile-vaginal sex was a mutual decision compared with men making the decision unilaterally. Nearly 2000 people completed a web-based questionnaire. A sub-sample of 660 men reporting that they last used a condom for penile-vaginal sex (within the past three months) was analysed. Nine condom use errors/problems were assessed. Multivariate analyses controlled for men's age, marital status, and level of experience using condoms. Men's unilateral decision-making was associated with increased odds of removing condoms before sex ended (adjusted odds ratio (AOR) 2.51, p = 0.002), breakage (AOR 3.90, p = 0.037), and slippage during withdrawal (AOR 2.04, p = 0.019). Men's self-reported level of experience using condoms was significantly associated with seven out of nine errors/problems, with those indicating less experience consistently reporting more errors/problems. Findings suggest that female involvement in the decision to use condoms for penile-vaginal sex may be partly protective against some condom errors/problems. Men's self-reported level of experience using condoms may be a useful indicator of the need for education designed to promote the correct use of condoms. Education programmes may benefit men by urging them to involve their female partner in condom use decisions.

  9. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of its implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for the high-speed SS7 links currently under consideration by standards bodies.
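    For orientation, the SS7 level-2 signal unit error rate monitor is a leaky bucket. The sketch below uses the increment/leak/threshold values commonly quoted for ITU-T Q.703 (count up 1 per errored signal unit, leak 1 per 256 received signal units, changeover at 64); treat these as illustrative defaults rather than the parameters analyzed in the paper:

    ```python
    import random

    def suerm(error_flags, threshold=64, leak_interval=256):
        """Signal unit error rate monitor (leaky bucket): the counter rises by 1
        per errored signal unit, leaks by 1 every `leak_interval` received signal
        units, and the link is taken out of service at `threshold`.
        `error_flags` is an iterable of booleans, one per received signal unit."""
        counter = received = 0
        for n, errored in enumerate(error_flags, 1):
            if errored:
                counter += 1
            received += 1
            if received == leak_interval:
                received = 0
                counter = max(0, counter - 1)
            if counter >= threshold:
                return n            # signal unit index at which changeover triggers
        return None                 # link survives the observation window

    random.seed(0)
    p = 0.006                        # error rate just above the leak rate 1/256
    flags = (random.random() < p for _ in range(2_000_000))
    print("changeover at signal unit:", suerm(flags))
    ```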

  10. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
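    The propagation step can be sketched as follows: push measured wind speeds, and the same speeds perturbed by a relative error, through a power curve and compare the mean power. The curve shape and data below are synthetic stand-ins (the study fits 28 manufacturer curves with Lagrange's method), so the printed sensitivities are illustrative only:

    ```python
    import numpy as np

    def power_curve(v):
        """Illustrative pitch-regulated turbine power curve (kW): cubic rise
        between cut-in and rated speed, flat at rated power, zero past cut-out."""
        cut_in, rated_v, cut_out, rated_p = 3.0, 12.0, 25.0, 2000.0
        v = np.asarray(v, dtype=float)
        p = np.where((v >= cut_in) & (v < rated_v),
                     rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3), 0.0)
        return np.where((v >= rated_v) & (v <= cut_out), rated_p, p)

    rng = np.random.default_rng(3)
    v = rng.weibull(2.0, 100_000) * 8.0          # synthetic wind speed record (m/s)

    base = power_curve(v).mean()                  # mean power from measured speeds
    for rel_err in (0.05, 0.10):
        perturbed = power_curve(v * (1.0 + rel_err)).mean()
        print(f"{rel_err:.0%} speed error -> {abs(perturbed - base) / base:.1%} power error")
    ```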

  11. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  12. A flavor protection for warped Higgsless models

    International Nuclear Information System (INIS)

    Csaki, Csaba; Curtin, David

    2009-01-01

    We examine various possibilities for realistic 5D Higgsless models on a Randall-Sundrum (RS) background, and construct a full quark sector featuring next-to-minimal flavor violation (with an exact bulk SU(2) protecting the first two generations) which satisfies electroweak and flavor constraints. The 'new custodially protected representation' is used for the third generation to protect the light quarks from flavor violations induced due to the heavy top. A combination of flavor symmetries, and an 'RS-GIM' mechanism for the right-handed quarks suppresses flavor-changing neutral currents below experimental bounds, assuming Cabibbo-Kobayashi-Maskawa-type mixing on the UV brane. In addition to the usual Higgsless RS signals, this model predicts an exotic charge-5/3 quark with mass of about 0.5 TeV which should show up at the LHC very quickly, as well as nonzero flavor-changing neutral currents which could be detected in the next generation of flavor experiments. In the course of our analysis, we also find quantitative estimates for the errors of the fermion zero-mode approximation, which are significant for Higgsless-type models.

  13. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insights and analytical models are the main parts of human reliability analysis. It consists of the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action. A human erroneous action may be generated in any stage of this process. The more natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed.

  14. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insights and analytical models are the main parts of human reliability analysis. It consists of the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action. A human erroneous action may be generated in any stage of this process. The more natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed.

  15. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in the control forces, so engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory, since it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  16. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study the naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....

  17. Error Analysis in a Written Composition

    Directory of Open Access Journals (Sweden)

    David Alberto Londoño Vásquez

    2008-12-01

    Learners make errors in both comprehension and production. Some theoreticians have pointed out the difficulty of assigning the cause of failures in comprehension to an inadequate knowledge of a particular syntactic feature of a misunderstood utterance. Indeed, an error can be defined as a deviation from the norms of the target language. In this investigation, based on personal and professional experience, a written composition entitled "My Life in Colombia" will be analyzed based on clinical elicitation (CE) research. CE involves getting the informant to produce data of any sort, for example, by means of a general interview or by asking the learner to write a composition. Some errors produced by a foreign language learner in her acquisition process will be analyzed, identifying the possible sources of these errors. Finally, four kinds of errors are classified: omission, addition, misinformation, and misordering.

  18. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
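
    To make the propagation effect concrete, here is a minimal sketch (not taken from the column itself) of first-order error propagation in the commonly used xenograft volume formula V = (pi/6)·L·W², assuming each caliper reading carries a few percent of measurement error:

        import math

        def xenograft_volume(length_mm, width_mm):
            # Common ellipsoid approximation for tumor volume: V = (pi/6) * L * W^2
            return math.pi / 6.0 * length_mm * width_mm ** 2

        def volume_relative_error(rel_err_length, rel_err_width):
            # First-order propagation: dV/V = dL/L + 2*dW/W, since V ~ L * W^2
            return rel_err_length + 2.0 * rel_err_width

        L, W = 12.0, 8.0                         # hypothetical caliper readings in mm
        rel = volume_relative_error(0.03, 0.03)  # 3% measurement error on each axis
        print(f"V = {xenograft_volume(L, W):.0f} mm^3, relative error bound ~ {rel:.0%}")

    A 3% error on each caliper axis already yields roughly a 9% error bound on the recorded volume, which illustrates why a computed quantity can be far less certain than the raw measurements behind it.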

  19. Medication errors reported to the National Medication Error Reporting System in Malaysia: a 4-year retrospective review (2009 to 2012).

    Science.gov (United States)

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M

    2016-12-01

    Reporting and analysing the data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken, and descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses were classified in 86.3% of the errors, and the majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed to more than three-quarters of the overall errors (76.1%), and pharmacists detected and reported the majority of errors (92.1%). Cases of erroneous dosage or strength of medicine (30.75%) were the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category of drug involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals and from non-pharmacists requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both the public and private sectors needs to be formulated and strengthened, and preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.

  20. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method matters, but quantifying a method's percentage error matters even more if decision makers are to act on its output. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to measure the mistakes of the least squares method yielded an error of 9.77%, and the least squares method was therefore judged workable for time series and trend data.
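
    As an illustration of the two error measures (a sketch on made-up data, not the study's dataset), MAD and MAPE for a least-squares trend forecast can be computed as follows:

        import numpy as np

        def least_squares_trend(y):
            # Fit y = a + b*t by ordinary least squares and return the fitted values.
            t = np.arange(len(y), dtype=float)
            slope, intercept = np.polyfit(t, y, 1)
            return intercept + slope * t

        def mad(actual, forecast):
            # Mean Absolute Deviation: average error size, in the data's own units.
            return np.mean(np.abs(actual - forecast))

        def mape(actual, forecast):
            # Mean Absolute Percentage Error: error relative to the actual values.
            return np.mean(np.abs((actual - forecast) / actual)) * 100.0

        sales = np.array([112.0, 118, 132, 129, 141, 135, 148, 155])  # hypothetical series
        fit = least_squares_trend(sales)
        print(f"MAD  = {mad(sales, fit):.2f}")
        print(f"MAPE = {mape(sales, fit):.2f}%")

    MAD reports error in the units of the data, while MAPE normalizes by the actual values, which is why MAPE is the more common choice when comparing methods across different series.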

  1. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  2. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  3. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

    Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) responders were of consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), under-calls (25%), and interpretation error (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) participants stated that their department held regular errors meetings. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor', meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  4. Demonstration of Protection of a Superconducting Qubit from Energy Decay

    Science.gov (United States)

    Lin, Yen-Hsiang; Nguyen, Long B.; Grabon, Nicholas; San Miguel, Jonathan; Pankratova, Natalia; Manucharyan, Vladimir E.

    2018-04-01

    Long-lived transitions occur naturally in atomic systems due to the abundance of selection rules inhibiting spontaneous emission. By contrast, transitions of superconducting artificial atoms typically have large dipoles, and hence their lifetimes are determined by the dissipative environment of a macroscopic electrical circuit. We designed a multilevel fluxonium artificial atom such that the qubit's transition dipole can be exponentially suppressed by flux tuning, while it continues to dispersively interact with a cavity mode by virtual transitions to the noncomputational states. Remarkably, the energy decay time T1 grew by 2 orders of magnitude, proportionally to the inverse square of the transition dipole, and exceeded the benchmark value of T1 > 2 ms (quality factor Q1 > 4×10^7) without showing signs of saturation. The dephasing time was limited by the first-order coupling to flux noise to about 4 μs. Our circuit validated the general principle of hardware-level protection against bit-flip errors and can be upgraded to the 0-π circuit [P. Brooks, A. Kitaev, and J. Preskill, Phys. Rev. A 87, 052306 (2013), 10.1103/PhysRevA.87.052306], adding protection against dephasing and certain gate errors.

  5. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, the study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, the paper concludes by creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  6. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  7. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system, and the impact of electronic prescribing on reducing errors had not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistan public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing in the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  8. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help making some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  9. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing
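
    The bias the record refers to is easy to demonstrate by simulation (a hypothetical sketch, not drawn from the paper): adding noise to the predictor attenuates the estimated regression slope toward zero by the classic factor var(x)/(var(x) + var(noise)):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        true_slope = 2.0

        x = rng.normal(0.0, 1.0, n)                # true (latent) predictor
        y = true_slope * x + rng.normal(0.0, 1.0, n)
        x_observed = x + rng.normal(0.0, 1.0, n)   # measurement error with variance 1

        naive_slope = np.polyfit(x_observed, y, 1)[0]
        # Expected attenuation: 2 * var(x) / (var(x) + var(err)) = 2 * 0.5 = 1.0
        print(f"true slope = {true_slope}, naive estimate = {naive_slope:.2f}")

    Structural equation models with multiple indicators recover the true slope precisely because they model this error variance instead of ignoring it.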

  10. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and their relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  11. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors, accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously, as this is one way of improving knowledge for teaching.

  12. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  13. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  14. A fingerprint key binding algorithm based on vector quantization and error correction

    Science.gov (United States)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. To accommodate the intrinsic fuzziness of varying fingerprint samples, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound to the key after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and the key is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
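
    The record gives no implementation details, but the general flavor of such key-binding schemes can be sketched in the style of a fuzzy commitment (a toy illustration with hypothetical names: a repetition code stands in for the real error-correcting code, and random bits stand in for quantized fingerprint features):

        import hashlib
        import secrets

        def repetition_encode(bits, r=3):
            # Toy error-correcting code: repeat each key bit r times.
            return [b for bit in bits for b in [bit] * r]

        def repetition_decode(bits, r=3):
            # Majority vote over each group of r bits corrects up to (r-1)//2 flips.
            return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

        def bind(key_bits, template_bits, r=3):
            codeword = repetition_encode(key_bits, r)
            helper = [c ^ t for c, t in zip(codeword, template_bits)]  # public helper data
            digest = hashlib.sha256(bytes(key_bits)).hexdigest()       # only the hash is stored
            return helper, digest

        def release(helper, probe_bits, digest, r=3):
            codeword = [h ^ p for h, p in zip(helper, probe_bits)]  # noisy codeword
            key_bits = repetition_decode(codeword, r)
            if hashlib.sha256(bytes(key_bits)).hexdigest() == digest:
                return key_bits   # verification succeeded; key released
            return None           # too many bit errors: key stays locked

        key = [secrets.randbits(1) for _ in range(16)]
        template = [secrets.randbits(1) for _ in range(48)]  # stand-in for quantized features
        helper, digest = bind(key, template)

        probe = list(template)
        probe[5] ^= 1                                        # one bit of fingerprint noise
        assert release(helper, probe, digest) == key

    Because only the helper data and the hash are stored, a stolen database reveals neither the key nor the template; the error-correcting step is what absorbs the fuzziness of repeated fingerprint captures.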

  15. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  16. Valuing Errors for Learning: Espouse or Enact?

    Science.gov (United States)

    Grohnert, Therese; Meuwissen, Roger H. G.; Gijselaers, Wim H.

    2017-01-01

    Purpose: This study aims to investigate how organisations can discourage covering up and instead encourage learning from errors through a supportive learning from error climate. In explaining professionals' learning from error behaviour, this study distinguishes between espoused (verbally expressed) and enacted (behaviourally expressed) values…

  17. Failsafe design criteria for computer based reactor protection systems

    International Nuclear Information System (INIS)

    Keats, A.B.

    1980-01-01

    The design criteria proposed are an extrapolation of the failsafe mode of operation used in the UK in hardwired reactor protection systems. This is achieved by making the operational condition of the reactor dependent upon an energetic state of the protection system components. An important objective of the proposed design criteria is to eliminate, or at least to minimize, the need for a failure-mode-and-effect-analysis (FMEA) of the computer software. This demands some well defined but simple constraints upon the way in which data are stored in the computers, but the objective is achieved almost entirely by hardware properties of the system. The first of these is the systematic use of hardwired test inputs which cause transient excursions into the tripped state in a uniquely ordered but easily recognizable sequence. The second is the use of hardwired pattern recognition logic which generates a dynamic healthy stimulus for the shutdown actuators only in response to the unique sequence formed by the hardwired input signal pattern. It therefore detects abnormal states of any of the system inputs, software errors, wiring errors and hardware failures. This hardwired logic is conceptually simple, failsafe, and is amenable to simple FMEA. (U.K.)

  18. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of systems (electronic pharmacopoeia, electronic drug dosage calculation systems, computer-based patient safety reporting and bar-code systems) and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeia had been constructed in 67.7% of the participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems were in use in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety, and a more positive error management climate prevailed. Supportive strategies for improving the perception of IT-based system use would add to system construction, and a positive error management climate would be more easily promoted.

  19. Medication errors : the impact of prescribing and transcribing errors on preventable harm in hospitalised patients

    NARCIS (Netherlands)

    van Doormaal, J.E.; van der Bemt, P.M.L.A.; Mol, P.G.M.; Egberts, A.C.G.; Haaijer-Ruskamp, F.M.; Kosterink, J.G.W.; Zaal, Rianne J.

    Background: Medication errors (MEs) affect patient safety to a significant extent. Because these errors can lead to preventable adverse drug events (pADEs), it is important to know what type of ME is the most prevalent cause of these pADEs. This study determined the impact of the various types of

  20. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten of them simultaneous with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omissions). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.
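
    The two reported rates follow directly from the counts in the abstract (a small arithmetic check):

        opportunities = 1501
        administrations_with_errors = 415
        wrong_time_errors = 312
        wrong_time_with_other_error = 10   # wrong time plus another error type

        overall_rate = administrations_with_errors / opportunities
        # Administrations whose only error was a wrong-time error drop out:
        remaining = administrations_with_errors - (wrong_time_errors - wrong_time_with_other_error)
        rate_without_wrong_time = remaining / opportunities

        print(f"overall: {overall_rate:.1%}")                          # ~27.6%
        print(f"excluding wrong-time: {rate_without_wrong_time:.1%}")  # ~7.5% (113/1501)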

  1. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    Describe, discuss and critically appraise human error theory and consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety, and effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the fields of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  2. Chernobyl - system accident or human error?

    International Nuclear Information System (INIS)

    Stang, E.

    1996-01-01

    Did human error cause the Chernobyl disaster? The standard point of view is that operator error was the root cause of the disaster. This was also the view of the Soviet Accident Commission. The paper analyses the operator errors at Chernobyl in a system context. The reactor operators committed errors that depended upon many other failures that made up a complex accident scenario. The analysis is based on Charles Perrow's analysis of technological disasters. Failure possibility is an inherent property of high-risk industrial installations. The Chernobyl accident consisted of a chain of events that were both extremely improbable and difficult to predict. It is not reasonable to put the blame for the disaster on the operators. (author)

  3. List of Error-Prone Abbreviations, Symbols, and Dose Designations

    Science.gov (United States)

    Abbreviations, symbols, and dose designations which have been reported through the ISMP National Medication Errors Reporting Program (ISMP MERP) as being frequently misinterpreted.

  4. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper-based information to electronic records have created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, the majority of the analysis of error and incident reports has been carried out using a data-comparison approach that starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher not only to ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, we describe the application of the text analysis and data analysis tools used by Battelle to re-examine the medication error reports already analyzed by USP in the traditional way. New insights and findings were revealed, including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  5. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow in which tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers the execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user are also noted in the work.
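
    The paper states rules rather than code, but its core requirement (a failed tool must not crash the whole workflow; it should only block the subtasks that depend on its data) can be sketched as follows, with all names hypothetical:

        from enum import Enum, auto

        class Status(Enum):
            OK = auto()
            FAILED = auto()
            SKIPPED = auto()

        def run_workflow(tasks, dependencies):
            # tasks: ordered dict of name -> callable; dependencies: name -> upstream names.
            # A task whose upstream data is missing is skipped instead of aborting the run.
            status, results = {}, {}
            for name, task in tasks.items():
                if any(status.get(dep) is not Status.OK for dep in dependencies.get(name, [])):
                    status[name] = Status.SKIPPED          # missing intermediate results
                    continue
                try:
                    results[name] = task()
                    status[name] = Status.OK
                except Exception:
                    status[name] = Status.FAILED           # error invalidates this tool's data
            return status, results

        def failing_solver():
            raise RuntimeError("solver diverged")

        tasks = {
            "mesh":  lambda: "mesh-data",
            "solve": failing_solver,
            "plot":  lambda: "figure",
        }
        deps = {"solve": ["mesh"], "plot": ["solve"]}
        status, _ = run_workflow(tasks, deps)
        print(status)   # mesh: OK, solve: FAILED, plot: SKIPPED -- the run itself completes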

  6. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine the validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between the ordering of a medication and its receipt by the patient, with subsequent staff monitoring. To decrease medication errors, factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet®-accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication

  7. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  8. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the costs of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes a detailed description of key issues in NoC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
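
    As a taste of the ECC building blocks such a book covers, here is a minimal Hamming(7,4) encoder/decoder (a toy single-error-correcting code, not the concatenated schemes the book describes):

        # Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single bit error.

        def hamming74_encode(d):
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
            p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
            p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
            return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

        def hamming74_decode(c):
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3       # 1-based position of the flipped bit
            if syndrome:
                c = list(c)
                c[syndrome - 1] ^= 1              # correct the single-bit error
            return [c[2], c[4], c[5], c[6]]       # extract the data bits

        word = [1, 0, 1, 1]
        code = hamming74_encode(word)
        code[3] ^= 1                              # inject a single bit error on the link
        assert hamming74_decode(code) == word

    Real NoC links extend this idea with stronger codes and retransmission (HARQ), trading wire overhead and latency against the residual error rate.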

  9. Can human error theory explain non-adherence?

    Science.gov (United States)

    Barber, Nick; Safdar, A; Franklin, Bryoney D

    2005-08-01

    To apply human error theory to explain non-adherence and examine how well it fits. Patients who were taking chronic medication were telephoned and asked whether they had been adhering to their medicine; if not, the reasons were explored and analysed according to human error theory. Of 105 patients, 87 were contacted by telephone and took part in the study. Forty-two recalled being non-adherent, 17 of them in the last 7 days; 11 of the 42 were intentionally non-adherent. The errors could be described by human error theory, and it explained unintentional non-adherence well; however, the application of 'rules' was difficult when considering mistakes. The consideration of error-producing conditions and latent failures also revealed useful contributing factors. Human error theory offers a new and valuable way of understanding non-adherence, and could inform interventions. However, the theory needs further development to explain intentional non-adherence.

  10. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
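
    The phenomenon is easy to observe numerically (a toy sketch, not the paper's analysis): perturb each stage derivative of classical RK4 by a tiny amount, mimicking roundoff or inexact stage solves, and watch the deviation accumulate over many steps:

        import numpy as np

        def rk4_step(f, t, y, h, stage_noise=0.0, rng=None):
            # Classical RK4; optionally perturb each stage derivative to mimic
            # roundoff or algebraic solver error inside the step.
            def perturb(k):
                return k + rng.uniform(-1, 1) * stage_noise if rng else k
            k1 = perturb(f(t, y))
            k2 = perturb(f(t + h / 2, y + h / 2 * k1))
            k3 = perturb(f(t + h / 2, y + h / 2 * k2))
            k4 = perturb(f(t + h, y + h * k3))
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        f = lambda t, y: -y                      # simple linear test problem y' = -y
        rng = np.random.default_rng(1)
        y_clean = y_noisy = 1.0
        h, steps = 0.1, 100
        for i in range(steps):
            y_clean = rk4_step(f, i * h, y_clean, h)
            y_noisy = rk4_step(f, i * h, y_noisy, h, stage_noise=1e-12, rng=rng)
        print(f"deviation after {steps} steps: {abs(y_noisy - y_clean):.2e}")

    How strongly such injected stage errors are amplified in the step update is exactly the internal amplification the paper bounds; depending on the method and its implementation, it can stay benign or grow catastrophically.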

  11. Data contributed by EPA/ORD/NERL/CED researchers to the manuscript "Advanced Error Diagnostics of the CMAQ and CHIMERE modeling systems within the AQMEII3 Model Evaluation Framework"

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset contains the data contributed by EPA/ORD/NERL/CED researchers to the manuscript "Advanced Error Diagnostics of the CMAQ and CHIMERE modeling systems...

  12. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro and Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and employee training levels and the effect of training on actual work were in good condition. Also, employees answered that the root causes of human error were due to the working environment, including tight processes, manpower shortages, and excessive workload, rather than personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are personal equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and their bosses.

  13. Everyday memory errors in older adults.

    Science.gov (United States)

    Ossher, Lynn; Flegal, Kristin E; Lustig, Cindy

    2013-01-01

    Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983 , Journal of Verbal Learning and Verbal Behavior, 22, 341), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003 , Psychology and Aging, 18, 111), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975 , Journal of Psychiatric Research, 12, 189), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern.

  14. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  15. Field errors in hybrid insertion devices

    International Nuclear Information System (INIS)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  17. Telemetry location error in a forested habitat

    Science.gov (United States)

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
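
    The sensitivity to near-parallel bearings is pure geometry and can be illustrated with a small sketch (hypothetical station positions and bearings, not the study's data): intersect two bearing rays, then bias one bearing by 2° and measure how far the fix moves:

        import math

        def intersect_bearings(p1, b1_deg, p2, b2_deg):
            # Intersect two rays from receiver positions, with compass bearings
            # measured in degrees clockwise from north.
            (x1, y1), (x2, y2) = p1, p2
            d1x, d1y = math.sin(math.radians(b1_deg)), math.cos(math.radians(b1_deg))
            d2x, d2y = math.sin(math.radians(b2_deg)), math.cos(math.radians(b2_deg))
            det = -d1x * d2y + d2x * d1y
            t = (-(x2 - x1) * d2y + d2x * (y2 - y1)) / det
            return (x1 + t * d1x, y1 + t * d1y)

        stations = ((0.0, 0.0), (1000.0, 0.0))         # two receivers 1 km apart
        for b1, b2 in [(45.0, 315.0), (85.0, 275.0)]:  # ~90° crossing vs ~10° from parallel
            fix = intersect_bearings(stations[0], b1, stations[1], b2)
            biased = intersect_bearings(stations[0], b1 + 2.0, stations[1], b2)
            print(f"bearings {b1}/{b2}: a 2° bearing error moves the fix "
                  f"{math.dist(fix, biased):.0f} m")

    The same 2° compass error displaces the location estimate several times farther when the bearings are nearly parallel, matching the pattern the study reports.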

  18. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    Science.gov (United States)

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales varied from that of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  19. The role of usability in the evaluation of accidents: human error or design flaw?

    Science.gov (United States)

    Correia, Walter; Soares, Marcelo; Barros, Marina; Campos, Fábio

    2012-01-01

    This article aims to highlight the role of consumer product companies at the heart of accidents involving these types of products, and the extent to which such undesired events act as an agent influencing consumers' and users' decisions about purchasing a product of that nature. The article demonstrates, through references, interviews and case studies, how poorly designed products and errors of design can influence the usage behavior of users, thus leading to accidents, and can also negatively affect the image of a company. The full explanation of these types of questions aims to raise awareness, founded on reliable usability, among users and consumers in general about the safe use of consumer products, and also to safeguard their rights under a legal system of consumer protection such as the CDC (Code of Consumer Protection).

  20. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)]

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
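
    The headline comparison can be approximately reconstructed (hypothetical 2x2 counts chosen to match the reported 155 total errors, 241,546 fractions, and the 0.03% vs. 0.07% rates; the scipy library is assumed):

        from scipy.stats import fisher_exact

        # Hypothetical reconstruction consistent with the reported totals.
        imrt_errors, imrt_fractions = 19, 63_000
        conv_errors, conv_fractions = 136, 178_546

        table = [[imrt_errors, imrt_fractions - imrt_errors],
                 [conv_errors, conv_fractions - conv_errors]]
        odds_ratio, p_value = fisher_exact(table)

        print(f"IMRT error rate: {imrt_errors / imrt_fractions:.3%}")             # ~0.030%
        print(f"3D/conventional error rate: {conv_errors / conv_fractions:.3%}")  # ~0.076%
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")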