WorldWideScience

Sample records for spoken error correction

  1. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  2. Linguistic adaptations during spoken and multimodal error resolution.

    Science.gov (United States)

    Oviatt, S; Bernard, J; Levow, G A

    1998-01-01

    Fragile error handling in recognition-based systems is a major problem that degrades their performance, frustrates users, and limits commercial potential. The aim of the present research was to analyze the types and magnitude of linguistic adaptation that occur during spoken and multimodal human-computer error resolution. A semiautomatic simulation method with a novel error-generation capability was used to collect samples of users' spoken and pen-based input immediately before and after recognition errors, and at different spiral depths in terms of the number of repetitions needed to resolve an error. When correcting persistent recognition errors, results revealed that users adapt their speech and language in three qualitatively different ways. First, they increase linguistic contrast through alternation of input modes and lexical content over repeated correction attempts. Second, when correcting with verbatim speech, they increase hyperarticulation by lengthening speech segments and pauses, and increasing the use of final falling contours. Third, when they hyperarticulate, users simultaneously suppress linguistic variability in their speech signal's amplitude and fundamental frequency. These findings are discussed from the perspective of enhancement of linguistic intelligibility. Implications are also discussed for corroboration and generalization of the Computer-elicited Hyperarticulate Adaptation Model (CHAM), and for improved error handling capabilities in next-generation spoken language and multimodal systems.

  3. "Context and Spoken Word Recognition in a Novel Lexicon": Correction

    Science.gov (United States)

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2009-01-01

    Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…

  4. Verb Errors in Advanced Spoken English

    Directory of Open Access Journals (Sweden)

    Tomáš Gráf

    2017-07-01

    As an experienced teacher of advanced learners of English, I am deeply aware of recurrent problems which these learners experience as regards grammatical accuracy. In this paper, I focus on researching inaccuracies in the use of verbal categories. I draw the data from the spoken learner corpus LINDSEI_CZ and analyze the performance of 50 advanced (C1–C2) learners of English whose mother tongue is Czech. The main method used is Computer-aided Error Analysis within the larger framework of Learner Corpus Research. The results reveal that the key area of difficulty is the use of tenses and tense agreements, especially the use of the present perfect. Other error-prone aspects are also described. The study also identifies a number of triggers which may lie at the root of the problems. The identification of these triggers reveals deficiencies in the teaching of grammar, mainly too much focus on decontextualized practice, the use of potentially confusing rules, and the lack of attempts to deal with broader notions such as continuity and perfectiveness. Whilst the study is useful for teachers of advanced learners, its pedagogical implications stretch to lower levels of proficiency as well.

  5. A Classroom Research Study on Oral Error Correction

    Science.gov (United States)

    Coskun, Abdullah

    2010-01-01

    The main objective of this study is to present the findings of small-scale classroom research carried out to collect data about my spoken error correction behaviors by means of self-observation. With this study, I aimed to analyze how and which spoken errors I corrected during a specific activity in a beginner's class. I used Lyster and Ranta's…

  6. Reactions of EFL Students to Oral Error Correction.

    Science.gov (United States)

    Bang, Young-Joo

    1999-01-01

    Investigated college students' attitudes and preferences toward error correction in the English-as-a-Foreign-Language (EFL) classroom. A questionnaire was administered to 100 EFL students enrolled in spoken-English classes at a university. (Author/VWL)

  7. Error Correcting Codes - The Hamming Codes

    Indian Academy of Sciences (India)

    Priti Shankar. In the first article of this series we showed how redundancy introduced into a message transmitted over a noisy channel could improve the reliability of transmission. In…
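
    The single-error-correcting behaviour described in the snippet above is easy to demonstrate concretely. Below is a minimal, self-contained sketch of the standard Hamming (7,4) code the article discusses: three parity bits protect four data bits, and the three-bit syndrome directly names the position of any single flipped bit (function names are our own).

```python
# Hamming (7,4): positions are numbered 1..7; parity bits sit at the
# powers of two (1, 2, 4) and data bits at positions 3, 5, 6, 7.

def hamming_encode(d):  # d: list of 4 data bits
    c = [0] * 8  # index 0 unused so indices match positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]   # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]   # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]   # parity over positions with bit 2 set
    return c[1:]

def hamming_decode(r):  # r: list of 7 received bits
    c = [0] + list(r)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        + 2 * (c[2] ^ c[3] ^ c[6] ^ c[7]) \
        + 4 * (c[4] ^ c[5] ^ c[6] ^ c[7])
    if s:           # nonzero syndrome = position of the flipped bit
        c[s] ^= 1
    return [c[3], c[5], c[6], c[7]]

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                              # corrupt one bit in transit
assert hamming_decode(word) == [1, 0, 1, 1]
```

    The syndrome works because each parity bit covers exactly the positions whose binary index contains its power of two, so the failed checks spell out the error location in binary.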

  8. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March 1997 pp 33-47. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047 ...

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

    focused pictures of Triton, Neptune's largest moon. This great feat was in no small measure due to the fact that the sophisticated communication system on Voyager had an elaborate error correcting scheme built into it. At Jupiter and Saturn, a convolutional code was used to enhance the reliability of transmission, and at ...

  10. Error Correcting Codes

    Indian Academy of Sciences (India)

    It was engineering on the grand scale - the use of new material for… Where errors occur in both the message as well as the check symbols, the decoder would be able to correct all of these (as there are not more than 8… … before it is conveyed to the master disc. Modulation caters for…

  11. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  12. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
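
    The abstract's central point, that coherent errors accumulate differently from stochastic Pauli errors, can be illustrated with a toy single-qubit calculation. This is a generic illustration of amplitude versus probability accumulation, not the paper's repetition-code derivation.

```python
import math

# Compare a qubit subjected to n small coherent X-rotations of angle
# eps with one subjected to n independent X-flips of the matched
# probability p = sin^2(eps/2) (the "Pauli-twirled" approximation).

eps, n = 0.01, 200
p = math.sin(eps / 2) ** 2          # matched stochastic flip probability

# Coherent: amplitudes add, so the rotations compose to angle n*eps.
p_coherent = math.sin(n * eps / 2) ** 2

# Stochastic: probabilities add; error = odd number of flips occurred.
p_pauli = (1 - (1 - 2 * p) ** n) / 2

# For n*eps << 1, p_coherent ~ (n*eps/2)^2 grows quadratically in n,
# while p_pauli ~ n*(eps/2)^2 grows only linearly, so the Pauli model
# underestimates the accumulated coherent error.
print(p_coherent, p_pauli)
```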

  13. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
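
    As a rough software analogy of the mechanism in the abstract (only an analogy; the patent concerns hardware registers), values flagged in an error-correction table are stored twice, and a mismatch on read reveals corruption. All names here are illustrative.

```python
# Registers flagged as sensitive by the compiler's error-correction
# table get a shadow copy; reads compare the two copies.

error_correction_table = {"r3", "r7"}   # hypothetical sensitive registers

registers, shadows = {}, {}

def write(reg, value):
    registers[reg] = value
    if reg in error_correction_table:
        shadows[reg] = value            # duplicate the sensitive register

def read(reg):
    value = registers[reg]
    if reg in error_correction_table and shadows[reg] != value:
        raise RuntimeError(f"register {reg} corrupted")
    return value

write("r3", 42)
registers["r3"] = 41                    # simulate a bit upset
try:
    read("r3")
except RuntimeError:
    print("corruption detected")
```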

  14. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals......, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing or the radar signal it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...

  15. Enhancing spoken connected-digit recognition accuracy by error ...

    Indian Academy of Sciences (India)

    In http://web.syr.edu/ rrosenqu/ecc/main.htm. Sethi A, Rajaraman V, Kenjale P 1978 An error-correcting coding system for alphanumeric data. Inf. Process. Lett. 7: 72–77. Wagner N, Putter P 1989 Error detecting decimal digits. Commun. ACM 32: 106–110. Wagner N R 2002 The laws of cryptography: Coping with decimal ...
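
    The Wagner and Putter reference above concerns error-detecting decimal digits. The best-known scheme of this kind is the mod-11 check digit used by ISBN-10, sketched below: the weighted sum detects every single-digit error and every adjacent-transposition error.

```python
# ISBN-10 check digit: weight the first 9 digits by 10..2, and choose
# the check digit so the full weighted sum is divisible by 11.
# A remainder of 10 is written as the symbol 'X'.

def isbn10_check(digits):  # digits: the first 9 digits, as ints
    s = sum((10 - i) * d for i, d in enumerate(digits))
    c = (11 - s % 11) % 11
    return 'X' if c == 10 else str(c)

# Known example: ISBN 0-306-40615-2
assert isbn10_check([0, 3, 0, 6, 4, 0, 6, 1, 5]) == '2'
# Transposing two adjacent digits changes the check digit:
assert isbn10_check([3, 0, 0, 6, 4, 0, 6, 1, 5]) != '2'
```

    The distinct weights are what make transpositions detectable; a plain digit sum would miss them.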

  16. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
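
    The data hiding this codec relies on can be illustrated with the simplest steganographic technique, least-significant-bit embedding. This toy sketch is not the authors' MPEG-2 scheme; it only shows why embedded bits perturb the cover data almost imperceptibly.

```python
# Hide one payload bit in the least-significant bit of each sample.

def embed(samples, bits):
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract(samples):
    return [s & 1 for s in samples]

pixels = [200, 13, 57, 144, 90, 33]
payload = [1, 0, 1, 1, 0, 0]
stego = embed(pixels, payload)

assert extract(stego) == payload
# Each sample changes by at most 1, i.e. imperceptibly:
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stego))
```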

  17. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly and with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  18. Error Correcting Codes

    Indian Academy of Sciences (India)

    sound quality is, in essence, obtained by accurate waveform coding and decoding of the audio signals. In addition, the coded audio information is protected against disc errors by the use of a Cross Interleaved Reed-Solomon Code (CIRC). Reed-. Solomon codes were discovered by Irving Reed and Gus Solomon in 1960.
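
    The Cross Interleaved Reed-Solomon Code mentioned above works because interleaving turns a long burst of disc errors into a few isolated errors per codeword, each within the Reed-Solomon code's correction radius. A minimal sketch of that interleaving idea (with placeholder symbols, not actual Reed-Solomon codewords):

```python
# Write codewords as rows, transmit column by column; a burst on the
# channel then lands in different codewords after de-interleaving.

def interleave(words, depth):
    return [words[r][c] for c in range(len(words[0])) for r in range(depth)]

def deinterleave(stream, depth):
    length = len(stream) // depth
    return [[stream[c * depth + r] for c in range(length)] for r in range(depth)]

words = [[f"w{r}s{c}" for c in range(4)] for r in range(3)]  # 3 codewords
stream = interleave(words, 3)
stream[4:7] = ["ERR"] * 3          # a 3-symbol burst on the channel
rows = deinterleave(stream, 3)

# The burst is scattered: at most one error per codeword.
assert all(row.count("ERR") <= 1 for row in rows)
```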

  19. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an…

  20. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation are now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
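
    One of the detailed examples such introductions usually begin with is the 3-qubit bit-flip code. The sketch below simulates it directly as an 8-dimensional state vector; reading the syndromes off the state's support is a classical stand-in for the ancilla-based measurements used in practice.

```python
import numpy as np

def apply_x(state, qubit):
    # Pauli X on one of 3 qubits; qubit 0 is the most significant bit.
    return np.flip(state.reshape(2, 2, 2), axis=qubit).reshape(8)

a, b = 0.6, 0.8                       # logical amplitudes, a^2 + b^2 = 1
state = np.zeros(8)
state[0b000], state[0b111] = a, b     # encoded state a|000> + b|111>

state = apply_x(state, 1)             # channel flips qubit 1

# Both basis states in the support share one syndrome value, so we can
# read the Z0Z1 and Z1Z2 parities off any support index.
i = np.nonzero(state)[0][0]
s01 = (i >> 2 & 1) ^ (i >> 1 & 1)     # Z0 Z1 parity
s12 = (i >> 1 & 1) ^ (i & 1)          # Z1 Z2 parity

# Syndrome table: which qubit (if any) to flip back.
flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]
if flip is not None:
    state = apply_x(state, flip)

assert state[0b000] == a and state[0b111] == b   # logical state recovered
```

    Note the correction never learns (or disturbs) the amplitudes a and b; the syndromes reveal only where the error is, which is the essential trick of QEC.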

  1. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  2. Opportunistic Error Correction for MIMO

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    In this paper, we propose an energy-efficient scheme to reduce the power consumption of ADCs in MIMO-OFDM systems. The proposed opportunistic error correction scheme is based on resolution adaptive ADCs and fountain codes. The key idea is to transmit a fountain-encoded packet over one single

  3. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...

  4. Nonconvex Compressed Sensing and Error Correction

    National Research Council Canada - National Science Library

    Chartrand, Rick

    2007-01-01

    …In this paper we consider a nonconvex extension. In the context of sparse error correction, we perform numerical experiments that show that for a fixed number of measurements, errors of larger support can be corrected in the nonconvex case…

  5. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  6. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and next-generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  7. Using Online Annotations to Support Error Correction and Corrective Feedback

    Science.gov (United States)

    Yeh, Shiou-Wen; Lo, Jia-Jiunn

    2009-01-01

    Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…

  8. Error Awareness and Recovery in Conversational Spoken Language Interfaces

    Science.gov (United States)

    2007-05-01

    …stutters, false starts, repairs, hesitations, filled pauses, and various other non-lexical acoustic events. Under these circumstances, it is not… sensible choice from a software engineering perspective. The case for separating out various task-independent aspects of the conversation has in fact been… in behavior both within and across systems. It also represents a more sensible solution from a software engineering… The RavenClaw error handling…

  9. Immediate error correction process following sleep deprivation.

    Science.gov (United States)

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulated cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation.

  10. Corrective Feedback, Spoken Accuracy and Fluency, and the Trade-Off Hypothesis

    Science.gov (United States)

    Chehr Azad, Mohammad Hassan; Farrokhi, Farahman; Zohrabi, Mohammad

    2018-01-01

    The current study was an attempt to investigate the effects of different corrective feedback (CF) conditions on Iranian EFL learners' spoken accuracy and fluency (AF) and the trade-off between them. Consequently, four pre-intermediate intact classes were randomly selected as the control, delayed explicit metalinguistic CF, extensive recast, and…

  11. Passive quantum error correction with linear optics

    International Nuclear Information System (INIS)

    Barbosa de Brito, Daniel; Viana Ramos, Rubens

    2006-01-01

    Recently, Kalamidas [D. Kalamidas, Phys. Lett. A 343 (2005) 331] proposed an optical set-up able to correct single-qubit errors using Pockels cells. In this work, we present a different set-up able to realize error correction passively, in the sense that no external action is needed

  12. A Hybrid Approach for Correcting Grammatical Errors

    Science.gov (United States)

    Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2015-01-01

    This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…

  13. Notions of "Error" and Appropriate Corrective Treatment.

    Science.gov (United States)

    Lee, Nancy

    1990-01-01

    The relationship between the notion of "error" in linguistics and language teaching theory and its potential application to error correction in the second language classroom is examined. Definitions of "error" in psycholinguistics, native speech, and English second language instruction are discussed, and the relationship of interlanguage…

  14. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  15. Spoken Dialogue Systems

    CERN Document Server

    Jokinen, Kristiina

    2009-01-01

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides

  16. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  17. Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

    OpenAIRE

    Chaiwat Tantarangsee

    2014-01-01

    The purposes of this study are 1) to study the frequent English writing errors of students registering the course: Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction by using coded indirect corrective feedback and writing error treatments. Samples include 28 2nd year English Major students, Faculty of Education, Suan Sunandha Rajabhat University. Tool for experimental study includes the lesson plan of the cours...

  18. Survey of Radar Refraction Error Corrections

    Science.gov (United States)

    2016-11-01

    …estimation for an electromagnetic wave propagating at radio frequencies through the earth's atmosphere. Appendices contain descriptive material on the… (Survey of Radar Refraction Error Corrections, RCC 266-16). Acronyms: BAE, BAE Systems; CRPL, Central Radio Propagation Laboratory; EM, electromagnetic.

  19. Consciousness-Raising, Error Correction and Proofreading

    Science.gov (United States)

    O'Brien, Josephine

    2015-01-01

    The paper discusses the impact of developing a consciousness-raising approach in error correction at the sentence level to improve students' proofreading ability. Learners of English in a foreign language environment often rely on translation as a composing tool and while this may act as a scaffold and provide some support, it frequently leads to…

  20. The Mathematics of Error Correcting Quantum Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 6; Issue 3. The Mathematics of Error Correcting Quantum Codes - Quantum Probability. K R Parthasarathy. General Article Volume 6 Issue 3 March 2001 pp 34-45. Fulltext. Click here to view fulltext PDF. Permanent link:

  1. Quantum Steganography and Quantum Error-Correction

    Science.gov (United States)

    Shaw, Bilal A.

    2010-01-01

    Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be…

  2. Goldmann tonometer error correcting prism: clinical evaluation

    Directory of Open Access Journals (Sweden)

    McCafferty S

    2017-05-01

    Sean McCafferty,1–3 Garrett Lim,2 William Duncan,2 Eniko T Enikov,4 Jim Schwiegerling,1 Jason Levine,1,3 Corin Kew3 1Department of Ophthalmology, College of Optical Science, University of Arizona, 2Intuor Technologies, 3Arizona Eye Consultants, 4Department of Aerospace and Mechanical, College of Engineering, University of Arizona, Tucson, AZ, USA. Purpose: To clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics. Methods: A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature. Results: In correcting for Goldmann central corneal thickness (CCT) error, the CATS tonometer prism demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population, compared to only 54% with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated. Conclusion: The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors without IOP bias, as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation. Keywords: glaucoma, tonometry, Goldmann, IOP, intraocular pressure, applanation tonometer, corneal biomechanics, CATS tonometer, CCT, central corneal thickness, tonometer error

  3. ecco: An error correcting comparator theory.

    Science.gov (United States)

    Ghirlanda, Stefano

    2018-03-08

    Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Black Holes, Holography, and Quantum Error Correction

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions?  How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator?  Why do such things happen only in gravitational theories?  In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence.  No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.  

  5. Tensor Networks and Quantum Error Correction

    Science.gov (United States)

    Ferris, Andrew J.; Poulin, David

    2014-07-01

    We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.

  6. Dimensional jump in quantum error correction

    International Nuclear Information System (INIS)

    Bombín, Héctor

    2016-01-01

    Topological stabilizer codes with different spatial dimensions have complementary properties. Here I show that the spatial dimension can be switched using gauge fixing. Combining 2D and 3D gauge color codes in a 3D qubit lattice, fault-tolerant quantum computation can be achieved with constant time overhead on the number of logical gates, up to efficient global classical computation, using only local quantum operations. Single-shot error correction plays a crucial role. (paper)

  7. Dimensional jump in quantum error correction

    Science.gov (United States)

    Bombín, Héctor

    2016-04-01

    Topological stabilizer codes with different spatial dimensions have complementary properties. Here I show that the spatial dimension can be switched using gauge fixing. Combining 2D and 3D gauge color codes in a 3D qubit lattice, fault-tolerant quantum computation can be achieved with constant time overhead on the number of logical gates, up to efficient global classical computation, using only local quantum operations. Single-shot error correction plays a crucial role.

  8. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on a single integrated-circuit chip. Handles data in a variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce the cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  9. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
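
    The idea of estimating a drifting error rate online, without pausing the correction cycle, can be illustrated with a much simpler estimator than the Gaussian process the abstract describes; the exponentially weighted tracker below is only a stand-in, and all parameters are hypothetical:

    ```python
    import random

    def track_error_rate(outcomes, alpha=0.05):
        """Exponentially weighted running estimate of a drifting error rate,
        updated from each round's error-correction outcome. (The paper fits a
        Gaussian process to past error-correction data; this simpler tracker
        only illustrates real-time estimation without pausing correction.)"""
        estimate = 0.5          # uninformative starting guess
        trace = []
        for failed in outcomes:
            estimate = (1.0 - alpha) * estimate + alpha * failed
            trace.append(estimate)
        return trace

    random.seed(0)
    # Error rate drifts upward from 1% to 5% over 2000 correction rounds.
    outcomes = [random.random() < 0.01 + 0.04 * t / 2000 for t in range(2000)]
    trace = track_error_rate(outcomes)
    ```

    The final estimate tracks the drifting underlying rate; a decoder could feed such estimates back in to re-weight matching, as the protocol suggests.
    
    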

  10. Error Correction in Oral Classroom English Teaching

    Science.gov (United States)

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  11. Joint Schemes for Physical Layer Security and Error Correction

    Science.gov (United States)

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…

  12. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  13. The role of error correction in communicative second language teaching

    OpenAIRE

    H. Ludolph Botha

    2013-01-01

    According to recent research, correction of errors in both oral and written communication does little to aid language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis is on the acquisition of informal communication. Because the message and the understanding of the message remain of utmost importance, error correction is avoided. In Suggestopedia, where the focus is also on communication, error correction is avoided as it inhibits the pupil. Onlang...

  14. Second Language Learners' Beliefs about Grammar Instruction and Error Correction

    Science.gov (United States)

    Loewen, Shawn; Li, Shaofeng; Fei, Fei; Thompson, Amy; Nakatsukasa, Kimi; Ahn, Seongmee; Chen, Xiaoqing

    2009-01-01

    Learner beliefs are an important individual difference in second language (L2) learning. Furthermore, an ongoing debate surrounds the role of grammar instruction and error correction in the L2 classroom. Therefore, this study investigated the beliefs of L2 learners regarding the controversial role of grammar instruction and error correction. A…

  15. Raptor Codes for Use in Opportunistic Error Correction

    NARCIS (Netherlands)

    Zijnge, T.; Goseling, Jasper; Weber, Jos H.; Schiphorst, Roelof; Shao, X.; Slump, Cornelis H.

    2010-01-01

    In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for Coded OFDM systems. Opportunistic error correction [3] tries to recover information when it is available with the least effort. This is achieved by using Fountain codes in a COFDM system, which
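
    The opportunistic property of Fountain codes that the abstract relies on (any sufficiently large subset of packets suffices, rather than specific ones) can be sketched with a toy LT-style encoder and peeling decoder. This is not the Raptor construction itself; block values, packet counts, and seeds are arbitrary:

    ```python
    import random

    def lt_encode(blocks, n_packets, seed=3):
        """Toy Fountain (LT) encoder: each output packet is the XOR of a
        random subset of source blocks."""
        rng = random.Random(seed)
        packets = []
        for _ in range(n_packets):
            degree = rng.randint(1, len(blocks))
            idx = tuple(sorted(rng.sample(range(len(blocks)), degree)))
            value = 0
            for i in idx:
                value ^= blocks[i]
            packets.append((idx, value))
        return packets

    def lt_decode(packets, n_blocks):
        """Peeling decoder: substitute known blocks into packets and resolve
        any packet that drops to degree one; repeat until stuck or done."""
        work = [[set(idx), val] for idx, val in packets]
        known = {}
        progress = True
        while progress and len(known) < n_blocks:
            progress = False
            for p in work:
                for i in p[0] & known.keys():
                    p[1] ^= known[i]
                    p[0].discard(i)
                if len(p[0]) == 1:
                    (i,) = p[0]
                    if i not in known:
                        known[i] = p[1]
                        progress = True
        return [known.get(i) for i in range(n_blocks)]

    blocks = [0x12, 0x34, 0x56, 0x78]
    packets = lt_encode(blocks, 16)
    decoded = lt_decode(packets, len(blocks))
    ```

    Raptor codes add a precode so that decoding succeeds with high probability from only slightly more packets than source blocks; the peeling structure is the same.
    
    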

  16. Long Burst Error Correcting Codes, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Long burst error mitigation is an enabling technology for the use of Ka band for high rate commercial and government users. Multiple NASA, government, and commercial...

  17. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
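
    The pre-distortion idea can be sketched numerically: quantify the phase error, store its complement in a lookup table, and apply it before the waveform reaches the offending component. The quadratic droop model below is purely hypothetical:

    ```python
    import cmath
    import math

    def apply_phase_lut(waveform, phase_lut):
        """Pre-distort a complex baseband waveform by the complement of a
        known time-dependent phase error (one LUT entry per sample)."""
        return [s * cmath.exp(-1j * p) for s, p in zip(waveform, phase_lut)]

    n = 64
    # Hypothetical droop model: phase error grows quadratically over the pulse.
    droop_phase = [0.3 * (t / n) ** 2 for t in range(n)]               # radians
    chirp = [cmath.exp(1j * 2.0 * math.pi * 0.1 * t) for t in range(n)]

    predistorted = apply_phase_lut(chirp, droop_phase)
    # The downstream amplifier re-applies the droop; the two effects cancel.
    received = [s * cmath.exp(1j * p) for s, p in zip(predistorted, droop_phase)]
    residual = max(abs(r - c) for r, c in zip(received, chirp))
    ```

    After the downstream stage adds the droop back, the received waveform matches the ideal chirp to floating-point precision.
    
    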

  18. Energy efficiency of error correction on wireless systems

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    1999-01-01

    Since high error rates are inevitable to the wireless environment, energy-efficient error-control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.

  19. Comments on "A New Random-Error-Correction Code"

    DEFF Research Database (Denmark)

    Paaske, Erik

    1979-01-01

    This correspondence investigates the error propagation properties of six different systems using a (12, 6) systematic double-error-correcting convolutional encoder and a one-step majority-logic feedback decoder. For the generally accepted assumption that channel errors are much more likely to occur...

  20. Student reflections following teacher correction of oral errors

    OpenAIRE

    Dirim, Nazlı

    1999-01-01

    Ankara : The Institute of Economics and Social Sciences of Bilkent University, 1999. Thesis (Master's) -- Bilkent University, 1999. Includes bibliographical references (leaves 69-71). The teacher’s correction techniques can determine how students approach language learning. In order to understand the effect of oral error correction on students, we should know how students feel. The purpose of this study was to investigate one teacher’s correction of students’ oral errors, the...

  1. Correcting false memories: Errors must be noticed and replaced.

    Science.gov (United States)

    Mullet, Hillary G; Marsh, Elizabeth J

    2016-04-01

    Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.

  2. Flatfield correction errors due to spectral mismatching

    Science.gov (United States)

    Hagen, Nathan

    2014-12-01

    Flat field calibration of broadband imaging systems is widely used, and it has been said that users should try to make the spectrum of the flatfield calibration light source as close as possible to that of the measurement object. However, a quantitative analysis of the error induced by a mismatch of calibration and object spectra has been lacking. In order to develop this quantitative analysis, we provide a theoretical radiometric model for flatfield calibration and show how this spectral mismatching error arises. Simulations covering a variety of measurement scenarios indicate that spectral mismatching can create quantitative errors of up to a factor of 5 in situations that are regularly encountered by researchers performing quantitative work.
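
    A toy version of the radiometric model shows how the mismatch error arises: two pixels with equal total response but different spectral tilt are equalized under the calibration spectrum, yet disagree under a mismatched object spectrum. All responses and spectra below are made-up three-band vectors, not values from the paper:

    ```python
    # Each pixel signal integrates response(wavelength) * spectrum(wavelength).
    def signal(response, spectrum):
        return sum(r * s for r, s in zip(response, spectrum))

    # Two pixels with the same total response but different spectral tilt.
    resp_a = [1.0, 1.0, 1.0]
    resp_b = [0.5, 1.0, 1.5]

    flat_spec = [1.0, 1.0, 1.0]     # white flatfield calibration source
    obj_spec  = [2.0, 1.0, 0.1]     # blue-heavy measurement object

    # Flatfield gains equalize the pixels under the calibration spectrum...
    gain_a = 1.0 / signal(resp_a, flat_spec)
    gain_b = 1.0 / signal(resp_b, flat_spec)

    # ...but a mismatched object spectrum leaves a residual pixel-to-pixel error.
    meas_a = gain_a * signal(resp_a, obj_spec)
    meas_b = gain_b * signal(resp_b, obj_spec)
    mismatch_error = meas_a / meas_b
    ```

    Under the calibration source both corrected pixels read exactly 1.0, yet under the mismatched object spectrum they disagree by roughly 44%, illustrating how larger spectral tilts can reach the factor-of-5 errors the paper reports.
    
    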

  3. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  4. 75 FR 63106 - Correction of Administrative Errors

    Science.gov (United States)

    2010-10-14

    ... (Agency) proposes to use a constructed share price for retired Lifecycle funds in order to make error... Price The Agency currently offers five Lifecycle funds: L Income, L 2010, L 2020, L 2030, and L 2040... retiring the L 2010 Fund, the Agency will transfer all money invested in the L 2010 Fund to the L Income...

  5. Detecting and Correcting Speech Rhythm Errors

    Science.gov (United States)

    Yurtbasi, Metin

    2015-01-01

    Every language has its own rhythm. Unlike many other languages in the world, English depends on the correct pronunciation of stressed and unstressed or weakened syllables recurring in the same phrase or sentence. Mastering the rhythm of English makes speaking more effective. Experiments have shown that we tend to hear speech as more rhythmical…

  6. Error Correction Techniques in the EFL Class

    OpenAIRE

    Zublin, Roxana

    2015-01-01

    Errors are regarded as a natural part of the learning process, with the teacher performing the role of facilitator, providing help when necessary and creating a supportive environment in which students can obtain a successful enhanced learning outcome. They are significant indicators of the learning progress showing what learners have attained and what remains to be acquired and provide the language teacher the necessary information about how to deal with the problems that may arise and give ...

  7. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  8. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_{3}). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  9. Correcting a Persistent Manhattan Project Statistical Error

    Science.gov (United States)

    Reed, Cameron

    2011-04-01

    In his 1987 autobiography, Major-General Kenneth Nichols, who served as the Manhattan Project's ``District Engineer'' under General Leslie Groves, related that when the Clinton Engineer Works at Oak Ridge, TN, was completed it was consuming nearly one-seventh (~ 14%) of the electric power being generated in the United States. This statement has been reiterated in several editions of a Department of Energy publication on the Manhattan Project. This remarkable claim has been checked against power generation and consumption figures available in Manhattan Engineer District documents, Tennessee Valley Authority records, and historical editions of the Statistical Abstract of the United States. The correct figure is closer to 0.9% of national generation. A speculation will be made as to the origin of Nichols' erroneous one-seventh figure.

  10. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    Science.gov (United States)

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  11. Three-Phase Text Error Correction Model for Korean SMS Messages

    Science.gov (United States)

    Byun, Jeunghyun; Park, So-Young; Lee, Seung-Wook; Rim, Hae-Chang

    In this paper, we propose a three-phase text error correction model consisting of a word spacing error correction phase, a syllable-based spelling error correction phase, and a word-based spelling error correction phase. In order to reduce the text error correction complexity, the proposed model corrects text errors step by step. With the aim of correcting word spacing errors, spelling errors, and mixed errors in SMS messages, the proposed model manages the word spacing error correction phase and the spelling error correction phase separately. For the purpose of utilizing both the syllable-based approach, which covers various errors, and the word-based approach, which corrects some specific errors accurately, the proposed model subdivides the spelling error correction phase into a syllable-based phase and a word-based phase. Experimental results show that the proposed model can improve performance by solving the text error correction problem with a divide-and-conquer strategy.
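
    The divide-and-conquer structure can be sketched with English stand-ins for the paper's trained Korean models; the three rule tables below are hypothetical and exist only to show the phase ordering:

    ```python
    # Hypothetical rule tables standing in for the paper's trained models.
    SPACING_FIXES = {"seeyou": "see you"}      # phase 1: word spacing
    SYLLABLE_FIXES = {"teh": "the"}            # phase 2: syllable-level spelling
    WORD_FIXES = {"2nite": "tonight"}          # phase 3: word-specific spelling

    def correct(text):
        """Divide-and-conquer correction: spacing first, then broad-coverage
        syllable-based spelling, then high-precision word-based spelling."""
        tokens = []
        for tok in text.split():
            tokens.extend(SPACING_FIXES.get(tok, tok).split())
        tokens = [SYLLABLE_FIXES.get(tok, tok) for tok in tokens]
        tokens = [WORD_FIXES.get(tok, tok) for tok in tokens]
        return " ".join(tokens)

    fixed = correct("seeyou 2nite at teh station")
    ```

    Separating the phases keeps each sub-problem small: spacing errors are resolved before spelling rules run, and word-specific fixes override only where the broad syllable-level pass cannot.
    
    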

  12. Treelet Probabilities for HPSG Parsing and Error Correction

    NARCIS (Netherlands)

    Ivanova, Angelina; van Noord, Gerardus; Calzolari, Nicoletta; al, et

    2014-01-01

    Most state-of-the-art parsers are designed to produce an analysis for any input despite errors. However, small grammatical mistakes in a sentence often cause the parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly

  13. Correction of polarization error in scanned array weather radar antennas

    NARCIS (Netherlands)

    Pang, C.; Hoogeboom, P.; Russchenberg, H.; Wang, T.; Dong, J.; Wang, X.

    2014-01-01

    In this paper, the polarization error correction of dual-polarized planar scanned array weather radar in alternately transmitting and simultaneously receiving (ATSR) mode is analyzed. A method based on point correction and a method taking the complete array patterns into account are discussed. To

  14. Entanglement renormalization, quantum error correction, and bulk causality

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-04-07

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  15. Grammatical error correction using hybrid systems and type filtering

    OpenAIRE

    Felice, M; Yuan, Z; Andersen, ØE; Yannakoudakis, H; Kochmar, Ekaterina

    2014-01-01

    This paper describes our submission to the CoNLL 2014 shared task on grammatical error correction using a hybrid approach, which includes both a rule-based and an SMT system augmented by a large web-based language model. Furthermore, we demonstrate that correction type estimation can be used to remove unnecessary corrections, improving precision without harming recall. Our best hybrid system achieves state-of-the-art results, ranking first on the original test set and second on the test set...
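
    The type-filtering step can be sketched as follows; the error-type labels, the candidate corrections, and the choice of which types count as low-precision are invented for illustration:

    ```python
    # Candidate corrections from the component systems, each tagged with an
    # estimated error type (names and values are illustrative only).
    candidates = [
        {"span": "informations", "fix": "information", "type": "NOUN_NUMBER"},
        {"span": "a apple",      "fix": "an apple",    "type": "DET"},
        {"span": "on Monday",    "fix": "in Monday",   "type": "PREP"},
    ]

    # Types that (assumed) development data showed the systems handle poorly.
    LOW_PRECISION_TYPES = {"PREP"}

    def type_filter(cands):
        """Drop corrections of unreliable types: precision improves while
        recall for the remaining types is untouched."""
        return [c for c in cands if c["type"] not in LOW_PRECISION_TYPES]

    kept = type_filter(candidates)
    ```

    Filtering whole error types trades a small amount of recall on the dropped types for a precision gain everywhere else, which is the effect the abstract reports.
    
    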

  16. New class of photonic quantum error correction codes

    Science.gov (United States)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.

  17. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes by creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  18. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher.
The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  19. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
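
    The error-sequence generation described above, with Poisson-distributed gaps between bursts and Gaussian burst lengths, can be sketched directly; all distribution parameters below are assumptions for illustration, not values from the report:

    ```python
    import random

    def make_error_bursts(n_bits, mean_gap=200.0, burst_mu=4.0,
                          burst_sigma=1.5, seed=1):
        """Error mask with Poisson burst arrivals (exponentially distributed
        gaps between bursts) and Gaussian-distributed burst lengths."""
        rng = random.Random(seed)
        mask = [0] * n_bits
        pos = 0
        while True:
            pos += int(rng.expovariate(1.0 / mean_gap)) + 1   # gap to next burst
            if pos >= n_bits:
                break
            burst = max(1, round(rng.gauss(burst_mu, burst_sigma)))
            for i in range(pos, min(pos + burst, n_bits)):
                mask[i] = 1
            pos += burst
        return mask

    mask = make_error_bursts(10_000)
    data = [0] * 10_000                  # all-zero codeword for illustration
    corrupted = [d ^ m for d, m in zip(data, mask)]
    n_errors = sum(mask)
    ```

    XORing such a mask onto encoded data and decoding lets the tester compare corrected output against the known original, exactly the verification strategy the report describes.
    
    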

  20. Detecting and correcting hard errors in a memory array

    Science.gov (United States)

    Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.

    2015-11-19

    Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
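
    A toy model of the detect-and-log flow: the data is held in a fault-free register, rewritten to the suspect memory location, and a persistent mismatch on re-read is recorded in the error status buffer. The stuck-at fault below is hypothetical:

    ```python
    # Hypothetical stuck-at-0 fault on bit 3 of the memory cell.
    STUCK_BIT = 1 << 3

    def memory_readback(value):
        return value & ~STUCK_BIT            # bit 3 always reads back as 0

    def detect_hard_error(value, error_status_buffer):
        """Rewrite data to memory and a register; a mismatch between the two
        readbacks marks a hard (persistent) error and logs a buffer entry."""
        mem = memory_readback(value)         # rewrite to memory, then re-read
        reg = value                          # register copy is fault-free
        if mem != reg:                       # persistent mismatch: hard error
            error_status_buffer.append({"data": reg, "read": mem})
        return mem

    buf = []
    detect_hard_error(0b1010, buf)           # exercises bit 3: fault is logged
    detect_hard_error(0b0010, buf)           # bit 3 clear: reads match, no entry
    ```

    Because entries are only appended on genuine mismatches, buffer slots can be reused once an error is handled, matching the real-time reuse the abstract describes.
    
    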

  1. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  2. Small refractive errors--their correction and practical importance.

    Science.gov (United States)

    Skrbek, Matej; Petrová, Sylvie

    2013-04-01

    Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative exertion and are not manifested as a loss of visual acuity. This paper should answer a few questions about their correction, following from theoretical presumptions and expectations of this dilemma. The main goal of this research was to (dis)confirm the hypothesis about the convenience, efficiency and frequency of a correction that does not raise the visual acuity (or whose improvement isn't noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of the visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is an indispensable group of subjects with good visual acuity where the correction is applicable, although it does not improve the visual acuity much. The main importance is to eliminate the asthenopia. The prime reason for acceptance of the correction typically changes during life, as the accommodation declines. Young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus; elderly people acquire the correction because of improvement of the visual acuity. Generally, the correction was found useful in more than 30% of cases, if the gain in visual acuity was at least 0,3 of the decimal row.

  3. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here requires development of new (uniform) weak convergence results. These results are potentially useful in general for analysis ... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  4. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of an aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error together with its uncertainty. The present status and future prospects of the developed software are discussed in this paper.

  5. Levenshtein error-correcting barcodes for multiplexed DNA sequencing.

    Science.gov (United States)

    Buschmann, Tilo; Bystrykh, Leonid V

    2013-09-11

    High-throughput sequencing technologies are improving in quality, capacity and cost, providing versatile applications in DNA and RNA research. For small genomes or fractions of larger genomes, DNA samples can be mixed and loaded together on the same sequencing track. This so-called multiplexing approach relies on a specific DNA tag or barcode that is attached to the sequencing or amplification primer and hence appears at the beginning of the sequence in every read. After sequencing, each sample read is identified on the basis of the respective barcode sequence. Alterations of DNA barcodes during synthesis, primer ligation, DNA amplification, or sequencing may lead to incorrect sample identification unless the error is revealed and corrected. This can be accomplished by implementing error-correcting algorithms and codes. This barcoding strategy increases the total number of correctly identified samples, thus improving overall sequencing efficiency. Two popular sets of error-correcting codes are Hamming codes and Levenshtein codes. Levenshtein codes operate only on words of known length. Since a DNA sequence with an embedded barcode is essentially one continuous long word, application of the classical Levenshtein algorithm is problematic. In this paper we demonstrate the decreased error-correction capability of Levenshtein codes in a DNA context and suggest an adaptation of Levenshtein codes that is proven to efficiently correct nucleotide errors in DNA sequences. In our adaptation we take the DNA context into account and redefine the word length whenever an insertion or deletion is revealed. In simulations we show the superior error-correction capability of the new method compared to traditional Levenshtein and Hamming based codes in the presence of multiple errors. We present an adaptation of Levenshtein codes to DNA contexts capable of correcting a predefined number of insertion, deletion, and substitution mutations. Our improved method is additionally capable ...
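The trade-off the abstract describes between Hamming and Levenshtein decoding can be sketched with a toy minimum-distance decoder. The barcode set and reads below are hypothetical illustrations, not codes from the paper:

```python
# Toy sketch (not the authors' implementation): minimum-distance decoding of
# DNA barcodes. Hamming distance handles substitutions only; Levenshtein
# distance also tolerates insertions/deletions, which shift every later base.

def hamming(a, b):
    """Substitution-only distance; defined for equal-length words."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Edit distance allowing substitutions, insertions, and deletions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution / match
        prev = cur
    return prev[-1]

BARCODES = ["ACGTACGT", "TTGGCCAA", "GATCGATC"]  # hypothetical barcode set

def decode(read_prefix, dist):
    """Assign the read to the barcode at the smallest distance."""
    return min(BARCODES, key=lambda bc: dist(read_prefix, bc))

# A single substitution is recoverable by both metrics, e.g. "ACGTACGA".
# A single deletion ("ACGACGTA": the 4th base dropped, next base shifted in)
# inflates the Hamming distance but stays close in the edit metric.
```

A deletion early in the barcode shifts the remainder of the read, which is exactly why the paper redefines the word length whenever an indel is detected.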

  6. The role of error correction in communicative second language teaching

    Directory of Open Access Journals (Sweden)

    H. Ludolph Botha

    2013-02-01

    Full Text Available According to recent research, correction of errors in both oral and written communication does little to aid language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis is on the acquisition of informal communication. Because the message and the understanding of the message remain of utmost importance, error correction is avoided. In Suggestopedia, where the focus is also on communication, error correction is avoided as it inhibits the pupil. Recent research has shown that the correction of errors in both oral and written communication contributes little to better language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis falls on the acquisition of informal communication, since the message and its understanding remain by far the most important; the correction of errors is avoided. In Suggestopedia, where the emphasis also falls on communication, the correction of errors is avoided because it restricts the learner.

  7. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    OpenAIRE

    Chitra Jayathilake

    2013-01-01

    Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates,...

  8. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is the scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes, alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.

  9. Small Refractive Errors – Their Correction and Practical Importance

    OpenAIRE

    Skrbek, Matej; Petrová, Sylvie

    2013-01-01

    Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative exertion and aren’t exhibited by loss of the visual acuity. This paper should answer a few questions about their correction, flowing from theoretical presumptions and expectations of this dilemma. The main goal of this research was to (dis)confirm the hypothesis about convenience, efficiency and frequency of the correction that do not raise the visual acui...

  10. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  11. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
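The classical ingredient of the decoupling method, one-time-pad encryption of the error-correction messages with pre-shared secret bits, can be sketched in a few lines. The parity checks below merely stand in for a real Cascade exchange and are invented for illustration:

```python
# Illustrative sketch: Alice one-time-pads a toy error-correction syndrome
# with pre-shared secret bits, so the public discussion reveals nothing about
# the underlying key bits. This is the classical core of the decoupling idea,
# not a QKD implementation.
import secrets

def xor_bits(a, b):
    return [x ^ y for x, y in zip(a, b)]

def parity_syndrome(block, parity_sets):
    """Toy syndrome: parities over chosen index sets (stand-in for Cascade)."""
    return [sum(block[i] for i in s) % 2 for s in parity_sets]

parity_sets = [(0, 1, 2), (1, 2, 3)]                 # hypothetical checks
alice_key = [1, 0, 1, 1]                             # raw key block
pad = [secrets.randbelow(2) for _ in parity_sets]    # pre-shared secret bits

syndrome = parity_syndrome(alice_key, parity_sets)
ciphertext = xor_bits(syndrome, pad)                 # sent over public channel
recovered = xor_bits(ciphertext, pad)                # Bob removes the pad
```

Because the pad is uniformly random and used once, the ciphertext is statistically independent of the syndrome, which is what lets the error-correction step be analyzed separately from privacy amplification.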

  12. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full ... versions that are simple to compute. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.
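As a purely illustrative companion to this abstract, the linear special case of error correction can be sketched by simulating a cointegrated pair and recovering the adjustment coefficient by ordinary least squares. The data-generating process and parameter values are invented; the paper's nonlinear and sup-test machinery goes far beyond this:

```python
# Toy sketch of (linear) error correction: y adjusts toward the equilibrium
# y = beta * x, and we regress the change in y on the lagged equilibrium
# error y_{t-1} - beta * x_{t-1} to recover the adjustment speed alpha.
import random

random.seed(0)
beta, alpha = 1.0, -0.5          # cointegrating coefficient, adjustment speed
x, y = [0.0], [0.0]
for _ in range(5000):
    x.append(x[-1] + random.gauss(0, 1))      # random-walk regressor
    ect = y[-1] - beta * x[-2]                # lagged equilibrium error
    y.append(y[-1] + alpha * ect + random.gauss(0, 0.1))

# OLS slope of dy_t on ect_{t-1} (no intercept needed for this toy DGP).
ects = [y[t - 1] - beta * x[t - 1] for t in range(1, len(y))]
dys = [y[t] - y[t - 1] for t in range(1, len(y))]
alpha_hat = sum(e * d for e, d in zip(ects, dys)) / sum(e * e for e in ects)
```

With 5000 observations the estimate lands very close to the true value of -0.5; the asymmetric models in the paper replace the single slope with regime-dependent or nonlinear adjustment functions.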

  13. Phase error correction in wavefront curvature sensing via phase retrieval

    DEFF Research Database (Denmark)

    Almoro, Percival; Hanson, Steen Grüner

    2008-01-01

    Wavefront curvature sensing with phase error correction system is carried out using phase retrieval based on a partially-developed volume speckle field. Various wavefronts are reconstructed: planar, spherical, cylindrical, and a wavefront passing through the side of a bare optical fiber. Spurious...

  14. The Mathematics of Error Correcting Quantum Codes

    Indian Academy of Sciences (India)

    The Mathematics of Error Correcting Quantum Codes - Quantum Coding. K R Parthasarathy. General Article. Resonance – Journal of Science Education, Volume 6, Issue 4, April 2001, pp 38-51.

  15. Topological quantum error correction with optimal encoding rate

    International Nuclear Information System (INIS)

    Bombin, H.; Martin-Delgado, M. A.

    2006-01-01

    We prove the existence of topological quantum error correcting codes with encoding rates k/n asymptotically approaching the maximum possible value. Explicit constructions of these topological codes are presented using surfaces of arbitrary genus. We find a class of regular toric codes that are optimal. For physical implementations, we present planar topological codes

  16. Communication Systems Simulator with Error Correcting Codes Using MATLAB

    Science.gov (United States)

    Gomez, C.; Gonzalez, J. E.; Pardo, J. M.

    2003-01-01

    In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…

  17. Enhancing cryptographic primitives with techniques from error correcting codes

    NARCIS (Netherlands)

    Preneel, Bart; Dodunekov, Stefan; Rijmen, Vincent; Nikova, S.I.

    The NATO Advanced Research Workshop on Enhancing Cryptographic Primitives with Techniques from Error Correcting Codes has been organized in Veliko Tarnovo, Bulgaria, on October 6-9, 2008 by the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences in cooperation with COSIC,

  18. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  19. The Mathematics of Error Correcting Quantum Codes

    Indian Academy of Sciences (India)

    The Mathematics of Error Correcting Quantum Codes. K R Parthasarathy is INSA C V Raman Research Professor at the Indian Statistical Institute, Delhi. His interests are quantum probability, mathematical foundations of quantum mechanics and probability theory. He is the author of two classic books in probability theory and ...

  20. ERROR CORRECTION, CO-INTEGRATION AND IMPORT DEMAND ...

    African Journals Online (AJOL)

    Abstract. The objective of this study is to determine empirically Import Demand equation in Nigeria using Error Correction and Cointegration techniques. All the variables employed in this study were found stationary at first difference using Augmented Dickey-Fuller (ADF) and Phillip-Perron (PP) unit root test. Empirical ...

  1. A Cointegration And Error Correction Approach To Broad Money ...

    African Journals Online (AJOL)

    This study considered the stability of broad money demand function in Nigeria using data for 1970 to 2004. The study applied the Cointegration and error correction approach The Johansen Cointegration test shows that long run equilibrium relationship exists between broad money demand and its determinants. While the ...

  2. Improvement of Thai error correction system by memetic algorithm

    Directory of Open Access Journals (Sweden)

    Krit Somkantha

    2014-09-01

    Full Text Available This paper presents an efficient technique for improving a Thai error correction system by using a memetic algorithm. The token passing algorithm is used to construct the word graph, and a language model is used to check whether a sentence is correct. The correction process starts with word graph construction by the token passing algorithm; the correct sentence is then searched for by the memetic algorithm, with the fitness function derived from the language model. For a long sentence, the word graph produced by the token passing algorithm yields a very large search space, which can be handled by the memetic algorithm; it is used to search for the correct sentence in order to reduce the analysis time. The performance of the proposed method is evaluated and compared to full search and a genetic algorithm. The experimental results show that the proposed method performs very well and yields better performance than the compared methods: it can find the best sentence accurately and quickly.
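The pipeline described, a word graph searched by a memetic algorithm under a language-model fitness, can be caricatured on a toy word lattice. The words, bigram scores, and GA settings below are invented for illustration and are not the paper's system:

```python
# Schematic memetic search (GA + local hill-climbing) over a toy word graph.
# Each position in the lattice offers candidate words; fitness counts
# known-good bigrams, standing in for a real language-model score.
import random

random.seed(1)
GRAPH = [["I", "eye"], ["saw", "sore"], ["the", "thee"], ["sea", "see"]]
GOOD_BIGRAMS = {("I", "saw"), ("saw", "the"), ("the", "sea")}

def fitness(sent):
    return sum(pair in GOOD_BIGRAMS for pair in zip(sent, sent[1:]))

def hill_climb(sent):
    """Memetic local search: swap single words while fitness improves."""
    sent = list(sent)
    improved = True
    while improved:
        improved = False
        for i, choices in enumerate(GRAPH):
            for w in choices:
                cand = sent[:i] + [w] + sent[i + 1:]
                if fitness(cand) > fitness(sent):
                    sent, improved = cand, True
    return sent

def memetic_search(pop_size=12, generations=30):
    pop = [hill_climb([random.choice(c) for c in GRAPH])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(GRAPH))
            child = a[:cut] + b[cut:]              # one-point crossover
            i = random.randrange(len(GRAPH))       # point mutation
            child[i] = random.choice(GRAPH[i])
            children.append(hill_climb(child))     # the memetic step
        pop = parents + children
    return max(pop, key=fitness)

best = memetic_search()
```

On a real word graph the lattice is position-dependent and the fitness is an n-gram language model, but the GA-plus-local-search structure is the same.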

  3. On the synthesis of DNA error correcting codes.

    Science.gov (United States)

    Ashlock, Daniel; Houghten, Sheridan K; Brown, Joseph Alexander; Orth, John

    2012-10-01

    DNA error correcting codes over the edit metric consist of embeddable markers for sequencing projects that are tolerant of sequencing errors. When a genetic library has multiple sources for its sequences, use of embedded markers permit tracking of sequence origin. This study compares different methods for synthesizing DNA error correcting codes. A new code-finding technique called the salmon algorithm is introduced and used to improve the size of best known codes in five difficult cases of the problem, including the most studied case: length six, distance three codes. An updated table of the best known code sizes with 36 improved values, resulting from three different algorithms, is presented. Mathematical background results for the problem from multiple sources are summarized. A discussion of practical details that arise in application, including biological design and decoding, is also given in this study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
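A minimal illustration of the underlying object, a code over the edit metric, is to check that a candidate set of DNA words meets a minimum pairwise Levenshtein distance. The code words below are hypothetical examples, not the best known codes, and this brute-force check is not the salmon algorithm:

```python
# Verify a candidate DNA code over the edit metric: every pair of code words
# must be at least a minimum Levenshtein (edit) distance apart, so reads with
# few sequencing errors still decode to a unique word.
from itertools import combinations

def edit_distance(a, b):
    """Classic dynamic-programming edit distance (sub/ins/del all cost 1)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def min_distance(code):
    return min(edit_distance(a, b) for a, b in combinations(code, 2))

CODE = ["AAATTT", "TTTAAA", "ACGACG", "GCTGCT"]   # hypothetical candidates
```

Code synthesis, the hard part the paper addresses, is the search for the largest such set at a given length and distance; checking a candidate, as here, is the easy direction.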

  4. Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction

    Directory of Open Access Journals (Sweden)

    Chi-Huang Shih

    2012-01-01

    Full Text Available With the wide deployment of Internet Protocol (IP infrastructure and rapid development of digital technologies, Internet Protocol Television (IPTV has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure the secure and reliable transmission for IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC including the FEC-encoded redundancies and the limitation of error correction capacity to protect the multimedia packets against the malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme obtains similar performance compared with the joint encryption and FEC scheme.
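The error-correction capacity that the scheme builds on can be illustrated with the simplest FEC of all: a single XOR parity packet that lets the receiver rebuild any one lost packet in a group. This is a sketch of plain erasure-style FEC, not the proposed secure FEC:

```python
# Minimal FEC illustration: one XOR parity packet per group of equal-sized
# media packets. Any single lost packet in the group can be rebuilt from the
# survivors plus the parity, with no retransmission.
from functools import reduce

def xor_packets(packets):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

group = [b"pkt0-data", b"pkt1-data", b"pkt2-data"]   # equal-sized packets
parity = xor_packets(group)                          # FEC redundancy packet

received = [group[0], None, group[2]]                # packet 1 lost in transit
lost_index = received.index(None)
rebuilt = xor_packets([p for p in received if p is not None] + [parity])
received[lost_index] = rebuilt                       # stream fully restored
```

The secure variant in the paper additionally exploits the structure of these redundancies so that the same machinery protects the content against unauthorized access.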

  5. Iranian EFL Teachers' and Learners' Perspectives of Oral Error Correction: Does the Timeline of Correction Matter?

    Science.gov (United States)

    Farahani, Ali Akbar; Salajegheh, Soory

    2015-01-01

    Although the provision of error correction is common in education, there are controversies regarding "when" correction is most effective and why it is effective. This study investigated the differences between Iranian English as a foreign language (EFL) teachers and learners regarding their perspectives towards the timeline of error…

  6. Effects of two error-correction procedures on oral reading errors. Word supply versus sentence repeat.

    Science.gov (United States)

    Singh, N N

    1990-04-01

    Two error-correction procedures, word supply and sentence repeat, were compared with a no-intervention control condition in an alternating treatments design with three students who were moderately mentally retarded. During the word-supply condition, the teacher supplied the reader with the correct word immediately after each student error. During the sentence-repeat condition, the teacher supplied the correct word immediately after each student error and required the student to repeat the correct word, finish reading the sentence, and then reread the entire sentence. Both procedures were effective in reducing oral reading errors when compared to the no-intervention control condition, but sentence repeat was superior to word supply. A similar relationship was found between the two procedures when the students were tested for retention on the same reading passages a week later. These results show that sentence repeat is more effective than the commonly used word-supply procedure in remediating the oral reading errors of students with moderate mental retardation.

  7. Position error correcting method and apparatus for industrial robot

    Energy Technology Data Exchange (ETDEWEB)

    Okada, T.; Mohri, S.

    1987-06-02

    A method of correcting a position error of an industrial robot is described. The method comprises: operating the industrial robot according to position command values, thereby moving a measurement point provided on the robot to a first position; measuring the position values of the first position of the measurement point with a three-dimensional measuring unit to obtain three-dimensional coordinates defining the measurement point; and computing a position error of the industrial robot by defining the coordinates of the measurement point in a first equation incorporating the parameters of the robot contributing to the position error, and forming partial differential equations from the first equation for each of those parameters.

  8. Entanglement and Quantum Error Correction with Superconducting Qubits

    Science.gov (United States)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These "transmon" qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
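The classical intuition behind the three-qubit code mentioned above is the three-bit repetition code: one logical bit is stored in three physical bits, and a pair of parity checks locates any single flipped bit. The quantum version extracts the analogous syndrome coherently, without measuring the data qubits; this sketch shows only the classical analogue:

```python
# Classical analogue of the three-qubit bit-flip code: redundant encoding,
# syndrome extraction from pairwise parities, and correction of a single flip.
def encode(bit):
    return [bit, bit, bit]

def syndrome(block):
    """Parities of pairs (0,1) and (1,2) locate any single bit flip."""
    return (block[0] ^ block[1], block[1] ^ block[2])

def correct(block):
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(block))
    if flip_at is not None:
        block[flip_at] ^= 1
    return block

def decode(block):
    return correct(block)[0]

received_block = encode(1)
received_block[0] ^= 1            # channel flips one physical bit
```

The key feature carried over to the quantum case is that the syndrome reveals only where the error is, not what the encoded bit is; that is what allows correction without destroying the superposition.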

  9. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. Atmospheric corrections were calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of the meteorological data received from the meteorological stations in those cities. Atmospheric corrections were calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90°. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased as the laser emission angle increased. The atmospheric corrections computed with the Marini-Murray and Mendes-Pavlis models for 0.532 micron were also compared.

  10. Spoken Language Understanding Software for Language Learning

    Directory of Open Access Journals (Sweden)

    Hassan Alam

    2008-04-01

    Full Text Available In this paper we describe a preliminary, work-in-progress Spoken Language Understanding Software (SLUS) with tailored feedback options, which uses an interactive spoken language interface to teach Iraqi Arabic and culture to second language learners. The SLUS analyzes input speech by the second language learner and grades it for correct pronunciation in terms of supra-segmental and rudimentary segmental errors such as missing consonants. We evaluated this software on training data with the help of two native speakers, and found that the software recorded an accuracy of around 70% in the law and order domain. For future work, we plan to develop similar systems for multiple languages.

  11. Thermalization, Error Correction, and Memory Lifetime for Ising Anyon Systems

    Directory of Open Access Journals (Sweden)

    Courtney G. Brell

    2014-09-01

    Full Text Available We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath. We propose a phenomenological model for the resulting short-time dynamics that includes pair creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of system, we use our thermalization model to estimate the lifetime of the quantum information stored in the encoded spaces. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting. We perform large-scale numerical simulations of these two-dimensional Ising anyon systems and find that the thresholds of these models range from 13% to 25%. To our knowledge, these are the first numerical threshold estimates for quantum codes without explicit additive structure.

  12. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  13. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations, resulting in high peak-to-average power ratios which can make an OFDM transmitter susceptible to the non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided, and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
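The problem the Equation-Method addresses can be illustrated in a few lines: clipping the peaks of a multicarrier time-domain signal perturbs every subcarrier symbol recovered by the receiver's DFT. This sketch only reproduces the in-band distortion, it does not implement the paper's correction, and the symbol values and clipping threshold are invented:

```python
# Illustration of clipping distortion in an OFDM-style multicarrier signal.
# Subcarrier symbols are turned into time-domain samples by an inverse DFT,
# the amplitude peaks are clipped, and the receiver's forward DFT then sees
# perturbed constellation symbols on every subcarrier.
import cmath

def dft(x, inverse=False):
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(v * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t, v in enumerate(x)) for k in range(n)]
    return [v / n for v in out] if inverse else out

symbols = [1, -1, 1, 1, -1, 1, -1, -1]        # BPSK subcarrier symbols
signal = dft(symbols, inverse=True)           # OFDM time-domain samples

threshold = 0.5                               # clip the amplitude peaks
clipped = [v if abs(v) <= threshold else v / abs(v) * threshold
           for v in signal]

recovered = dft(clipped)                      # receiver-side DFT
distortion = max(abs(r - s) for r, s in zip(recovered, symbols))
```

Because the transmitter knows which samples were clipped (they sit exactly at the threshold), the clipped peak amplitudes can in principle be treated as unknowns in a set of simultaneous equations, which is the starting point of the method described in the abstract.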

  14. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach is proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it combines statistical and dynamical methods to a certain extent.
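
    As context for this twin-experiment setup, a minimal Lorenz-63 integrator (the "prediction model", with the classic parameters σ=10, ρ=28, β=8/3) looks like this; adding a periodic term to the right-hand side, as the study does, would play the role of "reality":

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz (1963) system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(2000):                 # 20 time units at dt = 0.01
    state = rk4_step(lorenz63, state, 0.01)
    traj.append(state)
traj = np.array(traj)
```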

  15. Fault-tolerant error correction with the gauge color code

    Science.gov (United States)

    Brown, Benjamin J.; Nickerson, Naomi H.; Browne, Dan E.

    2016-01-01

    The constituent parts of a quantum computer are inherently vulnerable to errors. Quantum error-correcting codes have therefore been developed to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations with minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures by using a new method, known as gauge fixing, for performing the non-Clifford operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide the first steps of a comparative study between the gauge color code and other promising computational architectures. PMID:27470619

  16. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure 2-D, cross-sectional residual-stress maps. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented, and the important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to induce a known residual-stress profile.

  17. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George [Reno, NV

    2011-11-22

    Disclosed are an articulated hydraulic machine, a control system for it, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.

  18. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  19. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    Science.gov (United States)

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) and his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected given "the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…

  20. Error Correction Techniques for the Foreign Language Classroom. Language in Education: Theory and Practice, No. 50.

    Science.gov (United States)

    Walz, Joel C.

    A review of literature on error correction shows a lack of agreement on the benefits of error correction in second language learning and confusion on which errors to correct and the approach to take to correction of both oral and written language. This monograph deals with these problems and provides examples of techniques in English, French,…

  1. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can add up to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows a large number of distance measurements to be acquired for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
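
    The paper's regressor is a random forest over a tailored feature vector; as a much simpler stand-in for the same calibrate-then-correct idea, one can fit the systematic distance error on calibration data and subtract the prediction afterwards. Everything below (the sinusoidal "wiggling" error shape, the magnitudes, the polynomial model) is synthetic illustration, not the paper's method or numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(d_true):
    """Synthetic ToF sensor: distance-dependent systematic error plus noise."""
    wiggle = 0.04 * np.sin(2.0 * d_true) + 0.02     # hypothetical error model
    return d_true + wiggle + rng.normal(0.0, 0.002, d_true.shape)

# Calibration: known target distances vs. measured distances
d_cal = np.linspace(0.5, 4.0, 400)
m_cal = measure(d_cal)
coeffs = np.polyfit(m_cal, d_cal - m_cal, 7)        # error as fn of measurement

# Application: correct new measurements with the fitted model
d_test = np.linspace(0.6, 3.9, 200)
m_test = measure(d_test)
corrected = m_test + np.polyval(coeffs, m_test)

raw_err = np.mean(np.abs(m_test - d_test))
cor_err = np.mean(np.abs(corrected - d_test))
```

A forest regressor generalizes this from a single distance feature to the paper's full per-pixel feature vector, without committing to a parametric error shape.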

  2. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and optimization of the system. The use of conventional hardware required a particular effort in designing the software in order to meet the specifications. The peculiarities of the new controller are highlighted, such as its very useful trigger logic interface, which in principle allows various error field experiment scenarios to be explored.

  3. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  4. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure for sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into the QECC, these schemes can prevent intercept-resend attacks and can be implemented on some noisy channels. (general)

  5. An Error Correction Model Approach to Determining Stock Prices

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    This research investigated the effect of profitability, the interest rate, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1 month) rate was used to represent the interest rate. This research found that all variables simultaneously affected stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
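
    For readers unfamiliar with the method, a minimal two-step Engle-Granger error correction model on simulated data (all series and parameter values hypothetical) illustrates the mechanics; the coefficient on the lagged residual is the "error correction" term that pulls prices back toward the long-run relation:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000

# Simulated cointegrated pair: a random-walk fundamental x and a price y that
# tracks 2*x with a stationary AR(1) deviation (phi = 0.5).
x = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal(scale=0.5)
y = 2.0 * x + u

# Step 1: estimate the long-run relation y = a + b*x by OLS, keep residuals
Z = np.column_stack([np.ones(T), x])
a_hat, b_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
resid = y - a_hat - b_hat * x

# Step 2: regress the short-run change dy on the lagged residual and dx
dy, dx = np.diff(y), np.diff(x)
W = np.column_stack([np.ones(T - 1), resid[:-1], dx])
const, gamma, beta_sr = np.linalg.lstsq(W, dy, rcond=None)[0]
# gamma < 0: deviations from the long-run relation are corrected over time
```

Here gamma should come out near phi − 1 = −0.5: about half of any deviation from the long-run relation is unwound each period.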

  6. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on-the-fly code rate adjustment for forward error correction (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaptation based on the link state...... of the optical light path and the required amount of throughput going towards the destination node. The result is a dynamic FEC, which can be used to optimize connections for throughput and/or energy efficiency, depending on the current demand....

  7. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multipartite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are maximally mixed. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension…

  8. Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage

    Directory of Open Access Journals (Sweden)

    Juha Partala

    2017-01-01

    A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding: the homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error-correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public-key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.
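
    The additively homomorphic property the scheme relies on can be illustrated with the simplest linearly homomorphic symmetric cipher over GF(2), a one-time pad: XOR-ing two ciphertexts yields a valid ciphertext of the XOR of the plaintexts under the XOR of the keys. This toy sketch ignores the paper's Goppa-code and error-correction machinery entirely:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc(key: bytes, msg: bytes) -> bytes:
    """One-time pad over GF(2); encryption and decryption are the same map."""
    return xor(key, msg)

m1, m2 = b"chunk-A1", b"chunk-B2"
k1, k2 = secrets.token_bytes(8), secrets.token_bytes(8)
c1, c2 = enc(k1, m1), enc(k2, m2)

# A storage node combines ciphertexts without seeing any plaintext ...
c_combined = xor(c1, c2)
# ... and decryption under the combined key yields the combined plaintext.
recovered = enc(xor(k1, k2), c_combined)
```

This is exactly what linear network coding needs: nodes can form linear combinations of encrypted chunks, and the owner of the keys can still decrypt the result.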

  9. Laser-error-correction control unit for machine tools

    Energy Technology Data Exchange (ETDEWEB)

    Burleson, R.R.

    1978-05-23

    An ultraprecision machining capability is needed for the laser fusion program. For this work, a precision air-bearing spindle has been mounted horizontally on a modified vertical column of a Moore Number 3 measuring machine base located in a development laboratory at the Oak Ridge Y-12 Plant. An open-loop control system previously installed on this machine was inadequate to meet the upcoming requirements, since accuracy is limited to 0.5 µm by the errors in the machine's gears and leadscrew. A new controller was needed that could monitor the actual position of the machine and perform real-time error correction on the programmed tool path. It was necessary that this project: (1) attain an optimum tradeoff between hardware and software; (2) use a modular design for easy maintenance; (3) use a standard NC tape service; (4) drive the x and y axes with a positioning resolution of 5.08 nm and a feedback resolution of 10 nm; (5) drive the x and y axis motors at a velocity of 0.05 cm/sec in the contouring mode and 0.18 cm/sec in the positioning mode; (6) eliminate the possibility of tape-reader errors; and (7) allow editing of the part description data. The work that was done to develop and install the new machine controller is described.

  10. Topics in quantum cryptography, quantum error correction, and channel simulation

    Science.gov (United States)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula gives rise to a new protocol family under the resource inequality framework, the private father protocol, which includes private classical communication without assisted secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel

  11. A Secure RFID Authentication Protocol Adopting Error Correction Code

    Directory of Open Access Journals (Sweden)

    Chien-Ming Chen

    2014-01-01

    RFID technology has become popular in many applications; however, most RFID products lack security-related functionality due to the hardware limitations of low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting an error correction code for RFID. We also propose an advanced version of our protocol that provides key updating. Based on the secrecy of the shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis shows that the protocol also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing, compromising, and replay attacks. We also compare our protocol with previous works in terms of performance.

  12. A secure RFID authentication protocol adopting error correction code.

    Science.gov (United States)

    Chen, Chien-Ming; Chen, Shuai-Min; Zheng, Xinying; Chen, Pei-Yu; Sun, Hung-Min

    2014-01-01

    RFID technology has become popular in many applications; however, most RFID products lack security-related functionality due to the hardware limitations of low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting an error correction code for RFID. We also propose an advanced version of our protocol that provides key updating. Based on the secrecy of the shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis shows that the protocol also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing, compromising, and replay attacks. We also compare our protocol with previous works in terms of performance.

  13. Detecting Positioning Errors and Estimating Correct Positions by Moving Window

    Science.gov (United States)

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
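
    The windowed detection rule reduces to flagging a new speed sample that falls outside the window's significant interval (mean ± a few standard deviations), optionally combined with a hard physical speed cap. A minimal sketch, with the window size, thresholds, and data all chosen for illustration:

```python
# Speeds (m/s) derived from consecutive GPS fixes; index 8 is an artifact
speeds = [1.3, 1.4, 1.5, 1.4, 1.3, 1.5, 1.4, 1.6, 80.0, 1.4, 1.5, 1.3]

WINDOW, K_SIGMA, V_MAX = 5, 3.0, 40.0        # illustrative parameters

def flag_errors(speeds):
    """Flag samples outside the moving significant interval or above V_MAX."""
    flagged = []
    for i in range(WINDOW, len(speeds)):
        window = speeds[i - WINDOW:i]
        mean = sum(window) / WINDOW
        var = sum((s - mean) ** 2 for s in window) / WINDOW
        sigma = max(var ** 0.5, 0.05)        # floor avoids zero-width intervals
        if speeds[i] > V_MAX or abs(speeds[i] - mean) > K_SIGMA * sigma:
            flagged.append(i)
    return flagged
```

A production version would then re-estimate the speed (for example from the window mean), estimate the corrected position from it, and exclude flagged samples from subsequent window statistics, as the paper does to keep the statistics accurate.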

  14. Hepatitis B Virus Capsid Completion Occurs through Error Correction.

    Science.gov (United States)

    Lutomski, Corinne A; Lyktey, Nicholas A; Zhao, Zhongchao; Pierson, Elizabeth E; Zlotnick, Adam; Jarrold, Martin F

    2017-11-22

    Understanding capsid assembly is important because of its role in virus lifecycles and in applications to drug discovery and nanomaterial development. Many virus capsids are icosahedral, and assembly is thought to occur by the sequential addition of capsid protein subunits to a nucleus, with the final step completing the icosahedron. Almost nothing is known about the final (completion) step because the techniques usually used to study capsid assembly lack the resolution. In this work, charge detection mass spectrometry (CDMS) has been used to track the assembly of the T = 4 hepatitis B virus (HBV) capsid in real time. The initial assembly reaction occurs rapidly, on the time scale expected from low-resolution measurements. However, CDMS shows that many of the particles generated in this process are defective and overgrown, containing more than the 120 capsid protein dimers needed to form a perfect T = 4 icosahedron. The defective and overgrown capsids self-correct over time to the mass expected for a perfect T = 4 capsid. Thus, completion is a distinct phase in the assembly reaction. Capsid completion does not necessarily occur by inserting the last building block into an incomplete, but otherwise perfect, icosahedron. The initial assembly reaction can be predominantly imperfect, and completion involves the slow correction of the accumulated errors.

  15. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    The process of second language (L2) learning is strongly influenced by the error reconstruction that occurs while the language is being learned. Errors will inevitably appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Self-correction (with or without cues) is one example. In speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors L2 speakers are able to identify, (ii) which of the identified errors they are able to self-correct, and (iii) whether the self-correction of speech errors can immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to noun plurality, subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% of speech errors identified, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.

  16. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    Science.gov (United States)

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad

    2010-01-01

    This study addresses some long-standing questions in writing research about the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  17. Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment: The Case of Reading and Writing English for Academic Purposes II

    OpenAIRE

    Chaiwat Tantarangsee

    2016-01-01

    The purposes of this study are 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction using coded indirect corrective feedback and writing error treatments. The samples include 28 second-year English major students from the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study includes the lesson plan of the cours...

  18. HOO 2012 Error Recognition and Correction Shared Task: Cambridge University Submission Report

    OpenAIRE

    Kochmar, Ekaterina; Andersen, Oeistein Edvin; Briscoe, Edward John

    2012-01-01

    Previous work on automated error recognition and correction of texts written by learners of English as a Second Language has demonstrated experimentally that training classifiers on error-annotated ESL text generally outperforms training on native text alone and that adaptation of error correction models to the native language (L1) of the writer improves performance. Nevertheless, most extant models have poor precision, particularly when attempting error correction, and this limits their usef...

  19. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between the acquisition of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities for observing sound fields excited with multi-tonal or band-limited noise signals.
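
    One concrete form such a systematic error takes: estimating acoustic particle velocity from two frames separated by a finite Δt is a finite difference, which attenuates a sinusoidal velocity component by a factor sinc(fΔt); dividing by that factor restores the amplitude. This is a plausible toy version of the finite-Δt error, not necessarily the paper's exact model:

```python
import numpy as np

A, f, dt = 1e-6, 2000.0, 100e-6   # displacement amplitude (m), Hz, frame gap (s)
t = np.linspace(0.0, 1.0 / f, 20001)

x = lambda t: A * np.sin(2 * np.pi * f * t)      # particle trajectory
v_true_amp = A * 2 * np.pi * f                   # true velocity amplitude

# PIV estimates velocity as displacement between frames divided by dt,
# which low-passes the sinusoid by sinc(f*dt)
v_meas = (x(t + dt) - x(t)) / dt
v_meas_amp = np.max(np.abs(v_meas))

v_corrected_amp = v_meas_amp / np.sinc(f * dt)   # np.sinc(x) = sin(pi x)/(pi x)
```

At f·Δt = 0.2 the raw estimate is about 6.5% low; the correction removes essentially all of that bias, and for a multi-tonal excitation the same factor would be applied per frequency component.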

  20. How EFL Students Can Use Google to Correct Their "Untreatable" Written Errors

    Science.gov (United States)

    Geiller, Luc

    2014-01-01

    This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several "untreatable" written errors. Whether or not error correction leads to improved writing has been much debated, with some researchers dismissing it as useless and others arguing that error feedback…

  1. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  2. Zum Problem der muendlichen Fehlerkorrektur (On the Problem of Oral Correction of Errors)

    Science.gov (United States)

    Wullen, T. Lothar

    1975-01-01

    Discrimination among errors is based on the degree of hindrance to understanding. The importance of error correction is emphasized, as is promptness in correction, with many students participating. Various possibilities for correction by students and teacher are presented. (Text is in German.) (IFS/WGA)

  3. Performance Errors in Weight Training and Their Correction.

    Science.gov (United States)

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  4. Energy efficiency of error correcting mechanisms for wireless communications

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal

  5. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
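
    Karect builds its own multiple alignments; as a much simplified stand-in that conveys why alignment-based correction works, suppose reads covering the same region are already aligned at offset zero, so substitution errors can be fixed by a per-column majority vote. This ignores insertions, deletions, and coverage variation, which are precisely the hard parts Karect handles:

```python
from collections import Counter

def majority_correct(reads):
    """Per-column majority vote over equal-length, pre-aligned reads."""
    columns = zip(*reads)
    consensus = "".join(Counter(col).most_common(1)[0][0] for col in columns)
    return [consensus for _ in reads]      # every read corrected to consensus

truth = "ACGTACGGTCAGTTACGATCGGATCAACGTTA"
reads = []
for i in range(7):                         # 7x coverage, one error per read,
    r = list(truth)                        # placed at distinct columns
    pos = (5 * i) % len(truth)
    r[pos] = {"A": "C", "C": "G", "G": "T", "T": "A"}[r[pos]]
    reads.append("".join(r))

corrected = majority_correct(reads)
```

With at most one error per column and 7x coverage, the vote always recovers the true base; real data requires weighting by alignment quality and handling indels, which is where multiple-alignment methods earn their accuracy.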

  6. Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    Directory of Open Access Journals (Sweden)

    Patrick SAINT-DIZIER

    2015-12-01

    In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.

  7. Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?

    Directory of Open Access Journals (Sweden)

    Magdenko A. Y.

    2014-07-01

    The article is devoted to the institute of cadastral error. Some questions and problems of cadastral error correction are considered. The material is based on current legislation and judicial practice.

  8. Reed-Solomon error-correction as a software patch mechanism.

    Energy Technology Data Exchange (ETDEWEB)

    Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
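The workflow the report describes can be illustrated with a much simpler code than Reed-Solomon: below, a single XOR parity block over the new codebase serves as the "error-correction data," and it reconstructs the one changed block of an old codebase. This is a toy stand-in: a real Reed-Solomon code would tolerate several differing blocks and locate them itself, whereas this sketch needs the changed block's index (i.e., it does erasure correction).

```python
BLOCK = 4  # bytes per block (tiny, for illustration only)

def blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def parity(data: bytes) -> bytes:
    """XOR of all blocks: the 'correction data' shipped from upstream."""
    acc = bytearray(BLOCK)
    for b in blocks(data):
        for i, byte in enumerate(b):
            acc[i] ^= byte
    return bytes(acc)

def apply_patch(old: bytes, new_parity: bytes, changed: int) -> bytes:
    """Rebuild the new codebase from the old one plus the new parity,
    given that exactly one block (index `changed`) differs."""
    out = blocks(old)
    acc = bytearray(new_parity)
    for j, b in enumerate(out):
        if j != changed:
            for i, byte in enumerate(b):
                acc[i] ^= byte          # cancel the unchanged blocks
    out[changed] = bytes(acc)           # what remains is the new block
    return b"".join(out)

old = b"def f():pass#v1."   # 16 bytes = 4 blocks; block 2 is "pass"
new = b"def f():ret1#v1."   # block 2 changed to "ret1"
patched = apply_patch(old, parity(new), changed=2)
assert patched == new
```

The validation aspect carries over too: if more blocks differ than the code can handle, reconstruction fails to match the expected parity, signaling that the installed codebase has diverged from what the upstream source assumed.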

  9. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    In the paper, a calibrating method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function's input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits frequency-domain conditioning-path descriptors found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  10. Correcting Text Production Errors: Isolating the Effects of Writing Mode from Error Span, Input Mode, and Lexicality

    Science.gov (United States)

    Leijten, Marielle; Van Waes, Luuk; Ransdell, Sarah

    2010-01-01

    Error analysis involves detecting, diagnosing, and correcting discrepancies between the text produced so far (TPSF) and the writer's mental representation of what the text should be. The use of different writing modes, like keyboard-based word processing and speech recognition, causes different types of errors during text production. While many…

  11. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    OpenAIRE

    Maria Corazon Saturnina A Castro

    2017-01-01

    Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: ...

  12. The Effectiveness of Implicit and Explicit Error Correction on Learners' Performance

    Science.gov (United States)

    Varnosfadrani, Azizollah Dabaghi; Basturkmen, Helen

    2009-01-01

    The study looked at the effects of correction of learners' errors on learning of grammatical features. In particular, the manner of correction (explicit vs. implicit correction) was investigated. The study also focussed on the effectiveness of explicit and implicit correction of developmental early vs. developmental late features. Fifty-six…

  13. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  14. Extending Lifetime of Wireless Sensor Networks using Forward Error Correction

    DEFF Research Database (Denmark)

    Donapudi, S U; Obel, C O; Madsen, Jan

    2006-01-01

    Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...

  15. Oral Reading Error Correction Behavior and Cloze Performance.

    Science.gov (United States)

    Page, William D.

    1979-01-01

    Describes a study that assessed how correction behavior in the oral reading of 48 elementary school students related to comprehension, as measured by cloze performance. Indicates that new measures of reading comprehension are needed, as correction behavior acts as an indicator of comprehension. (TJ)

  16. Realization of three-qubit quantum error correction with superconducting circuits.

    Science.gov (United States)

    Reed, M D; DiCarlo, L; Nigg, S E; Sun, L; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2012-02-01

    Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome--a quantum state indicating which error has occurred--by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.
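The encode/error/syndrome/correct cycle of the three-qubit bit-flip code described above can be followed in a small classical statevector simulation (a hand-rolled sketch, not the superconducting-circuit experiment). The syndrome is the pair of parities checked by the stabilizers Z0Z1 and Z1Z2, which identifies any single flipped qubit without measuring the encoded amplitudes.

```python
import numpy as np

def x_on(state, k):
    """Apply a bit-flip (Pauli X) to qubit k of a 3-qubit statevector."""
    out = np.empty_like(state)
    for i in range(8):
        out[i ^ (1 << k)] = state[i]
    return out

def encode(alpha, beta):
    """Map alpha|0> + beta|1> to the codeword alpha|000> + beta|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = alpha, beta
    return state

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). Valid for bit-flip errors:
    both branches of the corrupted codeword share the same parities."""
    i = int(np.argmax(np.abs(state)))
    bit = lambda k: (i >> k) & 1
    return bit(0) ^ bit(1), bit(1) ^ bit(2)

def correct(state):
    """Flip back the qubit identified by the syndrome (if any)."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(state))
    return x_on(state, flip) if flip is not None else state

psi = encode(0.6, 0.8j)
for k in range(3):                       # any single bit-flip is corrected
    assert np.allclose(correct(x_on(psi, k)), psi)
```

Note how the syndrome reveals only *which* qubit flipped, never the values of alpha and beta, which is what lets the correction proceed without destroying coherence.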

  17. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    Science.gov (United States)

    Zhu, Honglin

    2010-01-01

    This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  18. The Effect of Error Correction on L2 Grammar Knowledge and Oral Proficiency.

    Science.gov (United States)

    Dekeyser, Robert M.

    1993-01-01

    The efficiency of oral error correction was investigated as a function of 35 Dutch-speaking high school seniors' individual characteristics of aptitude, motivation, anxiety, and previous achievement. Results were mixed but generally suggest that error correction does not lead to across-the-board improvement of achievement. (Contains 55…

  19. Students' Preferences and Attitude toward Oral Error Correction Techniques at Yanbu University College, Saudi Arabia

    Science.gov (United States)

    Alamri, Bushra; Fawzi, Hala Hassan

    2016-01-01

    Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…

  20. Antecedent Control of Oral Reading Errors and Self-Corrections by Mentally Retarded Children.

    Science.gov (United States)

    Singh, Nirbhay N.; Singh, Judy

    1984-01-01

    The study evaluated effects of manipulating two antecedent stimulus events with respect to oral reading errors and self-corrections of four mentally retarded adolescents. Oral reading errors decreased and self-corrections increased when the children previewed the target text with their teacher before reading it orally. (Author/CL)

  1. Supporting Dictation Speech Recognition Error Correction: The Impact of External Information

    Science.gov (United States)

    Shi, Yongmei; Zhou, Lina

    2011-01-01

    Although speech recognition technology has made remarkable progress, its wide adoption is still restricted by notable effort made and frustration experienced by users while correcting speech recognition errors. One of the promising ways to improve error correction is by providing user support. Although support mechanisms have been proposed for…

  2. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  3. Continuous-variable quantum error correction I: code comparison

    Science.gov (United States)

    Albert, Victor V.; Duivenvoorden, Kasper; Noh, Kyungjoo; Brierley, R. T.; Reinhold, Philip; Li, Linshu; Shen, Chao; Schoelkopf, R. J.; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    There are currently four types of non-trivial encodings of quantum information in a single bosonic mode: cat, binomial, and numerically optimized codes are designed to protect against bosonic loss errors, while GKP codes are designed to protect against bosonic displacement errors. These four code types have yet to be compared using the same error model. We report on a numerical comparison of the entanglement fidelity of all codes with respect to the lossy bosonic channel, given an average occupation number constraint and the optimal recovery operation. GKP codes demonstrate the highest fidelities for all but the smallest values of the boson loss probability (the parameter which quantifies the strength of amplitude damping). Although designed to protect against small displacement noise, GKP codes can offer a high degree of protection against bosonic loss errors. We also examine the performance of the four code types with respect to the combination of amplitude damping and a strong Kerr non-linearity.

  4. Strategies for Detecting and Correcting Errors in Accounting Problems.

    Science.gov (United States)

    James, Marianne L.

    2003-01-01

    Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)

  5. Continuous-variable quantum error correction II: the Gottesman-Kitaev-Preskill code

    Science.gov (United States)

    Noh, Kyungjoo; Duivenvoorden, Kasper; Albert, Victor V.; Brierley, R. T.; Reinhold, Philip; Li, Linshu; Shen, Chao; Schoelkopf, R. J.; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang

    Recently, various single mode bosonic quantum error-correcting codes (e.g., cat codes and binomial codes) have been developed to correct errors due to excitation loss of bosonic systems. Meanwhile, the Gottesman-Kitaev-Preskill (GKP) codes do not follow the simple design guidelines of cat and binomial codes, but nevertheless demonstrate excellent performance in correcting bosonic loss errors. To understand the underlying mechanism of the GKP codes, we represent them using a superposition of coherent states, investigate their performance as approximate error-correcting codes, and identify the dominant types of uncorrectable errors. This understanding will help us to develop more robust codes against bosonic loss errors, which will be useful for robust quantum information processing with bosonic systems.

  6. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  7. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Jörgen; Ahnesjö, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  8. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error patterns to the number of distinct syndromes.
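The block-code analogue of this syndrome-counting argument is easy to compute: a code with r = n - k parity bits has 2^r distinct syndromes, and since each correctable error pattern must map to its own syndrome, the number of patterns of weight ≤ t cannot exceed 2^r (the Hamming bound). A short sketch:

```python
from math import comb

def max_correctable_weight(n: int, k: int) -> int:
    """Largest t such that sum_{i=0}^{t} C(n, i) <= 2**(n - k),
    i.e. all error patterns of weight <= t fit into distinct syndromes."""
    syndromes = 2 ** (n - k)
    t, patterns = 0, 1                       # the weight-0 pattern
    while patterns + comb(n, t + 1) <= syndromes:
        t += 1
        patterns += comb(n, t)
    return t

# Perfect codes meet the bound with equality:
assert max_correctable_weight(7, 4) == 1     # Hamming(7,4) corrects t = 1
assert max_correctable_weight(23, 12) == 3   # Golay(23,12) corrects t = 3
```

For convolutional codes the same counting is done over syndrome *sequences* of a given length, which is what yields the segment-wise bounds in the abstracts above.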

  9. Spoken Lebanese.

    Science.gov (United States)

    Feghali, Maksoud N.

    This book teaches the Arabic Lebanese dialect through topics such as food, clothing, transportation, and leisure activities. It also provides background material on the Arab World in general and the region where Lebanese Arabic is spoken or understood--Lebanon, Syria, Jordan, Palestine--in particular. This language guide is based on the phonetic…

  10. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results of algebraic-geometric codes' bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  11. The role of extensive recasts in error detection and correction by adult ESL students

    Directory of Open Access Journals (Sweden)

    Laura Hawkes

    2016-03-01

    Most of the laboratory studies on recasts have examined the role of intensive recasts provided repeatedly on the same target structure. This is different from the original definition of recasts as the reformulation of learner errors as they occur naturally and spontaneously in the course of communicative interaction. Using a within-group research design and a new testing methodology (a video-based stimulated correction posttest), this laboratory study examined whether extensive and spontaneous recasts provided during small-group work were beneficial to adult L2 learners. Participants were 26 ESL learners, who were divided into seven small groups (3-5 students per group), and each group participated in an oral activity with a teacher. During the activity, the students received incidental and extensive recasts to half of their errors; the other half of their errors received no feedback. Students' ability to detect and correct their errors in the three types of episodes was assessed using two types of tests: a stimulated correction test (a video-based computer test) and a written test. Students' reaction time on the error detection portion of the stimulated correction task was also measured. The results showed that students were able to detect more errors in error+recast episodes (error followed by the provision of a recast) than in error-recast episodes (error and no recast provided), though this difference did not reach statistical significance. They were also able to successfully and partially successfully correct more errors in error+recast episodes than in error-recast episodes, and this difference was statistically significant on the written test. The reaction time results also point towards a benefit from recasts, as students were able to complete the task slightly more quickly for error+recast episodes than for error-recast episodes.

  12. DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION

    Directory of Open Access Journals (Sweden)

    E. B. Romanova

    2016-03-01

    Subject of Research. We have developed and presented a method of design error correction for printed circuit boards (PCB) in electronic design automation (EDA). Control of PCB process parameters in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of the conductors and gaps, the parameters of pads and via-holes, the parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge, and other design errors. The result of the DRC program run is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem of repeatability in DRC-error correction was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. For its solution, an analysis of DRC errors was carried out and methods for their correction were studied. We proposed clustering the DRC errors: groups of errors contain the error types whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence for DRC errors has been determined. The algorithm has been tested in the following EDA: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC-error correction time with the algorithm has been compared to the correction time without it. It has been shown that the time saved on DRC-error correction increases with the number of error types, up to 3.7 times. Practical Relevance. The proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different

  13. Incident reports--correcting processes and reducing errors.

    Science.gov (United States)

    Dunn, Debra

    2003-08-01

    Although it may be human nature to make mistakes, it also is human nature to create solutions, identify alternatives, and meet future challenges. This article describes systems approaches to assessing the ways in which an organization operates and explains the types of failures that cause errors. The steps that guide managers in adapting an incident reporting system that incorporates continuous quality improvement are identified.

  14. ACE: accurate correction of errors using K-mer tries

    NARCIS (Netherlands)

    Sheikhizadeh Anari, S.; Ridder, de D.

    2015-01-01

    The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to
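The K-mer-spectrum idea behind tools like ACE can be sketched with a plain Python set standing in for the K-mer trie (the trie is an efficiency device; the logic is the same): a K-mer seen often across reads is trusted, and a read containing untrusted K-mers is repaired by the single substitution that makes all of its K-mers trusted again. This is a heavily simplified assumption-laden sketch, not ACE's implementation.

```python
from collections import Counter

K, MIN_COUNT = 4, 2  # toy parameters; real tools tune these to coverage

def kmers(read):
    return [read[i:i + K] for i in range(len(read) - K + 1)]

def trusted_set(reads):
    """K-mers occurring at least MIN_COUNT times are assumed error-free."""
    counts = Counter(km for r in reads for km in kmers(r))
    return {km for km, c in counts.items() if c >= MIN_COUNT}

def correct(read, trusted):
    """Try every single-base substitution until all K-mers are trusted."""
    if all(km in trusted for km in kmers(read)):
        return read
    for i in range(len(read)):
        for base in "ACGT":
            cand = read[:i] + base + read[i + 1:]
            if all(km in trusted for km in kmers(cand)):
                return cand
    return read  # no single substitution fixes it; leave unchanged

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGTAC", "ACGTACCTAC"]
fixed = correct("ACGTACCTAC", trusted_set(reads))
assert fixed == "ACGTACGTAC"
```

A substitution error corrupts up to K consecutive K-mers, which is why a single rare base produces a run of untrusted K-mers that a single well-chosen substitution can repair.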

  15. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    Science.gov (United States)

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  16. Error Correction in a Communicative Language Teaching Framework

    African Journals Online (AJOL)

    The problem that I would like to address in this paper is, I am convinced, experienced by all second language teacher trainers and teachers. In presenting the Communicative Approach and the theory on which it is founded, the data firmly guide them towards the conclusion that error is not the bête noire of language.

  17. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    Science.gov (United States)

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  18. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
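The comparison of range profiles across slow time, followed by a frequency-domain phase correction, can be illustrated in one dimension (a toy with a circular shift and synthetic data, not the patented SAR processing chain, which operates on the uncompressed phase history): estimate the motion-induced shift by cross-correlating profiles, then remove it with the matching phase ramp.

```python
import numpy as np

rng = np.random.default_rng(2)
profile = rng.normal(size=256)     # reference range profile
shift = 9                          # unknown motion-induced range shift
moved = np.roll(profile, shift)    # profile observed after the motion error

# Estimate the shift by circular cross-correlation, computed via FFT:
xcorr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(profile)))
est = int(np.argmax(np.abs(xcorr)))

# A shift in range is a linear phase ramp in the frequency domain;
# multiplying by the opposite ramp undoes it:
freqs = np.fft.fftfreq(moved.size)
corrected = np.fft.ifft(np.fft.fft(moved)
                        * np.exp(2j * np.pi * freqs * est)).real

assert est == shift
assert np.allclose(corrected, profile)
```

The key property used here is the Fourier shift theorem: the shift shows up as phase slope versus frequency, so once it is estimated (even a shift larger than a resolution cell) the correction is a pointwise complex multiply before compression.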

  19. Using ridge regression in systematic pointing error corrections

    Science.gov (United States)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
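The ridge estimator referred to above has a one-line closed form: beta = (X'X + kI)^{-1} X'y, which reduces to ordinary least squares at k = 0. The sketch below uses synthetic nearly collinear regressors (not the Voyager tracking data) to show the characteristic effect: least-squares coefficients blow up along the ill-conditioned direction, while a small ridge term shrinks them without hurting the fit much.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2 * x1 + 3 * x2 + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Biased estimator beta = (X'X + kI)^-1 X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)    # unstable: huge offsetting coefficients
b_ridge = ridge(X, y, 1.0)  # biased but much shorter coefficient vector
```

Ridge trades a little bias for a large reduction in variance along the collinear direction, which is exactly the trade-off exploited when fitting pointing-model parameters from correlated regressors.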

  20. New laws of practice for learning and error correction

    International Nuclear Information System (INIS)

    Duffey, R.B.

    2008-01-01

    Relevant to design, operation and safety is the determination of risk and error rates. We provide the detailed comparison of our new learning and statistical theories for system outcome data with the traditional analysis of the learning curves obtained from tests with individual human subjects. The results provide a consistent predictive basis for the learning trends emerging all the way from timescales of many years in large technological system outcomes to actions that occur in about a tenth of a second for individual human decisions. Hence, we demonstrate both the common influence of the human element and the importance of statistical reasoning and analysis. (author)

  1. 5 CFR 894.105 - Who may correct an error in my enrollment?

    Science.gov (United States)

    2010-01-01

    ... correction of an administrative error if it receives evidence that it would be against equity (fairness) and... periods of the retroactive coverage. These premiums will not be on a pre-tax basis (they are not subject...

  2. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  3. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.
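The redundancy idea — three degenerate reads of the same position, any two of which determine the base while the third acts as a check — can be sketched with a toy model. The three dual-base partitions chosen below are hypothetical channel choices for illustration, not the published chemistry:

```python
# Each position is read in three "degenerate" channels, one per dual-base
# partition of {A, C, G, T}. Any two channel bits pin down the base; the
# third is redundant and flags a corrupted read.
PARTITIONS = [{"A", "C"}, {"A", "G"}, {"A", "T"}]   # hypothetical channels

def encode(base):
    """Base -> three channel bits (membership in each partition)."""
    return tuple(int(base in p) for p in PARTITIONS)

CODEWORDS = {encode(b): b for b in "ACGT"}

def decode(bits):
    """Return (base, ok): ok is False when the three bits are inconsistent."""
    if bits in CODEWORDS:
        return CODEWORDS[bits], True
    return None, False
```

Only 4 of the 8 possible bit patterns are valid, so a single corrupted channel bit always produces a detectable non-codeword.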

  4. Using Oral Error Correction in Storytelling to Improve Students' Speaking Achievement

    OpenAIRE

    Sumantri, Arya Yoga Swara; Sudirman, Sudirman; Supriyadi, Deddy

    2015-01-01

The aims of this research were to find a significant difference in students' speaking achievement after being taught using the oral error correction technique; to find out whether oral error correction can improve students' speaking ability in the aspects of vocabulary, fluency, comprehension, pronunciation and grammar; and to examine the teaching and learning process. This research used a quantitative method. The sample was purposively selected on the basis of the highest English scores, namely class XI IPA1 at SMAN...

  5. Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring

    Energy Technology Data Exchange (ETDEWEB)

    Bunch, S.C.; Holmes, J.

    2004-01-01

We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10^-4 in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
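The correction step described — choosing corrector kicks that minimize the sum of squared BPM signals through an orbit response matrix — is a linear least-squares problem. A minimal sketch with an invented 4-BPM, 2-corrector response matrix:

```python
import numpy as np

# Hypothetical orbit response matrix R: R[i, j] = BPM i signal per unit
# kick of corrector j (values invented for illustration).
R = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9],
              [0.1, 1.1]])
s = np.array([0.9, 1.1, 0.8, 0.7])   # measured closed-orbit distortion

# Choose kicks theta minimizing ||s + R @ theta||^2, i.e. drive the BPM
# readings toward zero in the least-squares sense.
theta, *_ = np.linalg.lstsq(R, -s, rcond=None)
residual = s + R @ theta
```

With more correctors than BPMs the same call yields a minimum-norm solution; here the overdetermined case simply leaves the smallest achievable residual orbit.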

  6. IDENTIFICATION AND CORRECTION OF COORDINATE MEASURING MACHINE GEOMETRICAL ERRORS USING LASERTRACER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2013-12-01

LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometrical errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the use of laser interferometers. The methodology of using the LaserTracer for the correction of geometrical errors, including a presentation of the system, the multilateration method and the software used, is described in detail in this paper.
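The multilateration at the heart of the method — recovering a point from its measured distances to several known tracker positions — can be sketched as a small Gauss-Newton least-squares fit. Station coordinates and the target point below are invented for illustration:

```python
import numpy as np

# Four known tracker station positions (invented, non-coplanar).
stations = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
true_p = np.array([0.3, 0.4, 0.2])
d = np.linalg.norm(stations - true_p, axis=1)   # noise-free "measurements"

p = np.array([0.5, 0.5, 0.5])                   # initial guess
for _ in range(20):                             # Gauss-Newton iterations
    diff = p - stations                         # (4, 3)
    r = np.linalg.norm(diff, axis=1)            # distances at current guess
    J = diff / r[:, None]                       # Jacobian of r w.r.t. p
    step, *_ = np.linalg.lstsq(J, d - r, rcond=None)
    p = p + step
```

With noisy measurements the same loop returns the least-squares position estimate instead of the exact point.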

  7. High-speed parallel forward error correction for optical transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology.

  8. Efficient error correction for next-generation sequencing of viral amplicons

    Directory of Open Access Journals (Sweden)

    Skums Pavel

    2012-06-01

Background: Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results: In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH) in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions: Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm
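The k-mer idea behind KEC can be sketched in a hedged toy version (not the published tool, which additionally calibrates for homopolymer- and position-dependent error rates): k-mers seen many times across reads are trusted, and a substitution that makes every covering k-mer frequent again is taken as the correction.

```python
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def correct_read(read, counts, k, threshold=2):
    """Fix the first base whose substitution makes all covering k-mers frequent."""
    for i, kmer in enumerate(kmers(read, k)):
        if counts[kmer] >= threshold:
            continue
        # rare k-mer found: try substituting each base it covers
        for j in range(i, i + k):
            for b in "ACGT":
                cand = read[:j] + b + read[j + 1:]
                if all(counts[m] >= threshold for m in kmers(cand, k)):
                    return cand
        return read          # no consistent single-base fix found
    return read

# Toy read set: five clean copies and one read with a single error.
reads = ["ACGTACGT"] * 5 + ["ACGTACCT"]
counts = Counter(m for r in reads for m in kmers(r, 4))
```

Real amplicon data needs error-model-aware thresholds; the fixed cutoff of 2 here is only for the toy example.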

  9. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
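The alignment step — registering landmark sets so that relative distances and angles between fixed points are preserved — is closely related to least-squares rigid registration. As an illustrative stand-in for the authors' 6-point algorithm, the closed-form Kabsch solution recovers the rotation and translation between two noise-free landmark sets:

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimizing sum ||R @ p + t - q||^2."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Six hypothetical landmarks in image 1 and their rotated + shifted
# positions in image 2 (a small rotation about the z-axis).
P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [1, 1, 0], [1, 0, 1]])
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
Q = P @ Rz.T + np.array([0.5, -0.2, 0.1])

R, t = kabsch(P, Q)
```

With landmark location error added to Q, the same closed form returns the least-squares transform, which is the sense in which extra landmarks reduce the effect of operator error.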

  10. CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD

    Directory of Open Access Journals (Sweden)

    BUSUIOCEANU STELIANA

    2013-08-01

The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.

  11. Correcting Students' Written Grammatical Errors: The Effects of Negotiated versus Nonnegotiated Feedback

    Science.gov (United States)

    Nassaji, Hossein

    2011-01-01

    A substantial number of studies have examined the effects of grammar correction on second language (L2) written errors. However, most of the existing research has involved unidirectional written feedback. This classroom-based study examined the effects of oral negotiation in addressing L2 written errors. Data were collected in two intermediate…

  12. Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction

    OpenAIRE

    Oflazer, Kemal

    1995-01-01

    Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition...
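The acceptance criterion of error-tolerant recognition — admit a string whose deviation from some member of the recognized set is small — can be sketched with a plain bounded-edit-distance scan. A real system walks the finite-state recognizer directly rather than enumerating the lexicon; this sketch only shows the criterion:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def recognize(word, lexicon, t=1):
    """Accept words of the lexicon within edit distance t of the input."""
    return sorted(w for w in lexicon if edit_distance(word, w) <= t)

lexicon = {"correct", "collect", "connect"}
```

Raising the bound t widens the set of accepted corrections, exactly as in error-tolerant morphological analysis and spelling correction.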

  13. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.
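The attenuation effect of measurement error, and its removal by an instrument-based estimator of the kind underlying Hausman-type corrections, can be seen in a small simulation (illustrative only, not the paper's empirical framework):

```python
import numpy as np

# y = beta * x_true + e, but only x_obs = x_true + u is observed.
# OLS on x_obs is biased toward zero (attenuation); an instrument z that
# is correlated with x_true but not with u or e recovers beta.
rng = np.random.default_rng(1)
n, beta = 20000, 2.0
z = rng.normal(size=n)                   # instrument
x_true = z + rng.normal(size=n)
y = beta * x_true + rng.normal(size=n)
x_obs = x_true + rng.normal(size=n)      # measurement error u

beta_ols = (x_obs @ y) / (x_obs @ x_obs) # attenuated estimate
beta_iv = (z @ y) / (z @ x_obs)          # consistent IV estimate
```

With var(x_true) = 2 and var(u) = 1 the OLS slope shrinks toward roughly two-thirds of the true value, while the instrumental-variables estimate stays close to beta.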

  14. [Errors Analysis and Correction in Atmospheric Methane Retrieval Based on Greenhouse Gases Observing Satellite Data].

    Science.gov (United States)

    Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua

    2016-01-01

High precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile and pressure profile. Surface reflectance is affected by many factors, so it is difficult to obtain a precise value. The uncertainty of surface reflectance will cause large errors in the retrieval result. The uncertainties of the temperature, humidity and pressure profiles are also important sources of retrieval error, and they will cause unavoidable systematic error. This error is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the error caused by these factors. The ratio spectrometry method can decrease the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectrometry into ratio spectrometry. The CO2 band correction method converts column amounts of CH4 into a column-averaged mixing ratio by using the CO2 1.61 μm band, and it can correct the systematic error caused by the temperature, humidity and pressure profiles. The combination of these two correction methods will decrease the effects caused by surface reflectance, temperature profile, humidity profile and pressure profile at the same time and reduce the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that the CH4 column-averaged mixing ratio retrieved after correction was close to the GOSAT Level2 product, and the retrieval precision was up to -0.24%. The studies suggest that the error of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision highly improved by using the ratio spectrometry method and the CO2 band correction method.
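In essence, the CO2 band correction is a proxy ratio: profile-induced systematic errors largely cancel when the CH4 column is divided by the simultaneously retrieved CO2 column, and a model value of the well-mixed XCO2 scales the ratio back to a mixing ratio. A sketch with invented numbers:

```python
# Proxy correction sketch: systematic errors common to both retrievals
# cancel in the ratio; all numbers below are invented for illustration.
def xch4_proxy(ch4_column, co2_column, xco2_model):
    """Columns in consistent units; returns XCH4 in the units of xco2_model."""
    return (ch4_column / co2_column) * xco2_model

# e.g. retrieved columns in molecules/cm^2 and a model XCO2 of 395 ppm
xch4 = xch4_proxy(3.6e19, 7.9e21, 395.0)   # result in ppm
```

The invented columns give an XCH4 near 1.8 ppm, the right order of magnitude for the present-day atmosphere.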

  15. Preferences of ELT Learners in the Correction of Oral Vocabulary and Pronunciation Errors

    Science.gov (United States)

    Ustaci, Hale Yayla; Ok, Selami

    2014-01-01

    Vocabulary is an essential component of language teaching and learning process, and correct pronunciation of lexical items is an ultimate goal for language instructors in ELT programs. Apart from how lexical items should be taught, the way teachers correct oral vocabulary errors as well as those of pronunciation in line with the preferences of…

  16. Did I say dog or cat? A study of semantic error detection and correction in children.

    Science.gov (United States)

    Hanley, J Richard; Cortis, Cathleen; Budd, Mary-Jane; Nozari, Nazbanou

    2016-02-01

Although naturalistic studies of spontaneous speech suggest that young children can monitor their speech, the mechanisms for detection and correction of speech errors in children are not well understood. In particular, there is little research on monitoring semantic errors in this population. This study provides a systematic investigation of detection and correction of semantic errors in children between the ages of 5 and 8 years as they produced sentences to describe simple visual events involving nine highly familiar animals (the moving animals task). Results showed that older children made fewer errors and corrected a larger proportion of the errors that they made than younger children. We then tested the prediction of a production-based account of error monitoring that the strength of the language production system, and specifically its semantic-lexical component, should be correlated with the ability to detect and repair semantic errors. Strength of semantic-lexical mapping, as well as lexical-phonological mapping, was estimated individually for children by fitting their error patterns, obtained from an independent picture-naming task, to a computational model of language production. Children's picture-naming performance was predictive of their ability to monitor their semantic errors above and beyond age. This relationship was specific to the strength of the semantic-lexical part of the system, as predicted by the production-based monitor. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Investigation of phase error correction for digital sinusoidal phase-shifting fringe projection profilometry

    Science.gov (United States)

    Ma, S.; Quan, C.; Zhu, R.; Tay, C. J.

    2012-08-01

    Digital sinusoidal phase-shifting fringe projection profilometry (DSPFPP) is a powerful tool to reconstruct three-dimensional (3D) surface of diffuse objects. However, a highly accurate profile is often hindered by nonlinear response, color crosstalk and imbalance of a pair of digital projector and CCD/CMOS camera. In this paper, several phase error correction methods, such as Look-Up-Table (LUT) compensation, intensity correction, gamma correction, LUT-based hybrid method and blind phase error suppression for gray and color-encoded DSPFPP are described. Experimental results are also demonstrated to evaluate the effectiveness of each method.
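Gamma correction, one of the methods compared, can be sketched with a precomputed look-up table that inverts an assumed projector response. The value gamma = 2.2 below is an assumption; real systems calibrate it from captured fringe images:

```python
# LUT-based gamma correction sketch: the projector's nonlinear response
# I_out = I_in**GAMMA distorts the sinusoidal fringes and hence the
# recovered phase; pre-distorting with the inverse response linearizes it.
GAMMA = 2.2          # assumed device gamma (calibrated in practice)
LEVELS = 256

# LUT[v] = gray level to send so that the projector actually shows v
LUT = [round(((v / (LEVELS - 1)) ** (1.0 / GAMMA)) * (LEVELS - 1))
       for v in range(LEVELS)]

def projector(v):
    """Assumed nonlinear device response (model, not a real device)."""
    return round(((v / (LEVELS - 1)) ** GAMMA) * (LEVELS - 1))

def show_corrected(v):
    return projector(LUT[v])
```

After correction the displayed intensity tracks the requested one to within quantization, so the projected fringes stay sinusoidal and the phase error is suppressed.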

  18. Correction of errors in tandem mass spectrum extraction enhances phosphopeptide identification.

    Science.gov (United States)

    Hao, Piliang; Ren, Yan; Tam, James P; Sze, Siu Kwan

    2013-12-06

The tandem mass spectrum extraction of phosphopeptides is more difficult and error-prone than that of unmodified peptides due to their lower abundance, lower ionization efficiency, the cofragmentation with other high-abundance peptides, and the use of MS(3) on MS(2) fragments with neutral losses. However, there are still no established methods to evaluate its correctness. Here we propose to identify and correct these errors via the combinatorial use of multiple spectrum extraction tools. We evaluated five free and two commercial extraction tools using Mascot and phosphoproteomics raw data from LTQ FT Ultra, in which RawXtract 1.9.9.2 identified the highest number of unique phosphopeptides (peptide expectation value …) when exporting MS/MS fragments. We then corrected the errors by selecting the best extracted MGF file for each spectrum among the three tools for another database search. With the errors corrected, it results in a 22.4% and 12.2% increase in spectrum matches and unique peptide identification, respectively, compared with the best single method. Correction of errors in spectrum extraction improves both the sensitivity and confidence of phosphopeptide identification. Data analysis on nonphosphopeptide spectra indicates that this strategy applies to unmodified peptides as well. The identification of errors in spectrum extraction will promote the improvement of spectrum extraction tools in the future.

  19. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons… In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data.

  20. Local concurrent error detection and correction in data structures using virtual backpointers

    Science.gov (United States)

    Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent

    1989-01-01

A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
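The flavor of the technique can be sketched in a few lines. In this hedged toy version (the paper's exact encoding differs), each node stores a virtual backpointer computed as the XOR of the identities of its neighbors, so a forward move can verify in O(1) time that the node it came from is consistent:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None     # forward reference
        self.virtual = 0     # id(prev) XOR id(next), with 0 for None

def build(keys):
    """Build a list whose nodes carry XOR-encoded virtual backpointers."""
    nodes = [Node(k) for k in keys]
    for i, n in enumerate(nodes):
        n.next = nodes[i + 1] if i + 1 < len(nodes) else None
        prev_id = id(nodes[i - 1]) if i > 0 else 0
        next_id = id(n.next) if n.next else 0
        n.virtual = prev_id ^ next_id
    return nodes

def check_forward(prev, cur):
    """O(1) local check during a forward move: cur's virtual backpointer
    must reproduce the node we just came from."""
    next_id = id(cur.next) if cur.next else 0
    return (cur.virtual ^ next_id) == (id(prev) if prev else 0)

nodes = build(["a", "b", "c", "d"])
ok_before = all(check_forward(nodes[i - 1] if i else None, nodes[i])
                for i in range(len(nodes)))
nodes[2].next = nodes[0]          # corrupt a forward pointer
ok_after = check_forward(nodes[1], nodes[2])
```

A corrupted forward pointer immediately breaks the XOR identity at the node being entered, which is what allows detection without a global traversal.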

  1. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  2. Forward error correction supported 150 Gbit/s error-free wavelength conversion based on cross phase modulation in silicon

    DEFF Research Database (Denmark)

    Hu, Hao; Andersen, Jakob Dahl; Rasmussen, Anders

    2013-01-01

We build a forward error correction (FEC) module and implement it in an optical signal processing experiment. The experiment consists of two cascaded nonlinear optical signal processes: 160 Gbit/s all optical wavelength conversion based on cross phase modulation (XPM) in a silicon nanowire and subsequent 160 Gbit/s-to-10 Gbit/s demultiplexing in a highly nonlinear fiber (HNLF). The XPM based all optical wavelength conversion in silicon is achieved by off-center filtering the red-shifted sideband on the CW probe. We thoroughly demonstrate and verify that the FEC code operates correctly after the optical signal processing, yielding truly error-free 150 Gbit/s (excl. overhead) optically signal processed data after the two cascaded nonlinear processes. © 2013 Optical Society of America.

  3. New Class of Quantum Error-Correcting Codes for a Bosonic Mode

    Directory of Open Access Journals (Sweden)

    Marios H. Michael

    2016-07-01

We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These “binomial quantum codes” are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to “cat codes” based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
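The smallest member of the binomial-code family makes the construction concrete. Its two code words, protecting against a single boson-loss event, are:

```latex
% Lowest-order binomial code: protects against one boson loss.
\begin{align*}
  |W_\uparrow\rangle   &= \tfrac{1}{\sqrt{2}}\left(|0\rangle + |4\rangle\right), &
  |W_\downarrow\rangle &= |2\rangle .
\end{align*}
% Both code words have even photon-number parity and the same mean boson
% number \bar{n} = 2, so a loss event (one application of the annihilation
% operator) sends the code space to odd-parity states: the error is flagged
% by a photon-number-parity measurement and then undone by recovery.
```

The equal mean boson number of the two code words is what makes a loss event carry no information about the encoded state, which is the condition for exact correctability.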

  5. Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging.

    Science.gov (United States)

    Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu

    2017-03-17

    Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.

  7. GW self-screening error and its correction using a local density functional

    Science.gov (United States)

    Wetherell, J.; Hodgson, M. J. P.; Godby, R. W.

    2018-03-01

The self-screening error in electronic structure theory is the part of the self-interaction error that would remain within the GW approximation if the exact dynamically screened Coulomb interaction W were used, causing each electron to artificially screen its own presence. This introduces error into the electron density and ionization potential. We propose a simple, computationally efficient correction to GW calculations in the form of a local density functional, obtained using a series of finite training systems; in tests, this eliminates the self-screening errors in the electron density and ionization potential.

  8. Errors in preparation and administration of parenteral drugs in neonatology: evaluation and corrective actions.

    Science.gov (United States)

    Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra

    2016-12-01

The medication iatrogenic risk is largely unevaluated in neonatology. Objective: Assessment of errors that occurred during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summaries of product characteristics (RCP) and the bibliography. One hundred preparations of 13 different drugs were observed, and 85 errors were detected during the preparation and administration steps. These errors were divided into preparation errors (59% of cases), such as changes to the dilution protocol (32%) and the use of the wrong solvent (11%), and administration errors (41% of cases), such as errors in the timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the stages of preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions were implemented through the establishment of a quality assurance system, which consisted of the development of procedures for the preparation of injectable drugs, the introduction of a labeling system and staff training.

  9. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    Science.gov (United States)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
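    The amplitude-damping channel this scheme is adapted to is easy to write down. Below is a minimal NumPy sketch (a generic illustration, not the paper's Bacon-Shor construction) that builds the single-qubit Kraus operators for damping rate gamma, checks trace preservation, and shows the excited-state population decaying at rate gamma:

```python
import numpy as np

def amplitude_damping_kraus(gamma):
    """Kraus operators for single-qubit amplitude damping with rate gamma."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])  # no-damping branch
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])        # damping (|1> -> |0>) branch
    return E0, E1

def apply_channel(rho, kraus):
    """Apply a channel given by a list of Kraus operators to a density matrix."""
    return sum(E @ rho @ E.conj().T for E in kraus)

gamma = 0.1
E0, E1 = amplitude_damping_kraus(gamma)

# Completeness (trace preservation): sum_i E_i^dag E_i = I
completeness = E0.conj().T @ E0 + E1.conj().T @ E1

# Damping sends the excited state |1><1| toward |0><0|
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])
rho_out = apply_channel(rho1, (E0, E1))
```

Because the E1 branch always maps into |0⟩, a damping event carries its own flag, which is the intuition behind the abstract's point that damping events "need only be detected, not corrected."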

  10. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of e-learning and online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including the self-correction of mistakes. This pilot study, although conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also implies that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.

  11. SimCommSys: taking the errors out of error-correcting code simulations

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
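    In the spirit of SimCommSys's Monte Carlo simulator, though in Python rather than the project's C++ libraries, a complete channel-code simulation can be tiny. The sketch below estimates the bit-error rate (BER) of a 3-fold repetition code over a binary symmetric channel and compares it with the closed-form value:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_repetition_bsc(n_bits, p_flip, n_rep=3):
    """Monte Carlo BER of an n_rep repetition code over a binary symmetric channel."""
    data = rng.integers(0, 2, n_bits)
    coded = np.repeat(data, n_rep)                       # encoder: repeat each bit
    flips = (rng.random(coded.size) < p_flip).astype(int)
    received = coded ^ flips                             # channel: independent bit flips
    # Decoder: majority vote over each group of n_rep received bits
    decoded = (received.reshape(n_bits, n_rep).sum(axis=1) > n_rep // 2).astype(int)
    return np.mean(decoded != data)

p = 0.05
ber_coded = simulate_repetition_bsc(200_000, p)
# Closed form for 3-fold repetition: decoding fails iff >= 2 of 3 bits flip
ber_theory = 3 * p**2 * (1 - p) + p**3
```

Swapping in a different encoder/decoder pair while keeping the channel and the Monte Carlo loop fixed is exactly the component separation the framework advocates.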

  12. Corrective Feedback (CF) and English-Major EFL Learners' Ability in Grammatical Error Detection and Correction

    Science.gov (United States)

    Asassfeh, Sahail M.

    2013-01-01

    Corrective feedback (CF), the implicit or explicit information learners receive indicating a gap between their current, compared to the desired, performance, has been an area of interest for EFL researchers during the last few decades. This study, conducted on 139 English-major prospective EFL teachers, assessed the impact of two CF types…

  13. [Influence of measurement errors of radiation in NIR bands on water atmospheric correction].

    Science.gov (United States)

    Xu, Hua; Li, Zheng-Qiang; Yin, Qiu; Gu, Xing-Fa

    2013-07-01

    For the standard atmospheric correction algorithm over water, the ratio of two near-infrared (NIR) channels is used to select an aerosol model, and the aerosol radiation at every wavelength is then estimated by extrapolation. The uncertainty of radiation measurement in the NIR bands therefore plays an important part in the accuracy of the retrieved water-leaving reflectance. In the present research, error expressions were derived mathematically in order to trace the error propagation from the NIR bands, and the distribution of water-leaving reflectance errors was thoroughly studied. The results show that larger measurement errors produce larger errors in the retrieved water-leaving reflectance, although the NIR band errors sometimes cancel out. Moreover, the higher the aerosol optical depth, or the larger the proportion of small particles in the aerosol, the bigger the retrieval errors.
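    The amplification described here can be illustrated with a toy extrapolation model. The sketch below assumes a simple power-law aerosol spectrum anchored at two hypothetical NIR bands (748 and 869 nm; the band choice and exponent form are illustrative stand-ins, not the standard algorithm's actual aerosol tables), and shows a 5% measurement error in one NIR band growing several-fold by the time it is extrapolated to 443 nm:

```python
import numpy as np

def extrapolate_aerosol(rho_nir1, rho_nir2, lam, lam1=748.0, lam2=869.0):
    """Extrapolate aerosol reflectance from two NIR bands to wavelength lam,
    assuming a single-exponent spectral model (illustrative only)."""
    eps = rho_nir1 / rho_nir2                      # NIR band ratio selects the model
    return rho_nir2 * eps ** ((lam2 - lam) / (lam2 - lam1))

true_val = extrapolate_aerosol(0.010, 0.008, 443.0)

# Perturb the shorter NIR band by +5% and see the error at 443 nm
pert_val = extrapolate_aerosol(0.010 * 1.05, 0.008, 443.0)
rel_err_nir = 0.05
rel_err_blue = abs(pert_val - true_val) / true_val
```

Because the exponent (lam2 - lam) / (lam2 - lam1) exceeds 3 at blue wavelengths, the relative error is raised to that power during extrapolation, which is the propagation effect the abstract analyzes.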

  14. Structured methods for identifying and correcting potential human errors in aviation operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  15. Sandwich corrected standard errors in family-based genome-wide association studies.

    Science.gov (United States)

    Minică, Camelia C; Dolan, Conor V; Kampert, Maarten M D; Boomsma, Dorret I; Vink, Jacqueline M

    2015-03-01

    Given the availability of genotype and phenotype data collected in family members, the question arises which estimator makes optimal use of such data in genome-wide scans. Using simulations, we compared the Unweighted Least Squares (ULS) and Maximum Likelihood (ML) procedures. The former is implemented in Plink and uses a sandwich correction to correct the standard errors for the model misspecification of ignoring the clustering. The latter is implemented in fast linear mixed-model procedures and explicitly models the familial resemblance. However, as it commits to a background model limited to additive genetic and unshared environmental effects, it employs a misspecified model for traits with a shared environmental component. We considered the performance of the two procedures in terms of type I and type II error rates, with correct and incorrect model specification in ML. For traits characterized by moderate to large familial resemblance, using an ML procedure with a correctly specified model for the conditional familial covariance matrix should be the strategy of choice. The potential loss in power encountered by the sandwich corrected ULS procedure does not outweigh its computational convenience. Furthermore, the ML procedure was quite robust under model misspecification in the simulated settings and appreciably more powerful than the sandwich corrected ULS procedure. However, to correct for the effects of model misspecification in ML in circumstances other than those considered here, we propose to use a sandwich correction. We show that the sandwich correction can be formulated in terms of the fast ML method.
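    A minimal sketch of the sandwich (cluster-robust) standard error for ordinary least squares, with families as clusters, is shown below. This illustrates the correction discussed above in its simplest form; it is not Plink's implementation, and the simulated family data are hypothetical:

```python
import numpy as np

def cluster_sandwich_se(X, y, clusters):
    """OLS point estimates with cluster-robust (sandwich) standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)              # the "bread"
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))     # the "meat": per-cluster score outer products
    for c in np.unique(clusters):
        Xc, uc = X[clusters == c], resid[clusters == c]
        s = Xc.T @ uc
        meat += np.outer(s, s)
    cov = XtX_inv @ meat @ XtX_inv                # sandwich: bread * meat * bread
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
n_fam, fam_size = 300, 4
clusters = np.repeat(np.arange(n_fam), fam_size)
fam_effect = np.repeat(rng.normal(0.0, 1.0, n_fam), fam_size)  # shared family component
x = rng.normal(size=n_fam * fam_size)                          # "genotype" predictor
y = 0.5 * x + fam_effect + rng.normal(size=n_fam * fam_size)   # true slope 0.5

X = np.column_stack([np.ones_like(x), x])
beta, se = cluster_sandwich_se(X, y, clusters)
```

The mean model here ignores the family clustering entirely (as ULS does); the sandwich only repairs the standard errors, which is why the abstract contrasts it with ML procedures that model the familial covariance directly.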

  16. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    Directory of Open Access Journals (Sweden)

    Maria Corazon Saturnina A Castro

    2017-10-01

    Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and to examine their implications for classroom practices. This paper poses the major problem: How do teachers' perspectives on corrective feedback match the students' views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that teachers and students perceive corrective feedback differently. These differences must be addressed, as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even at the tertiary level, and further understanding the learning process.

  17. Biometrics encryption combining palmprint with two-layer error correction codes

    Science.gov (United States)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. Firstly, randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images, fused with the encoded keys by an XOR operation, and the result is stored in a smart card. Finally, the original keys are recovered by XORing the information in the smart card with the user's palmprint features and then decoding with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
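    The XOR-based key binding can be sketched as a fuzzy-commitment scheme. The toy below substitutes a 5-fold repetition code for the paper's convolutional + cyclic layers, and random bit strings for real palmprint features; both are stand-ins chosen only to show the enrol/recover round trip tolerating a noisy re-reading:

```python
import numpy as np

rng = np.random.default_rng(2)

def encode_repetition(bits, n_rep=5):
    """Toy ECC layer: repeat each key bit n_rep times."""
    return np.repeat(bits, n_rep)

def decode_repetition(coded, n_rep=5):
    """Majority-vote decoding of the repetition code."""
    return (coded.reshape(-1, n_rep).sum(axis=1) > n_rep // 2).astype(np.uint8)

# Enrolment: lock a random key with the biometric template (hypothetical
# 200-bit feature vector standing in for real palmprint features)
key = rng.integers(0, 2, 40, dtype=np.uint8)
template = rng.integers(0, 2, 200, dtype=np.uint8)
helper_data = encode_repetition(key) ^ template      # stored on the smart card

# Verification: a fresh, slightly noisy reading of the same palmprint
noise = (rng.random(200) < 0.05).astype(np.uint8)    # ~5% of bits flipped
query = template ^ noise
recovered = decode_repetition(helper_data ^ query)   # ECC absorbs the biometric noise
```

Neither the key nor the template is stored in the clear; the error-correcting layer is what lets a noisy but genuine reading cancel the template and reveal the key.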

  18. Is a Genome a Codeword of an Error-Correcting Code?

    Science.gov (United States)

    Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
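    What "being a codeword" means can be made concrete with the binary Hamming (7,4) code. (The paper identifies DNA sequences with codewords of Hamming codes over larger alphabets; this binary sketch is only illustrative.) A zero syndrome certifies membership, and a nonzero syndrome reads out the position of a single flipped symbol:

```python
import numpy as np

# Parity-check matrix of the binary Hamming (7,4) code; column j is j+1 in binary.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    """Zero syndrome <=> word is a codeword; otherwise the syndrome is the
    (1-based) position of a single-bit error, written in binary."""
    return (H @ word) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid (7,4) codeword
corrupted = codeword.copy()
corrupted[4] ^= 1                            # flip index 4, i.e. position 5

s = syndrome(corrupted)
error_pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])  # binary -> position

corrected = corrupted.copy()
corrected[error_pos - 1] ^= 1                # correct the located bit
```

Testing whether a gene maps to a zero syndrome under some parity-check matrix is, in essence, the membership question the paper asks of real sequences.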

  19. Full-Diversity Space-Time Error Correcting Codes with Low-Complexity Receivers

    Directory of Open Access Journals (Sweden)

    Hassan MohamadSayed

    2011-01-01

    Full Text Available We propose an explicit construction of full-diversity space-time block codes, under the constraint of an error correction capability. Furthermore, these codes are constructed in order to be suitable for a serial concatenation with an outer linear forward error correcting (FEC code. We apply the binary rank criterion, and we use the threaded layering technique and an inner linear FEC code to define a space-time error-correcting code. When serially concatenated with an outer linear FEC code, a product code can be built at the receiver, and adapted iterative receiver structures can be applied. An optimized hybrid structure mixing MMSE turbo equalization and turbo product code decoding is proposed. It yields reduced complexity and enhanced performance compared to previous existing structures.

  20. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.

  1. Correcting students’ written grammatical errors: The effects of negotiated versus nonnegotiated feedback

    Directory of Open Access Journals (Sweden)

    Hossein Nassaji

    2011-10-01

    Full Text Available A substantial number of studies have examined the effects of grammar correction on second language (L2) written errors. However, most of the existing research has involved unidirectional written feedback. This classroom-based study examined the effects of oral negotiation in addressing L2 written errors. Data were collected in two intermediate adult English as a second language classes. Three types of feedback were compared: nonnegotiated direct reformulation, feedback with limited negotiation (i.e., prompt + reformulation), and feedback with negotiation. The linguistic targets chosen were the two most common grammatical errors in English: articles and prepositions. The effects of feedback were measured by means of learner-specific error identification/correction tasks administered three days, and again ten days, after the treatment. The results showed an overall advantage for feedback that involved negotiation. However, a comparison of data per error type showed that the differential effects of feedback types were mainly apparent for article errors rather than preposition errors. These results suggest that while negotiated feedback may play an important role in addressing L2 written errors, the degree of its effects may differ for different linguistic targets.

  2. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution; therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in computational cost.
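    One classic detection-and-correction mechanism for matrix-valued data structures is checksum-based algorithm-based fault tolerance. The sketch below is a generic illustration (not the paper's Hartree-Fock-specific scheme): it appends a checksum row and column to a matrix, then locates and repairs a single silent corruption from the checksum discrepancies alone:

```python
import numpy as np

def protect(A):
    """Augment A with a checksum row, a checksum column, and a corner total."""
    P = np.zeros((A.shape[0] + 1, A.shape[1] + 1))
    P[:-1, :-1] = A
    P[-1, :-1] = A.sum(axis=0)   # column checksums
    P[:-1, -1] = A.sum(axis=1)   # row checksums
    P[-1, -1] = A.sum()
    return P

def detect_and_correct(P):
    """Locate a single corrupted data entry via row/column checksum
    discrepancies and subtract the discrepancy to repair it."""
    row_err = P[:-1, :-1].sum(axis=1) - P[:-1, -1]
    col_err = P[:-1, :-1].sum(axis=0) - P[-1, :-1]
    i = int(np.argmax(np.abs(row_err)))
    j = int(np.argmax(np.abs(col_err)))
    fixed = P.copy()
    fixed[i, j] -= row_err[i]
    return fixed

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
P = protect(A)

# Inject a single silent corruption, as a soft error would
P_bad = P.copy()
P_bad[1, 2] += 7.5

P_fixed = detect_and_correct(P_bad)
```

The check is cheap relative to the computation it protects, which is the trade-off behind the abstract's "moderate increase in computational cost."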

  3. Extending the lifetime of a quantum bit with error correction in superconducting circuits

    Science.gov (United States)

    Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.

    2016-08-01

    Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0⟩ and |1⟩ Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.

  4. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  5. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  6. Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads

    OpenAIRE

    Song, Li; Florea, Liliana

    2015-01-01

    Background Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. Findings We developed a k-m...
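    The k-mer-spectrum idea behind such correctors can be sketched simply: in deep sequencing data, k-mers seen only once or twice are likely to contain errors. The toy below is illustrative only (Rcorrector's actual algorithm is k-mer based but far more sophisticated, handling variable coverage and alternative splicing); it flags the positions of a read that are covered solely by rare k-mers:

```python
from collections import Counter

def kmer_counts(reads, k=5):
    """Count all k-mers across a collection of reads."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def suspicious_positions(read, counts, k=5, min_count=2):
    """Window start positions whose k-mer is rare: likely sequencing errors."""
    return [i for i in range(len(read) - k + 1)
            if counts[read[i:i + k]] < min_count]

# 20 identical error-free reads plus one read with a single T->A error at index 7
reads = ["ACGTACGTACGT"] * 20 + ["ACGTACGAACGT"]
counts = kmer_counts(reads)
flags = suspicious_positions("ACGTACGAACGT", counts)
```

Every 5-mer window overlapping the erroneous base (windows 3 through 7) is unique in the data set, so the error's neighborhood is flagged while the rest of the read passes.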

  7. Short-term wind power combined forecasting based on error forecast correction

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng

    2016-01-01

    Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
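    The two-stage idea, forecast, then forecast the forecast errors and add the correction, can be sketched with a deliberately crude base model whose errors are strongly autocorrelated and therefore predictable. The AR(1) error model below is a simple stand-in for the paper's support vector machine / extreme learning machine error forecasters, and the series is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic autocorrelated "wind power" series (AR(1) dynamics)
n = 2000
power = np.zeros(n)
for t in range(1, n):
    power[t] = 0.9 * power[t - 1] + rng.normal(scale=0.5)

# Base forecaster: the unconditional mean (deliberately simple), so its
# one-step errors inherit the series' autocorrelation
base_fc = np.zeros(n - 1)
err = power[1:] - base_fc                  # base-model forecast errors

# Error-forecast model: AR(1) on past errors, fit by least squares on a split
split = 1500
a = np.sum(err[1:split] * err[:split - 1]) / np.sum(err[:split - 1] ** 2)

# Correct the base forecast on the held-out period
err_fc = a * err[split - 1:-1]             # predicted next-step errors
corrected_fc = base_fc[split:] + err_fc

actual = power[split + 1:]
rmse_base = np.sqrt(np.mean((actual - base_fc[split:]) ** 2))
rmse_corr = np.sqrt(np.mean((actual - corrected_fc) ** 2))
```

The correction only helps to the extent the error series is predictable, which is why the paper's correlation analysis of forecast errors precedes the design of the combined models.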

  8. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
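    The flavor of correcting for measurement error with replicate measures, and then bootstrapping the corrected estimator, can be shown in a linear-regression analogue. This is simpler than the article's logistic case and resamples observations rather than using the article's model-based scheme; the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 800
x = rng.normal(size=n)                      # true predictor (unobserved)
w1 = x + rng.normal(scale=0.7, size=n)      # two replicate error-prone measures
w2 = x + rng.normal(scale=0.7, size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)      # true slope is 2.0

def corrected_slope(w1, w2, y):
    """Method-of-moments correction: the naive slope on the averaged measure
    is attenuated by the reliability ratio, which the replicates estimate."""
    w = (w1 + w2) / 2
    naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
    sigma_u2 = np.var(w1 - w2, ddof=1) / 2            # single-measure error variance
    reliability = 1 - (sigma_u2 / 2) / np.var(w, ddof=1)  # mean of 2 replicates
    return naive / reliability

est = corrected_slope(w1, w2, y)

# Nonparametric bootstrap: resample subjects, re-run the whole correction
boot = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boot.append(corrected_slope(w1[idx], w2[idx], y[idx]))
se = np.std(boot, ddof=1)
```

Re-running the entire correction inside each bootstrap replicate is the key point: the resulting standard error reflects the uncertainty of the correction itself, not just of the naive fit.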

  9. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  10. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...

  11. Government spending in education and economic growth in Cameroon: a Vector Error Correction Model approach

    OpenAIRE

    Douanla Tayo, Lionel; Abomo Fouda, Marcel Olivier

    2015-01-01

    This study aims at assessing the effect of government spending in education on economic growth in Cameroon over the period 1980-2012 using a vector error correction model. The estimated results show that these expenditures had a significant and positive impact on economic growth both in short and long run. The estimated error correction model shows that an increase of 1% of the growth rate of private gross fixed capital formation and government education spending led to increases of 5.03% a...
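    An error-correction model of this kind can be estimated with the Engle-Granger two-step procedure. The sketch below uses synthetic cointegrated series as hypothetical stand-ins for the growth and education-spending data; the negative coefficient on the lagged error-correction term is the speed of adjustment back to the long-run relation:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000

# Synthetic cointegrated pair: a random-walk driver, and a second series
# that tracks it with stationary deviations
gdp = np.cumsum(rng.normal(size=n))
spend = 0.8 * gdp + rng.normal(scale=0.5, size=n)

# Step 1: long-run (cointegrating) regression; its residual is the
# disequilibrium, i.e. the error-correction term (ECT)
b = np.sum(gdp * spend) / np.sum(gdp ** 2)
ect = spend - b * gdp

# Step 2: short-run dynamics -- regress the differenced series on the lagged ECT
d_spend = np.diff(spend)
X = np.column_stack([np.ones(n - 1), ect[:-1]])
alpha = np.linalg.lstsq(X, d_spend, rcond=None)[0]
adjustment_speed = alpha[1]   # expected negative: deviations decay over time
```

A significantly negative `adjustment_speed` is what justifies the error-correction interpretation: departures from the long-run relation between the two series are progressively corrected.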

  12. Environment-assisted error correction of single-qubit phase damping

    International Nuclear Information System (INIS)

    Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.

    2011-01-01

    Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.

  13. Impact of an electronic medical record on the incidence of antiretroviral prescription errors and HIV pharmacist reconciliation on error correction among hospitalized HIV-infected patients.

    Science.gov (United States)

    Batra, Rishi; Wolbach-Lowes, Jane; Swindells, Susan; Scarsi, Kimberly K; Podany, Anthony T; Sayles, Harlan; Sandkovsky, Uriel

    2015-01-01

    Previous review of admissions from 2009-2011 in our institution found a 35.1% error rate in antiretroviral (ART) prescribing, with 55% of errors never corrected. Subsequently, our institution implemented a unified electronic medical record (EMR) and we developed a medication reconciliation process with an HIV pharmacist. We report the impact of the EMR on the incidence of errors and of the pharmacist intervention on time to error correction. Prospective medical record review of HIV-infected patients hospitalized for >24 h between 9 March 2013 and 10 March 2014. An HIV pharmacist reconciled outpatient ART prescriptions with inpatient orders within 24 h of admission. Prescribing errors were classified and time to error correction recorded. Error rates and time to correction were compared to historical data using relative risks (RR) and logistic regression models. 43 medication errors were identified in 31/186 admissions (16.7%). The incidence of errors decreased significantly after EMR implementation (RR 0.47, 95% CI 0.34, 0.67). Logistic regression adjusting for gender and race/ethnicity found that errors were 61% less likely to occur using the EMR (95% CI 40%, 75%; P < ...). Of the errors identified, 65% were corrected within 24 h and 81.4% within 48 h. Compared to historical data, in which only 31% of errors were corrected in ..., errors were 9.4× more likely to be corrected within 24 h with HIV pharmacist intervention (P < ...). The EMR reduced the error rate by more than 50% but despite this, ART errors remained common. HIV pharmacist intervention was key to timely error correction.
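The relative risk and its confidence interval quoted above follow the usual log-scale normal approximation. A sketch of that calculation, where the historical comparison counts are hypothetical (only the 31/186 post-EMR figure appears in the abstract):

```python
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """Relative risk of an event in group 1 (a/n1) vs group 2 (b/n2),
    with a normal-approximation 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 31 error admissions out of 186 post-EMR (from the abstract) against a
# purely hypothetical historical 65 out of 183.
rr, lo, hi = relative_risk(31, 186, 65, 183)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```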

  14. A power supply error correction method for single-ended digital audio class D amplifiers

    Science.gov (United States)

    Yu, Zeqi; Wang, Fengqin; Fan, Yangyu

    2016-12-01

    In single-ended digital audio class D amplifiers (CDAs), the errors caused by power supply noise in the power stages degrade the output performance seriously. In this article, a novel power supply error correction method is proposed. This method introduces the power supply noise of the power stage into the digital signal processing block and builds a power supply error corrector between the interpolation filter and the uniform-sampling pulse width modulation (UPWM) lineariser to pre-correct the power supply error in the single-ended digital audio CDA. The theoretical analysis and implementation of the method are also presented. To verify the effectiveness of the method, a two-channel single-ended digital audio CDA with different power supply error correction methods is designed, simulated, implemented and tested. The simulation and test results obtained show that the method can greatly reduce the error caused by the power supply noise with low hardware cost, and that the CDA with the proposed method can achieve a total harmonic distortion + noise (THD + N) of 0.058% for a -3 dBFS, 1 kHz input when a 55 V linear unregulated direct current (DC) power supply (with the -51 dBFS, 100 Hz power supply noise) is used in the power stages.
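At its core, the pre-correction idea above scales the modulator input by the ratio of nominal to measured supply voltage, so that the stage output, which is proportional to duty cycle times supply voltage, stays at its nominal value. A toy sketch of that principle (idealized averaged model with illustrative numbers; not the authors' UPWM implementation):

```python
import math

V_NOM = 55.0  # nominal supply voltage of the power stage

def power_stage(duty, v_supply):
    # Idealized single-ended stage: averaged output is duty * supply.
    return duty * v_supply

def precorrect(duty, v_measured):
    # Scale the duty cycle by the inverse supply deviation, so that
    # (duty * V_NOM / v) * v == duty * V_NOM regardless of ripple.
    return duty * V_NOM / v_measured

# 100 Hz ripple on the supply corrupts the output unless pre-corrected.
err_raw, err_corr = [], []
for k in range(1000):
    v = V_NOM + 0.5 * math.sin(2 * math.pi * 100 * k / 48000)
    duty = 0.5 * math.sin(2 * math.pi * 1000 * k / 48000)  # 1 kHz audio
    ideal = duty * V_NOM
    err_raw.append(abs(power_stage(duty, v) - ideal))
    err_corr.append(abs(power_stage(precorrect(duty, v), v) - ideal))

print(max(err_raw), max(err_corr))
```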

  15. A Study of Students’ and Teachers’ Preferences and Attitudes towards Correction of Classroom Written Errors in Iranian EFL Context

    Directory of Open Access Journals (Sweden)

    Leila Hajian

    2014-09-01

    Full Text Available Written error correction may be the most widely used method for responding to student writing. Although there are various studies investigating error correction, little research has considered teachers’ and students’ preferences towards written error correction. The present study investigates students’ and teachers’ preferences and attitudes towards correction of classroom written errors in the Iranian EFL context using a questionnaire. In this study, 80 students and 12 teachers were asked to answer the questionnaire. The data were then collected and analyzed descriptively. The findings show that both teachers and students hold positive attitudes towards written error correction. Although the results demonstrate that teachers and students share some preferences related to written error correction, there are some important discrepancies. For example, students prefer all errors to be corrected, while teachers prefer to select some; students also prefer teacher correction over peer or self-correction. The study considers a number of difficulties that students and teachers face in the written error correction process and offers some suggestions. It shows that many teachers believe written error correction takes a great deal of time and effort, while many students have no problem rewriting their papers after getting feedback, which may improve their writing and give them self-confidence.

  16. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

    Full Text Available The major error factors of multi-channel measuring instruments, with both the classical structure and the isolated one, are identified on the basis of an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested, and their metrological properties during automatic error adjustment are analysed. It was experimentally found that the unadjusted error does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment for multi-channel measuring instruments as well as for the calibration tools used for their verification.
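The remote additive-and-multiplicative correction described above amounts to a two-point calibration against code-controlled reference measures: readings at two known voltages determine the channel's gain and offset, which then correct every subsequent reading. A minimal sketch with a hypothetical linear channel model:

```python
def calibrate(read, v_low, v_high):
    """Derive additive (offset) and multiplicative (gain) corrections for
    a measuring channel from two code-controlled reference voltages."""
    r_low, r_high = read(v_low), read(v_high)
    gain = (r_high - r_low) / (v_high - v_low)
    offset = r_low - gain * v_low
    return lambda reading: (reading - offset) / gain

# Hypothetical channel with a 2% gain error and a 3 mV offset.
channel = lambda v: 1.02 * v + 0.003
correct = calibrate(channel, 0.0, 10.0)

recovered = correct(channel(4.0))
print(recovered)  # the true 4.0 V input, to within rounding
```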

  17. Fringe order error in multifrequency fringe projection phase unwrapping: reason and correction.

    Science.gov (United States)

    Zhang, Chunwei; Zhao, Hong; Zhang, Lu

    2015-11-10

    A multifrequency fringe projection phase unwrapping algorithm (MFPPUA) is important to fringe projection profilometry, especially when a discontinuous object is measured. However, a fringe order error (FOE) may occur when MFPPUA is adopted. An FOE introduces an error into the unwrapped phase. Although this kind of phase error does not spread, it degrades the eventual 3D measurement results. Therefore, an FOE or its adverse influence should be obviated. In this paper, reasons for the occurrence of an FOE are theoretically analyzed and experimentally explored. Methods to correct the phase error caused by an FOE are proposed. Experimental results demonstrate that the proposed methods are valid in eliminating the adverse influence of an FOE.
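In multifrequency unwrapping the fringe order is conventionally computed by rounding the scaled difference between the coarse and fine phases; an FOE occurs when noise pushes that rounding to the wrong integer. A noiseless sketch of this standard computation (the generic algorithm, not the authors' correction method):

```python
import math

TWO_PI = 2 * math.pi

def unwrap(phi_fine, phi_coarse, ratio):
    """Multifrequency (temporal) phase unwrapping: the coarse, unambiguous
    phase fixes the fringe order k of the wrapped fine phase. Noise that
    pushes this rounding to a wrong integer is exactly a fringe order
    error (FOE)."""
    k = round((ratio * phi_coarse - phi_fine) / TWO_PI)
    return phi_fine + TWO_PI * k

# Synthetic noiseless example: true fine phase 45 rad, frequency ratio 8.
true_phase = 45.0
ratio = 8
phi_coarse = true_phase / ratio           # unambiguous, within [0, 2*pi)
phi_fine = math.fmod(true_phase, TWO_PI)  # wrapped fine-frequency phase

result = unwrap(phi_fine, phi_coarse, ratio)
print(result)
```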

  18. How EFL students can use Google to correct their “untreatable” written errors

    Directory of Open Access Journals (Sweden)

    Luc Geiller

    2014-09-01

    Full Text Available This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several “untreatable” written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback leads to more grammatical accuracy. In her response to Truscott (1996), Ferris (1999) explains that it would be unreasonable to abolish correction given the present state of knowledge, and that further research needs to focus on which types of errors are more amenable to which types of error correction. In her attempt to respond more effectively to her students’ errors, she made the distinction between “treatable” and “untreatable” ones: the former occur in “a patterned, rule-governed way” and include problems with verb tense or form, subject-verb agreement, run-ons, noun endings, articles and pronouns, while the latter include a variety of lexical errors and problems with word order and sentence structure, including missing and unnecessary words. Substantial research on the use of search engines as a tool for L2 learners has been carried out, suggesting that the web plays an important role in fostering language awareness and learner autonomy (e.g. Shei 2008a, 2008b; Conroy 2010). According to Bathia and Richie (2009: 547), “the application of Google for language learning has just begun to be tapped.” Within the framework of this study it was assumed that the students, conversant with digital technologies and using Google and the web on a regular basis, could use various search options and the search results to self-correct their errors instead of relying on their teacher to provide direct feedback. After receiving some in-class training on how to formulate Google queries, the students were asked to use a customized Google search engine limiting searches to 28 information websites to correct up to...

  19. Justifications of policy-error correction: a case study of error correction in the Three Mile Island Nuclear Power Plant Accident

    International Nuclear Information System (INIS)

    Kim, Y.P.

    1982-01-01

    The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, as well as the NRC, have researched the lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures designed to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island Plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms - determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information used to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions.

  20. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  1. Response Repetition as an Error-Correction Strategy for Teaching Subtraction Facts

    Science.gov (United States)

    Reynolds, Jennifer L.; Drevon, Daniel D.; Schafer, Bradley; Schwartz, Kaitlyn

    2016-01-01

    This study examined the impact of response repetition as an error-correction strategy in teaching subtraction facts to three students with learning difficulties. Written response repetition (WRR) and oral response repetition (ORR) were compared using an alternating treatments design nested in a multiple baseline design across participants.…

  2. Error correction, co-integration and import demand function for Nigeria

    African Journals Online (AJOL)

    The objective of this study is to determine empirically the import demand equation for Nigeria using error correction and cointegration techniques. All the variables employed in this study were found stationary at first difference using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. Empirical evidence from ...

  3. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    Czech Academy of Sciences Publication Activity Database

    Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.

    2013-01-01

    Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188

  4. Performance calculation of the Compact Disc error correcting code on a memoryless channel

    NARCIS (Netherlands)

    Driessen, L.H.M.E.; Vries, L.B.

    2015-01-01

    Recently N.V. PHILIPS of The Netherlands and SONY CORP. of Japan made a joint proposal for standardization of their COMPACT DISC Digital Audio system. This standard, as agreed upon, includes the choice of an error correcting code called CIRC (Cross Interleave Reed Solomon Code), according to which the

  5. Data compression/error correction digital test system. Appendix 3: Maintenance. Book 2: Receiver assembly drawings

    Science.gov (United States)

    1972-01-01

    The assembly drawings of the receiver unit are presented for the data compression/error correction digital test system. Equipment specifications are given for the various receiver parts, including the TV input buffer register, delta demodulator, TV sync generator, memory devices, and data storage devices.

  6. Inserting Mastered Targets during Error Correction When Teaching Skills to Children with Autism

    Science.gov (United States)

    Plaisance, Lauren; Lerman, Dorothea C.; Laudont, Courtney; Wu, Wai-Ling

    2016-01-01

    Research has identified a variety of effective approaches for responding to errors during discrete-trial training. In one commonly used method, the therapist delivers a prompt contingent on the occurrence of an incorrect response and then re-presents the trial so that the learner has an opportunity to perform the correct response independently.…

  7. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate

  8. Grammar Instruction and Error Correction: A Matter of Iranian Students' Beliefs

    Science.gov (United States)

    Ganjabi, Mahyar

    2011-01-01

    Introduction: So far the role of grammar instruction and error correction has been mainly analyzed from the teachers' perspectives. However, learners' attitudes can also affect the effectiveness of any type of learning, especially language learning. Therefore, language learners' attitudes and beliefs should also be considered as a determining…

  9. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  10. On the Security of Digital Signature Schemes Based on Error-Correcting Codes

    NARCIS (Netherlands)

    Xu, Sheng-bo; Doumen, J.M.; van Tilborg, Henk

    We discuss the security of digital signature schemes based on error-correcting codes. Several attacks to the Xinmei scheme are surveyed, and some reasons given to explain why the Xinmei scheme failed, such as the linearity of the signature and the redundancy of public keys. Another weakness is found

  11. The dynamics of entry, exit and profitability: an error correction approach for the retail industry

    NARCIS (Netherlands)

    M.A. Carree (Martin); A.R. Thurik (Roy)

    1994-01-01

    We develop a two-equation error correction model to investigate determinants of, and dynamic interaction between, changes in profits and the number of firms in retailing. An explicit distinction is made between the effects of actual competition among incumbents, new firms competition and

  12. Simple Reed-Solomon Forward Error Correction (FEC) Scheme for FECFRAME

    OpenAIRE

    Roca, Vincent; Cunche, Mathieu; Lacan, Jérôme; Bouabdallah, Amine; Matsuzono, Kazuhisa

    2013-01-01

    Internet Engineering Task Force (IETF) Request for Comments 6865; This document describes a fully-specified simple Forward Error Correction (FEC) scheme for Reed-Solomon codes over the finite field (also known as the Galois Field) GF(2^m), with 2
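Reed-Solomon codes as used in FECFRAME recover lost packets (erasures) by polynomial interpolation over a finite field: any k of the n encoded symbols determine the k source symbols. A minimal sketch over the prime field GF(257), rather than the RFC's GF(2^m) arithmetic:

```python
P = 257  # prime field; byte-valued symbols 0..255 fit without collision

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(data, n):
    # The k data symbols are the coefficients of a degree-(k-1) polynomial;
    # the n shares are its evaluations at x = 1..n.
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def recover(shares, k):
    # Any k surviving shares determine the polynomial, and hence the data,
    # by Lagrange interpolation over GF(P).
    pts = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                basis = poly_mul(basis, [-xj % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for e, c in enumerate(basis):
            coeffs[e] = (coeffs[e] + c * scale) % P
    return coeffs

data = [72, 105, 33]                           # k = 3 source symbols
shares = encode(data, 6)                       # n = 6 encoded symbols
survivors = [shares[1], shares[3], shares[4]]  # any 3 of the 6 suffice
print(recover(survivors, 3))
```

Production RS codecs work in GF(2^m) for byte alignment and speed, but the erasure-recovery logic is the same interpolation shown here.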

  13. Some errors in respirometry of aquatic breathers: How to avoid and correct for them

    DEFF Research Database (Denmark)

    STEFFENSEN, JF

    1989-01-01

    Respirometry in closed and flow-through systems is described with the objective of pointing out the problems and sources of error involved and how to correct for them. Both closed respirometry applied to resting and active animals and intermittent-flow respirometry are described. In addition, flow...

  14. The Use of Corpus Concordancing for Second Language Learners' Self Error-Correction

    Science.gov (United States)

    Feng, Hui-Hsien

    2014-01-01

    Corpus concordancing has been utilized in second language (L2) writing classrooms for a few decades. Some studies have shown that this application is helpful, to a certain degree, to learners' writing process. However, how corpus concordancing is utilized for nonnative speakers' (NNSs) self error-correction in writing, especially the pattern of…

  15. Retesting the Limits of Data-Driven Learning: Feedback and Error Correction

    Science.gov (United States)

    Crosthwaite, Peter

    2017-01-01

    An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…

  16. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  17. Inclinometer Assembly Error Calibration and Horizontal Image Correction in Photoelectric Measurement Systems

    Directory of Open Access Journals (Sweden)

    Xiaofang Kong

    2018-01-01

    Full Text Available Inclinometer assembly error is one of the key factors affecting the measurement accuracy of photoelectric measurement systems. In order to solve the problem of the lack of complete attitude information in the measurement system, this paper proposes a new inclinometer assembly error calibration and horizontal image correction method utilizing plumb lines in the scene. Based on the principle that a plumb line in the scene should appear as a vertical line on the image plane when the camera of the photoelectric system is placed horizontally, the direction cosine matrix between the geodetic coordinate system and the inclinometer coordinate system is first calculated by three-dimensional coordinate transformation. Then, the homography matrix required for horizontal image correction is obtained, along with the constraint equation satisfied by the inclinometer-camera system. Finally, the assembly error of the inclinometer is calibrated by an optimization function. Experimental results show that the inclinometer assembly error can be calibrated using only the inclination angle information in conjunction with plumb lines in the scene. Perturbation simulations and practical experiments in MATLAB indicate the feasibility of the proposed method. The inclined image can also be horizontally corrected by the homography matrix obtained during the calculation of the inclinometer assembly error.
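The simplest special case of the idea above, in-plane roll only, can be sketched directly: the image of a plumb line fixes the roll angle, and rotating image coordinates by its negative levels the image. (The paper's full method estimates a 3D homography; this 2D rotation is only an illustration.)

```python
import math

def roll_from_plumb_line(p1, p2):
    """Camera roll implied by the image of a plumb line: for a level
    camera the line should run along +y, so any x-drift is roll."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.atan2(-dx, dy)

def rotate(p, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

# A vertical plumb line imaged by a camera rolled 5 degrees.
tilt = math.radians(5)
p1, p2 = rotate((0.0, 0.0), tilt), rotate((0.0, 100.0), tilt)

roll = roll_from_plumb_line(p1, p2)
q1, q2 = rotate(p1, -roll), rotate(p2, -roll)
print(math.degrees(roll), q2[0] - q1[0])  # recovered angle; line vertical again
```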

  18. Improved HDRG decoders for qudit and non-Abelian quantum error correction

    Science.gov (United States)

    Hutter, Adrian; Loss, Daniel; Wootton, James R.

    2015-03-01

    Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.

  19. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements ... -specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Publication date: 2009/3/30
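Univariate regression calibration, the simplest case of the approach above, rescales the naive slope by the reliability ratio Var(X)/Var(W). A simulation sketch assuming the measurement error variance is known (illustrative values, not the Fibrinogen Studies data):

```python
import random

random.seed(1)

# True model: outcome = 2 * X + noise, but X is observed as W = X + error,
# with error variance known (e.g. estimated from repeat measurements).
n = 5000
beta_true, var_err = 2.0, 1.0
x = [random.gauss(0, 1) for _ in range(n)]
w = [xi + random.gauss(0, var_err ** 0.5) for xi in x]
y = [beta_true * xi + random.gauss(0, 0.5) for xi in x]

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

beta_naive = slope(w, y)                 # attenuated towards zero
mw = sum(w) / n
var_w = sum((wi - mw) ** 2 for wi in w) / (n - 1)
reliability = (var_w - var_err) / var_w  # the regression calibration factor
beta_rc = beta_naive / reliability

print(beta_naive, beta_rc)
```

The multivariate version in the paper replaces this scalar reliability by a matrix and handles confounders jointly, but the attenuation-and-rescaling logic is the same.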

  20. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

    Full Text Available Since at the design stage measurement results are not yet available, the uncertainty approach cannot be used; the error approach, however, can be applied successfully by taking the nominal value of the instrument's transformation function as true. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. For a variety of correction methods it is theoretically justified that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are considered. Conditions for self-calibration and verification of the measuring instruments in place are studied.

  1. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    The compensation method for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was performed according to measurement results from a profilometer, which required long measurement times and caused low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is deduced. Then, the effect of compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and correction-turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  2. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers.

    Science.gov (United States)

    Mainsah, Boyla O; Morton, Kenneth D; Collins, Leslie M; Sellers, Eric W; Throckmorton, Chandra S

    2015-09-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35-185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (-47-0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44-416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43-433%).
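A Bayesian dictionary-based correction of this kind can be sketched as scoring each candidate word by its prior frequency combined with the per-position classifier confidences (the confidences and frequencies below are hypothetical, not the study's actual classifier outputs):

```python
import math

# Hypothetical classifier confidences P(letter | EEG) for a 3-letter
# selection, and unigram frequencies for a tiny dictionary.
confidences = [
    {"t": 0.70, "s": 0.30},
    {"h": 0.80, "n": 0.20},
    {"w": 0.55, "e": 0.45},   # the speller's top pick here is wrong
]
word_freq = {"the": 0.050, "thw": 1e-9, "she": 0.010, "snw": 1e-9}

def best_word(confidences, word_freq, floor=1e-6):
    """Score each candidate word by log prior frequency plus the summed
    log classifier confidences at each position, then take the argmax."""
    def score(word):
        return math.log(word_freq[word]) + sum(
            math.log(conf.get(ch, floor))
            for conf, ch in zip(confidences, word))
    return max((w for w in word_freq if len(w) == len(confidences)),
               key=score)

naive = "".join(max(c, key=c.get) for c in confidences)
corrected = best_word(confidences, word_freq)
print(naive, "->", corrected)  # thw -> the
```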

  3. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity using the state variables. It is shown that such real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, emphasizing its fundamental advantages and disadvantages.
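The underlying operation, solving the transducer's difference equation with respect to the input, can be shown without the neural network (which the paper uses precisely to avoid needing the dynamic parameters assumed known here). A first-order sketch:

```python
# A first-order transducer discretized by the forward Euler rule:
#     y[k] = y[k-1] + (DT / TAU) * (u[k] - y[k-1])
# Solving that equation with respect to the input u[k] gives an exact
# dynamic error correction from the measured output alone.
TAU, DT = 0.5, 0.01  # transducer time constant and sample period

def transducer(u, y_prev):
    return y_prev + (DT / TAU) * (u - y_prev)

def correct(y, y_prev):
    # the same difference equation solved for the input quantity
    return y_prev + (TAU / DT) * (y - y_prev)

u_true = [0.0] * 10 + [1.0] * 90      # step input, badly lagged by TAU
y, recovered = [0.0], []
for k in range(1, len(u_true)):
    y.append(transducer(u_true[k], y[-1]))
    recovered.append(correct(y[-1], y[-2]))

worst = max(abs(a - b) for a, b in zip(u_true[1:], recovered))
print(worst)
```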

  4. Error Correction of EMTDC Line and Cable Series Impedance Calculations Compared to Traditional Methods

    DEFF Research Database (Denmark)

    Sørensen, Stefan; Nielsen, Hans Ove

    2002-01-01

    In this paper we present a comparison of different line and cable series impedance calculation methods, in which correction of a discovered PSCAD/EMTDC v.3.0.8 calculation error in the cable series impedance results in a deviation of under 0.1%, instead of the approximately 10% deviation from other methods given by the previous method. The correction is done by adjusting the earth return path impedance for the cable model, and will thereby form the basis for a future comparison with measured data from a real full-scale earth fault experiment on a mixed line and cable network.

  5. Role of Refractive Errors in Inducing Asthenopic Symptoms Among Spectacle Corrected Ammetropes

    Directory of Open Access Journals (Sweden)

    Padma B Prabhu

    2016-04-01

    Full Text Available Refractive errors are a major cause of asthenopic symptoms in the young age group. Aim and objectives: This study tries to ascertain the prevalence of refractive errors in a cohort of subjects with spectacle-corrected ammetropia and to elucidate the relation between the type, severity and subcategories of refractive errors in such a group. Design: Descriptive cross-sectional study. Methods: This is a prospective analysis of cases with asthenopia and coexistent significant refractive errors warranting the use of spectacles. Best corrected visual acuity of 20/20 was ensured. Retinoscopy readings after complete cycloplegia were noted. The spherical equivalent was calculated from the absolute retinoscopy reading. Ammetropia not fully corrected with spectacles, history of migraine, headache not related to constant near work, symptoms of less than three months' duration, associated accommodation-convergence anomalies and latent squints were excluded. Results: The study group included thirty-five patients. The mean age was 23.48 years (SD 6.97). There were 15 males and 20 females. Twenty-seven patients had bilateral symptoms (77.14%). Thirty-six subjects (58.08%) had a spherical equivalent between 0.25D and 0.75D. The refractive errors included myopia (n=10), hypermetropia (n=26) and astigmatism (n=26). Near-work-associated headache was observed in 39 patients (62.86%); 46.15% of the cases with near-work-related headache had uncorrected astigmatism. Conclusion: Asthenopic symptoms are frequent and significant among spectacle-corrected ammetropes. Lower degrees of refractive error are more symptomatic. Hypermetropia and astigmatism constitute the major causative factors.

  6. Wind Power Prediction Based on LS-SVM Model with Error Correction

    Directory of Open Access Journals (Sweden)

    ZHANG, Y.

    2017-02-01

    Full Text Available As conventional energy sources are non-renewable, the world's major countries are investing heavily in renewable energy research. Wind power represents the development trend of future energy, but the intermittency and volatility of wind energy are the main reasons for the poor accuracy of wind power prediction. However, by analyzing the error level at different time points, it can be found that the errors at adjacent times are often approximately the same; therefore, a least squares support vector machine (LS-SVM) model with error correction is used to predict wind power in this paper. In simulations on wind power data from two wind farms, the proposed method effectively improves the prediction accuracy of wind power, and the error distribution is concentrated almost without deviation. The improved method takes into account the error correction process of the model, which improves the prediction accuracy over the traditional models (RBF, Elman, LS-SVM). Compared with the single LS-SVM prediction model, the mean absolute error of the proposed method decreased by 52 percent. This research work will be helpful for the reasonable arrangement of dispatching operation plans, the normal operation of wind farms, and the large-scale development and full utilization of renewable energy resources.
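    The persistence-based correction described above can be sketched in a few lines (a toy illustration of the general idea, not the authors' LS-SVM implementation; the biased base forecast is synthetic):

```python
import numpy as np

def correct_forecast(raw_forecast, actual):
    """Add the previous step's observed error to the current prediction,
    assuming errors at adjacent time steps are approximately equal."""
    corrected = raw_forecast.astype(float).copy()
    for t in range(1, len(raw_forecast)):
        prev_error = actual[t - 1] - raw_forecast[t - 1]
        corrected[t] = raw_forecast[t] + prev_error
    return corrected

# Toy data: a base model that is consistently biased low by 2 units.
actual = np.array([10.0, 12.0, 11.0, 13.0])
raw = actual - 2.0
corrected = correct_forecast(raw, actual)
mae_raw = np.mean(np.abs(actual - raw))
mae_corr = np.mean(np.abs(actual[1:] - corrected[1:]))
print(mae_raw, mae_corr)  # 2.0 0.0
```

    In this contrived case the bias is constant, so the previous error removes it exactly; with real wind data the correction only shrinks, rather than eliminates, the error.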

  7. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error.

    Science.gov (United States)

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J

    2017-11-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.

  8. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
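    As a minimal illustration of the SIMEX idea compared here (a toy linear-regression version, not the survival-analysis setting of the article; all data are synthetic): simulated extra measurement error is added at increasing variance, the attenuated slope is recorded, and the trend is extrapolated back to zero measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u = 20000, 2.0, 0.5
x = rng.normal(size=n)                        # true covariate
w = x + rng.normal(scale=sigma_u, size=n)     # error-prone measurement
y = beta * x + rng.normal(scale=0.1, size=n)

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

# SIMEX: add extra error with variance lam * sigma_u^2, record the
# attenuated slope, then extrapolate back to lam = -1 (no error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
                   for _ in range(20)])
          for lam in lambdas]

coef = np.polyfit(lambdas, slopes, 2)          # quadratic extrapolant
beta_simex = np.polyval(coef, -1.0)
naive = slope(w, y)
print(naive, beta_simex)  # naive is attenuated toward 0; SIMEX is closer to 2
```

    The naive slope is biased toward zero by the measurement error; the SIMEX extrapolation recovers most, though not all, of the attenuation, consistent with the article's point that the preferred method depends on the error magnitude.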

  9. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Applying the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. ...

  10. Dynamically correcting two-qubit gates against any systematic logical error

    Science.gov (United States)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  11. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor networks (WSN) explores energy efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where a low density parity check (LDPC) code is used as an error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
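    Full LDPC encoding and belief-propagation decoding are too long to show here; as a minimal flavor of the parity-check correction that LDPC codes generalize, the following toy (7,4) Hamming code corrects any single bit flip (a stand-in example, not the paper's code construction):

```python
import numpy as np

# Systematic (7,4) Hamming code: generator G and parity-check matrix H.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(msg):
    return msg @ G % 2

def correct(word):
    """Locate a single flipped bit: its position is the column of H
    matching the syndrome, then flip it back."""
    syndrome = H @ word % 2
    if syndrome.any():
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                word = word.copy()
                word[i] ^= 1
                break
    return word

msg = np.array([1, 0, 1, 1])
code = encode(msg)
noisy = code.copy(); noisy[2] ^= 1      # single bit error on the channel
fixed = correct(noisy)
print(np.array_equal(fixed, code))  # True
```

    LDPC codes work from the same syndrome idea but with large sparse H matrices and iterative soft-decision decoding, which is what makes their throughput and near-ideal performance possible.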

  12. Corrected-loss estimation for quantile regression with covariate measurement errors.

    Science.gov (United States)

    Wang, Huixia Judy; Stefanski, Leonard A; Zhu, Zhongyi

    2012-06-01

    We study estimation in quantile regression when covariates are measured with errors. Existing methods require stringent assumptions, such as spherically symmetric joint distribution of the regression and measurement error variables, or linearity of all quantile functions, which restrict model flexibility and complicate computation. In this paper, we develop a new estimation approach based on corrected scores to account for a class of covariate measurement errors in quantile regression. The proposed method is simple to implement. Its validity requires only linearity of the particular quantile function of interest, and it requires no parametric assumptions on the regression error distributions. Finite-sample results demonstrate that the proposed estimators are more efficient than the existing methods in various models considered.

  13. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing the development of very few methods capable of catering to the problem of object motion during scanning, and all the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of the motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such motion correction method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of motion correction explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the motion correction method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
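    The core of the motion correction step, mapping each scanner-frame point into the world frame using the pose reported by the POS system at that point's timestamp, can be sketched as follows (the `poses` interface and the yaw-plus-translation motion are illustrative assumptions, not the authors' sensor model):

```python
import numpy as np

def yaw_matrix(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def motion_correct(points, poses):
    """Map each scanner-frame point into the world frame using the
    object's pose (rotation R, translation t) at that point's timestamp."""
    return np.array([R @ p + t for p, (R, t) in zip(points, poses)])

# A moving object (translating and yawing) is scanned three times: the same
# physical point appears at different scanner coordinates; pose correction
# recovers a single consistent world-frame point.
true_point = np.array([1.0, 0.0, 0.0])
poses = [(yaw_matrix(th), np.array([dx, 0.0, 0.0]))
         for th, dx in zip((0.0, 0.2, 0.4), (0.0, 0.5, 1.0))]
scans = [R.T @ (true_point - t) for R, t in poses]   # what the scanner records
recovered = motion_correct(scans, poses)
print(recovered)  # all three rows equal the true point
```

    The error budget analyzed in the paper enters through the pose terms: noise in R and t propagates directly into the corrected coordinates.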

  14. Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome.

    Science.gov (United States)

    Goodwin, Sara; Gurtowski, James; Ethe-Sayers, Scott; Deshpande, Panchajanya; Schatz, Michael C; McCombie, W Richard

    2015-11-01

    Monitoring the progress of DNA molecules through a membrane pore has been postulated as a method for sequencing DNA for several decades. Recently, a nanopore-based sequencing instrument, the Oxford Nanopore MinION, has become available, and we used this for sequencing the Saccharomyces cerevisiae genome. To make use of these data, we developed a novel open-source hybrid error correction algorithm, Nanocorr, specifically for Oxford Nanopore reads, because existing packages were incapable of assembling the long read lengths (5-50 kbp) at such high error rates (between ∼5% and 40% error). With this new method, we were able to perform a hybrid error correction of the nanopore reads using complementary MiSeq data and produce a de novo assembly that is highly contiguous and accurate: the contig N50 length is more than ten times greater than that of an Illumina-only assembly (678 kb versus 59.9 kb), and the assembly has >99.88% consensus identity when compared to the reference. Furthermore, the assembly with the long nanopore reads presents a much more complete representation of the features of the genome and correctly assembles gene cassettes, rRNAs, transposable elements, and other genomic features that were almost entirely absent in the Illumina-only assembly. © 2015 Goodwin et al.; Published by Cold Spring Harbor Laboratory Press.

  15. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead in the pre-planning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and derives a correction factor for Bio-AM models built with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with the accurate dimensions of the patient anatomy. Such true-dimensional Bio-AM models increase the safety and accuracy of pre-planning for oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM AM machine is 1.003 and the dimensional error is limited to 0.3 %.
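    The correction factor and dimensional error defined in the abstract amount to simple ratios; a sketch with illustrative numbers chosen to reproduce the reported values (the dimensions below are hypothetical, not the paper's measurements):

```python
# Correction-factor idea: scale the STL model by the ratio of intended (STL)
# to printed (Bio-AM) dimensions so the next print matches the true anatomy.
stl_dim = 100.00        # designed dimension in the STL model (mm), illustrative
printed_dim = 99.70     # measured dimension on the printed Bio-AM model (mm)

correction_factor = stl_dim / printed_dim
dimensional_error_pct = abs(stl_dim - printed_dim) / stl_dim * 100

print(round(correction_factor, 3))      # 1.003, matching the reported factor
print(round(dimensional_error_pct, 2))  # 0.3 (%)
```

    Pre-scaling the STL geometry by this factor compensates for the machine's systematic shrinkage before fabrication.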

  16. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of these parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is important to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC is described. It resulted in a decrease of the chromatic coupli...

  17. Evaluation of Setup Error Correction for Patients Using On Board Imager in Image Guided Radiation Therapy

    International Nuclear Information System (INIS)

    Kang, Soo Man

    2008-01-01

    To reduce side effects in image guided radiation therapy (IGRT), to improve the quality of life of patients, and to meet accurate SETUP conditions for patients, various SETUP correction conditions were compared and evaluated using the on board imager (OBI) during SETUP. Thirty cases each of the head, neck, chest, abdomen, and pelvis among 150 IGRT patients were corrected after confirmation using OBI every 2-3 days. The difference between the SETUP through the skin-marker and the anatomic SETUP through the OBI was also evaluated. General SETUP errors (transverse, coronal, sagittal) through the OBI at the original SETUP position were: head and neck 1.3 mm, brain 2 mm, chest 3 mm, abdomen 3.7 mm, pelvis 4 mm. Patients with errors of more than 3 mm were checked in the treatment room for problems with the correction devices and patient motion; in female patients treated for head and neck or brain tumors, the error was traced to the position of the hair. Therefore, after another SETUP in each case with an error over 3 mm, the treatment was carried out. Mean error values for each region estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results show that correcting the SETUP for each treatment through OBI is extremely difficult, given the importance of SETUP in radiation treatment. However, by establishing an average standard for patients from these results, better patient satisfaction and treatment outcomes could be obtained.

  18. Evaluation of Setup Error Correction for Patients Using On Board Imager in Image Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Soo Man [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Busan (Korea, Republic of)

    2008-09-15

    To reduce side effects in image guided radiation therapy (IGRT), to improve the quality of life of patients, and to meet accurate SETUP conditions for patients, various SETUP correction conditions were compared and evaluated using the on board imager (OBI) during SETUP. Thirty cases each of the head, neck, chest, abdomen, and pelvis among 150 IGRT patients were corrected after confirmation using OBI every 2-3 days. The difference between the SETUP through the skin-marker and the anatomic SETUP through the OBI was also evaluated. General SETUP errors (transverse, coronal, sagittal) through the OBI at the original SETUP position were: head and neck 1.3 mm, brain 2 mm, chest 3 mm, abdomen 3.7 mm, pelvis 4 mm. Patients with errors of more than 3 mm were checked in the treatment room for problems with the correction devices and patient motion; in female patients treated for head and neck or brain tumors, the error was traced to the position of the hair. Therefore, after another SETUP in each case with an error over 3 mm, the treatment was carried out. Mean error values for each region estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results show that correcting the SETUP for each treatment through OBI is extremely difficult, given the importance of SETUP in radiation treatment. However, by establishing an average standard for patients from these results, better patient satisfaction and treatment outcomes could be obtained.

  19. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
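    A minimal linear-model illustration of the regression calibration idea underlying the approaches above (the article's versions are adapted to Cox partial likelihoods; here the reliability ratio is taken as known, whereas in practice it is estimated, e.g. from replicate measurements):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100000
m = rng.normal(size=n)                        # true mediator
w = m + rng.normal(scale=0.8, size=n)         # error-prone measurement
y = 1.0 * m + rng.normal(scale=0.5, size=n)   # outcome depends on true m

naive = np.cov(w, y)[0, 1] / np.var(w)        # attenuated slope

# Regression calibration: replace w with E[m | w] = lam * w (mean-zero case)
# and refit. lam is the reliability ratio, assumed known here.
lam = np.var(m) / (np.var(m) + 0.8**2)
m_hat = lam * w
rc = np.cov(m_hat, y)[0, 1] / np.var(m_hat)
print(naive, rc)   # naive ~0.61 is attenuated; rc ~1.0 recovers the slope
```

    In the failure-time setting of the article the same substitution is made inside the induced hazard, which is why the mean-variance and follow-up time calibration variants are needed.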

  20. In situ correction of field errors induced by temperature gradient in cryogenic undulators

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2009-12-01

    Full Text Available A new technique of undulator field correction for cryogenic permanent magnet undulators (CPMUs) is proposed to correct the phase error induced by temperature gradient. This technique takes advantage of two important instruments: one is the in-vacuum self-aligned field analyzer with a laser instrumentation system to precisely measure the distribution of the magnetic field generated by the permanent magnet arrays placed in vacuum, and the other is the differential adjuster to correct the local variation of the magnet gap. The details of the two instruments are described together with how to analyze the field measurement data and deduce the gap variation along the undulator axis. The correction technique was applied to a CPMU with a length of 1.7 m and a magnetic period of 14 mm. It was found that the phase error induced during the cooling process was attributable to local gap variations of around 30 μm, which were then corrected by the differential adjuster.

  1. GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-12-14

    In hardware-aware high performance computing, block-asynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they use very different approaches for this purpose, they share the basic idea of compensating for the convergence behaviour of an inferior numerical algorithm by a more efficient usage of the provided computing power. In this paper, we analyze the potential of combining both techniques. Therefore, we implement a mixed precision iterative refinement algorithm using a block-asynchronous iteration as an error correction solver, and compare its performance with a pure implementation of a block-asynchronous iteration and an iterative refinement method using double precision for the error correction solver. For matrices from the University of Florida Matrix Collection, we report the convergence behaviour and provide the total solver runtime using different GPU architectures.
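    The combination analyzed here builds on plain mixed precision iterative refinement, which can be sketched with a float32 inner solve standing in for the GPU error-correction solver (a serial numpy sketch of the general scheme, not the paper's block-asynchronous implementation):

```python
import numpy as np

def mixed_precision_refine(A, b, iters=5):
    """Iterative refinement: cheap low-precision (float32) inner solves,
    with residuals and solution updates accumulated in float64."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in double
        d = np.linalg.solve(A32, r.astype(np.float32))  # correction in single
        x += d.astype(np.float64)                       # error-correction step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)   # well-conditioned test matrix
x_true = rng.normal(size=50)
b = A @ x_true
x = mixed_precision_refine(A, b)
print(np.max(np.abs(x - x_true)))   # near double-precision accuracy
```

    Each refinement step corrects the error left by the single-precision solve; for well-conditioned systems a few steps recover double-precision accuracy while most of the work runs in the fast, low-precision format.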

  2. The Export Supply Model of Bangladesh: An Application of Cointegration and Vector Error Correction Approaches

    Directory of Open Access Journals (Sweden)

    Mahmudul Mannan Toy

    2011-01-01

    Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and Vector Error Correction are applied to estimate the export supply model. The econometric analysis is done using time series data on the variables of interest, collected from various secondary sources. The study empirically tests the hypotheses, the long run relationship and the causality between the variables of the model. The cointegration analysis shows that all the variables of the study are co-integrated at their first differences, meaning that there exists a long run relationship among the variables. The VECM estimation shows the dynamics of variables in the export supply function and the short run and long run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short run disequilibrium will be turned into equilibrium in the long run.
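    The Engle-Granger two-step procedure behind such an error correction model can be sketched on synthetic data (illustrative only, not the study's Bangladesh export data): first estimate the long-run cointegrating regression, then regress the differenced series on the lagged error correction term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two cointegrated series: y tracks x with stationary deviations.
x = np.cumsum(rng.normal(size=n))                    # I(1) driver
y = 2.0 + 1.5 * x + rng.normal(scale=0.3, size=n)    # long-run relation

# Step 1: long-run (cointegrating) regression y = a + b x.
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
ect = y - (a + b * x)                                # error correction term

# Step 2: short-run dynamics with the lagged ECT.
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
coefs = np.linalg.lstsq(Z, dy, rcond=None)[0]
print(coefs[2])   # ECT coefficient: negative, pulling y back to equilibrium
```

    The negative coefficient on the lagged error correction term is exactly the adjustment-toward-equilibrium property the abstract reports for the export supply function.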

  3. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    Science.gov (United States)

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
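    As a minimal concrete instance of the quantum error correction invoked here, the three-qubit bit-flip code below detects and undoes a single X error via parity-check syndromes (a textbook example simulated with state vectors, unrelated to the paper's optimal metrology codes):

```python
import numpy as np

# Three-qubit bit-flip code: a|0> + b|1>  ->  a|000> + b|111>.
a, b = 0.6, 0.8
state = np.zeros(8); state[0b000] = a; state[0b111] = b   # encoded state

X = np.array([[0, 1], [1, 0]]); I2 = np.eye(2)
def on_qubit(op, q):                     # op acting on qubit q of 3 (q0 = MSB)
    ops = [I2] * 3; ops[q] = op
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

noisy = on_qubit(X, 1) @ state           # bit flip on the middle qubit

# Syndrome: parities <Z0 Z1> and <Z1 Z2> identify which qubit flipped.
Z = np.diag([1, -1])
s01 = noisy @ on_qubit(Z, 0) @ on_qubit(Z, 1) @ noisy
s12 = noisy @ on_qubit(Z, 1) @ on_qubit(Z, 2) @ noisy
flipped = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(round(s01), round(s12))]
recovered = on_qubit(X, flipped) @ noisy if flipped is not None else noisy
print(np.allclose(recovered, state))  # True
```

    The measured parities reveal the error location without collapsing the encoded amplitudes a and b, which is the property the paper exploits to suppress noise without obscuring the metrological signal.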

  4. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  5. The Differential Effect of Two Types of Direct Written Corrective Feedback on Noticing and Uptake: Reformulation vs. Error Correction

    Directory of Open Access Journals (Sweden)

    Rosa M. Manchón

    2010-06-01

    Full Text Available Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition / comparison-noticing / revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.

  6. Modeling Dynamics of Wikipedia: An Empirical Analysis Using a Vector Error Correction Model

    Directory of Open Access Journals (Sweden)

    Liu Feng-Jun

    2017-01-01

    Full Text Available In this paper, we constructed a system dynamic model of Wikipedia based on the co-evolution theory, and investigated the interrelationships among topic popularity, group size, collaborative conflict, coordination mechanism, and information quality by using the vector error correction model (VECM. This study provides a useful framework for analyzing the dynamics of Wikipedia and presents a formal exposition of the VECM methodology in the information system research.

  7. Exports and economic growth in Indonesia's fishery sub-sector: Cointegration and error-correction models

    OpenAIRE

    Sjarif, Indra Nurcahyo; 小谷, 浩示; Lin, Ching-Yang

    2011-01-01

    This paper investigates the causal relationship between fishery exports and economic growth in Indonesia by utilizing cointegration and error-correction models. Using annual data from 1969 to 2005, we find evidence that there exist both a long-run relationship and bi-directional causality between exports and economic growth in Indonesia's fishery sub-sector. To the best of our knowledge, this is the first research that examines this issue focusing on a natural resource based indu...

  8. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  9. Simple Low-Density Parity Check (LDPC) Staircase Forward Error Correction (FEC) Scheme for FECFRAME

    OpenAIRE

    Roca, Vincent; Cunche, Mathieu; Lacan, Jérôme

    2012-01-01

    Internet Engineering Task Force (IETF) Request for Comments 6816; This document describes a fully specified simple Forward Error Correction (FEC) scheme for Low-Density Parity Check (LDPC) Staircase codes that can be used to protect media streams along the lines defined by FECFRAME. These codes have many interesting properties: they are systematic codes, they perform close to ideal codes in many use-cases, and they also feature very high encoding and decoding throughputs. LDPC-Staircase codes...

  10. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to its broadcast nature. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if the error occurs on all the links of the network, our scheme can still correct errors successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors even when their fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.

  11. Using Effective Strategies for Errors Correction in EFL Classes: a Case Study of Secondary Public Schools in Benin

    Science.gov (United States)

    Teba, Sourou Corneille

    2017-01-01

    The aim of this paper is, firstly, to help teachers correct students' errors thoroughly with effective strategies. Secondly, it attempts to find out whether teachers in Beninese secondary schools are themselves interested in error correction. Finally, I would like to point out the effective strategies that an EFL teacher can use for errors…

  12. Improving Oral Reading Fluency through Response Opportunities: A Comparison of Phrase Drill Error Correction with Repeated Readings

    Science.gov (United States)

    Begeny, John C.; Daly, Edward J., III; Valleley, Rachel J.

    2006-01-01

    The purpose of this study was to compare two oral reading fluency treatments (repeated readings and phrase drill error correction) which differ in the way they prompt student responding. Repeated readings (RR) and phrase drill (PD) error correction were alternated with a baseline and a reward condition within an alternating treatments design with…

  13. Effects of Systematic Error Correction and Repeated Readings on the Reading Accuracy and Proficiency of Second Graders with Disabilities

    Science.gov (United States)

    Nelson, Janet S.; Alber, Sheila R.; Gordy, Alicia

    2004-01-01

    This investigation used a multiple-baseline design to examine the effects of systematic error correction and of systematic error correction with repeated readings on the reading accuracy and fluency of four second-graders receiving special education services in a resource room. Three of the students were identified as having learning disabilities,…

  14. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Science.gov (United States)

    Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John

    2015-06-01

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].

  15. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  16. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Herzog, Hans

    2013-01-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. 
Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled

  17. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  18. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Science.gov (United States)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. 
Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  19. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Energy Technology Data Exchange (ETDEWEB)

    Rota Kops, Elena, E-mail: e.rota.kops@fz-juelich.de [Forschungszentrum Juelich, INM4, Juelich (Germany); Herzog, Hans [Forschungszentrum Juelich, INM4, Juelich (Germany)

    2013-02-21

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3

  20. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo... This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel... to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather...
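    A concrete way to see the elliptical-cross-section effect this abstract reports: with the stated axes (major axis 10.2 mm, 8.6% larger than the minor), and assuming the flow area is computed from a single minor-axis diameter under a circular assumption (an illustrative assumption for this sketch, not necessarily the scanner's exact procedure), the cross-sectional area, and hence the volume flow, is underestimated by roughly 8%:

```python
import math

# Fistula cross-section from the abstract: major axis 10.2 mm,
# which is 8.6% larger than the minor axis.
major = 10.2
minor = major / 1.086

area_ellipse = math.pi * (major / 2) * (minor / 2)
# Circular-cross-section assumption using the minor-axis diameter only.
area_circle = math.pi * (minor / 2) ** 2

underestimate = 1 - area_circle / area_ellipse   # fraction of flow missed
print(f"flow underestimated by {underestimate:.1%}")
```

    The ratio reduces to `1 - minor/major`, so the bias depends only on the ellipticity, not on the absolute vessel size.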

  1. Error correcting code with chip kill capability and power saving enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
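    The syndrome logic in this patent abstract (compute syndromes; all-zero means no error; a non-zero syndrome localizes the error) is the standard mechanism of linear block codes. As a much-simplified stand-in for the patented symbol-level scheme, here is the textbook Hamming(7,4) single-bit corrector, in which the syndrome directly encodes the error position:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is j written in
# binary, so a single-bit error at position j yields syndrome j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    return H @ word % 2

def correct_single_error(word):
    s = syndrome(word)
    pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])  # 0 means "no error"
    fixed = word.copy()
    if pos:
        fixed[pos - 1] ^= 1                           # flip the faulty bit
    return fixed

codeword = np.array([1, 0, 1, 1, 0, 1, 0])            # valid: syndrome is zero
received = codeword.copy()
received[2] ^= 1                                      # bit 3 corrupted in transit
assert (syndrome(received) != 0).any()
print(correct_single_error(received))                 # recovers the codeword
```

    The patented scheme works over symbols rather than bits, which is what makes whole-chip ("chip kill") failures correctable, but the detect/locate/flip structure is the same.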

  2. Intensity error correction for 3D shape measurement based on phase-shifting method

    Science.gov (United States)

    Chung, Tien-Tung; Shih, Meng-Hung

    2011-12-01

    3D shape measurement based on structured light systems has been a field of ongoing research for the past two decades. For 3D shape measurement using a commercial projector and digital camera, the nonlinear gamma of the projector and the nonlinear response of the camera cause the captured fringes to have both intensity and phase errors, resulting in large errors in the measured shape. This paper presents a simple intensity error correction process for the phase-shifting method. First, a white flat board is projected with sinusoidal fringe patterns, and the intensity data are extracted from the captured image. The intensity data are fitted to an ideal sine curve, and the difference between the captured curve and the fitted sine curve is used to establish an intensity look-up table (LUT). The LUT is then used to calibrate the intensities of measured object images for establishing 3D object shapes. Research results show that the measurement quality of the 3D shapes is significantly improved.
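    The LUT idea in this abstract can be sketched in a few lines of numpy. The gamma value (2.2) and the simulated flat-board fringe are assumptions for the demo; the paper fits the captured calibration curve to an ideal sine rather than knowing the distortion exactly:

```python
import numpy as np

# Simulated flat-board calibration: projector gamma distorts an ideal sine fringe.
phase = np.linspace(0, 4 * np.pi, 2000)
ideal = 0.5 + 0.5 * np.sin(phase)            # ideal fringe intensity in [0, 1]
captured = ideal ** 2.2                      # assumed nonlinear gamma response

# Build the look-up table: map each captured intensity back to the ideal
# intensity (sorted so np.interp can invert the monotone response curve).
order = np.argsort(captured)
lut_in, lut_out = captured[order], ideal[order]

# Correct a "measured" fringe image by applying the LUT.
true_fringe = 0.5 + 0.5 * np.sin(phase + 1.0)
measured = true_fringe ** 2.2
corrected = np.interp(measured, lut_in, lut_out)

residual = np.abs(corrected - true_fringe).max()
print(f"max residual after LUT correction: {residual:.2e}")
```

    Because the gamma distortion acts pointwise on intensity, one calibration LUT corrects every subsequent image before phase computation.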

  3. Robust chemical preservation of digital information on DNA in silica with error-correcting codes.

    Science.gov (United States)

    Grass, Robert N; Heckel, Reinhard; Puddu, Michela; Paunescu, Daniela; Stark, Wendelin J

    2015-02-16

    Information, such as text printed on paper or images projected onto microfilm, can survive for over 500 years. However, the storage of digital information for time frames exceeding 50 years is challenging. Here we show that digital information can be stored on DNA and recovered without errors for considerably longer time frames. To allow for the perfect recovery of the information, we encapsulate the DNA in an inorganic matrix, and employ error-correcting codes to correct storage-related errors. Specifically, we translated 83 kB of information to 4991 DNA segments, each 158 nucleotides long, which were encapsulated in silica. Accelerated aging experiments were performed to measure DNA decay kinetics, which show that data can be archived on DNA for millennia under a wide range of conditions. The original information could be recovered error free, even after treating the DNA in silica at 70 °C for one week. This is thermally equivalent to storing information on DNA in central Europe for 2000 years. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Simulation Model for Correction and Modeling of Probe Head Errors in Five-Axis Coordinate Systems

    Directory of Open Access Journals (Sweden)

    Adam Gąska

    2016-05-01

    Full Text Available Simulative methods are nowadays frequently used in metrology for the simulation of measurement uncertainty and the prediction of errors that may occur during measurements. In coordinate metrology, such methods are primarily used with typical three-axis Coordinate Measuring Machines (CMMs) and, lately, also with mobile measuring systems. However, no similar simulative models have been developed for five-axis systems in spite of their growing popularity in recent years. This paper presents a numerical model of probe head errors for probe heads that are used in five-axis coordinate systems. The model is based on measurements of material standards (a standard ring) and the use of the Monte Carlo method combined with selected interpolation methods. The developed model may be used in conjunction with one of the known models of CMM kinematic errors to form a virtual model of a five-axis coordinate system. In addition, the developed methodology allows for the correction of identified probe head errors, thus improving measurement accuracy. Subsequent verification tests prove the correct functioning of the presented model.

  5. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates.
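    The secret-key-rate model being critiqued is commonly written in the CV-QKD literature in a form like the following (a generic textbook form, not necessarily the exact expression analyzed in the paper), where the non-zero word/frame error rate discounts the key:

```latex
K = (1 - \mathrm{FER})\,\bigl(\beta\, I_{AB} - \chi_{BE}\bigr)
```

    Here \(\beta\) is the reconciliation efficiency of the error correction code, \(I_{AB}\) the mutual information between the honest parties, and \(\chi_{BE}\) the Holevo bound on the eavesdropper's information. The paper's point is that a fixed-rate code run at non-zero FER can make the inferred \(\beta\) exceed 1, breaking this model.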

  6. Evidence that disrupted orienting to evaluative social feedback undermines error correction in rejection sensitive women.

    Science.gov (United States)

    Mangels, Jennifer A; Hoxha, Olta; Lane, Sean P; Jarvis, Shoshana N; Downey, Geraldine

    2017-08-01

    For individuals high in Rejection Sensitivity (RS), a learned orientation to anxiously expect rejection from valued others, negative feedback from social sources may disrupt engagement with learning opportunities, impeding recovery from mistakes. One context in which this disruption may be particularly pronounced is among women high in RS following evaluation by a male in authority. To investigate this prediction, 40 college students (50% female) answered general knowledge questions followed by immediate performance feedback and the correct answer while we recorded event-related potentials. Error correction was measured with a subsequent surprise retest. Performance feedback was either nonsocial (asterisk/tone) or social (male professor's face/voice). Attention and learning were indexed respectively by the anterior frontal P3a (attentional orienting) and a set of negative-going waveforms over left inferior-posterior regions associated with successful encoding. For women, but not men, higher RS scores predicted poorer error correction in the social condition. A path analysis suggested that, for women, high RS disrupted attentional orienting to the social-evaluative performance feedback, which affected subsequent memory for the correct answer by reducing engagement with learning opportunities. These results suggest a mechanism for how social feedback may impede learning among women who are high in RS.

  7. Dealing with Common Mistakes Using an Error Corpus for EFL Students to Increase Their Autonomy in Error Recognition and Correction in Every Day Class Tasks

    Science.gov (United States)

    Terreros Lazo, Oscar

    2012-01-01

    In this article, you will find how autonomous EFL students in Lima, Peru, can become in recognizing and correcting their own errors, guided by the teacher on what to look for and how to do it, in a process I call "Error Hunting", carried out during regular class activities without interfering with those activities.

  8. Practical retrace error correction in non-null aspheric testing: A comparison

    Science.gov (United States)

    Shi, Tu; Liu, Dong; Zhou, Yuhao; Yan, Tianliang; Yang, Yongying; Zhang, Lei; Bai, Jian; Shen, Yibing; Miao, Liang; Huang, Wei

    2017-01-01

    In non-null aspheric testing, retrace error is the primary error source, making it hard to recover the desired figure error from the aliased interferograms. Careful retrace error correction is therefore essential to the testing results. The performance of three methods commonly employed in practice, i.e., the GDI (geometrical deviation based on interferometry) method, the TRW (theoretical reference wavefront) method and the ROR (reverse optimization reconstruction) method, is compared with numerical simulations and experiments. The dynamic range of each method is determined and suitable applications are recommended. It is proposed that, with an aspherical reference wavefront, the dynamic range can be further enlarged. Results show that the dynamic range of the GDI method is small, while that of the TRW method can be enlarged with an aspherical reference wavefront, and the ROR method achieves the largest dynamic range with the highest accuracy. It is recommended that the GDI and TRW methods be applied to apertures with small figure error and small asphericity, and the ROR method to commercial and research applications calling for high accuracy and large dynamic range.

  9. Micro-scanning error correction technique for an optical micro-scanning thermal microscope imaging system

    Science.gov (United States)

    Gao, Mei-Jing; Tan, Ai-Ling; Yang, Ming; Xu, Jie; Zu, Zhen-Long; Wang, Jing-Yuan

    2018-01-01

    With optical micro-scanning technology, the spatial resolution of a thermal microscope imaging system can be increased without reducing the size of the detector unit or increasing the detector dimensions. Due to optical micro-scanning error, the four low-resolution images collected by the micro-scanning thermal microscope imaging system are not standard down-sampled images. The reconstructed image quality is degraded by direct interpolation of images containing this error, which limits the performance of the system; techniques to reduce the micro-scanning error therefore need to be studied. Based on micro-scanning technology combined with the new edge-directed interpolation (NEDI) algorithm, an error correction technique for the micro-scanning instrument is proposed. Simulations and experiments show that the proposed technique can reduce the optical micro-scanning error, improve the imaging quality and improve the system's spatial resolution. It can be applied to other electro-optical imaging systems to improve their resolution.
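    The baseline that micro-scanning error corrupts is easy to state: with ideal half-pixel shifts, the four low-resolution frames are exact 2x down-samplings of the scene and interlace back losslessly. The numpy sketch below shows that error-free baseline (the 8x8 "scene" is a stand-in); the NEDI-based correction in the paper is about approximating this when the shifts are not ideal:

```python
import numpy as np

# Stand-in high-resolution scene.
hi = np.arange(64, dtype=float).reshape(8, 8)

# Ideal micro-scanning: four frames sampled at the four (dy, dx) sub-pixel
# offsets, each an exact 2x down-sampling of the scene.
frames = {(dy, dx): hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

# Reconstruction: interlace the four low-resolution frames onto the fine grid.
recon = np.empty_like(hi)
for (dy, dx), f in frames.items():
    recon[dy::2, dx::2] = f

print(np.array_equal(recon, hi))   # with error-free shifts, interlacing is exact
```

    With real micro-scanning error the offsets deviate from exact half-pixels, the four frames are no longer clean down-samplings, and naive interlacing produces the degraded reconstructions the paper sets out to correct.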

  10. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from training data that generalizes to predict independent test data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists. It has been common practice to apply many machine learning methods and report the one that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrate the probabilistic framework of the problem and explore its statistical and asymptotic properties. We propose a new bias correction method based on learning curve fitting by inverse power law (IPL) and compare it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets and two large breast cancer datasets. The results show that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier's accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
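    The inverse-power-law learning-curve fit at the heart of the proposed correction can be sketched with numpy alone. Here the Bayes-error offset is assumed to be zero so the fit reduces to a log-log linear regression (the paper's actual fitting procedure may differ), and the error rates are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cross-validation error rates at several training-set sizes,
# following an inverse power law err(n) = a * n**(-b).
sizes = np.array([30, 40, 50, 60, 80, 100])
a_true, b_true = 2.0, 0.5
errors = a_true * sizes ** (-b_true) * np.exp(rng.normal(0, 0.02, sizes.size))

# Fit log err = log a - b log n by least squares.
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
a_hat, b_hat = np.exp(intercept), -slope

# Extrapolate the expected error rate at a larger sample size.
print(f"a={a_hat:.2f}, b={b_hat:.2f}, "
      f"predicted err(n=200)={a_hat * 200 ** (-b_hat):.3f}")
```

    The extrapolation step is the "practical feature" the abstract mentions: the fitted curve predicts how much the error rate would drop if more samples were recruited.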

  11. Single-pixel interior filling function approach for detecting and correcting errors in particle tracking.

    Science.gov (United States)

    Burov, Stanislav; Figliozzi, Patrick; Lin, Binhua; Rice, Stuart A; Scherer, Norbert F; Dinner, Aaron R

    2017-01-10

    We present a general method for detecting and correcting biases in the outputs of particle-tracking experiments. Our approach is based on the histogram of estimated positions within pixels, which we term the single-pixel interior filling function (SPIFF). We use the deviation of the SPIFF from a uniform distribution to test the veracity of tracking analyses from different algorithms. Unbiased SPIFFs correspond to uniform pixel filling, whereas biased ones exhibit pixel locking, in which the estimated particle positions concentrate toward the centers of pixels. Although pixel locking is a well-known phenomenon, we go beyond existing methods to show how the SPIFF can be used to correct errors. The key is that the SPIFF aggregates statistical information from many single-particle images and localizations that are gathered over time or across an ensemble, and this information augments the single-particle data. We explicitly consider two cases that give rise to significant errors in estimated particle locations: undersampling the point spread function due to small emitter size and intensity overlap of proximal objects. In these situations, we show how errors in positions can be corrected essentially completely with little added computational cost. Additional situations and applications to experimental data are explored in SI Appendix. In the presence of experimental-like shot noise, the precision of the SPIFF-based correction achieves (and can even exceed) the unbiased Cramér-Rao lower bound. We expect the SPIFF approach to be useful in a wide range of localization applications, including single-molecule imaging and particle tracking, in fields ranging from biology to materials science to astronomy.
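    The SPIFF diagnostic itself is easy to reproduce: histogram the sub-pixel (fractional) parts of all estimated positions and measure the departure from uniformity. The sketch below is an illustrative toy, not the authors' code; the "biased tracker" that compresses estimates toward pixel centers is an invented model of pixel locking.

    ```python
    import numpy as np

    def spiff(positions, bins=20):
        """Single-pixel interior filling function: normalized histogram of
        the sub-pixel (fractional) parts of estimated particle positions."""
        frac = np.mod(positions, 1.0)
        hist, edges = np.histogram(frac, bins=bins, range=(0.0, 1.0), density=True)
        return hist, edges

    def pixel_locking_score(hist):
        """RMS deviation of the SPIFF from the uniform density (1.0):
        near zero for unbiased localizations, positive under pixel locking."""
        return float(np.sqrt(np.mean((hist - 1.0) ** 2)))

    rng = np.random.default_rng(0)
    true_pos = rng.uniform(0.0, 100.0, 50_000)
    # Toy biased tracker: sub-pixel estimates compressed toward pixel centers
    frac = np.mod(true_pos, 1.0)
    biased_pos = np.floor(true_pos) + 0.5 + 0.6 * (frac - 0.5)
    score_unbiased = pixel_locking_score(spiff(true_pos)[0])
    score_biased = pixel_locking_score(spiff(biased_pos)[0])
    ```

    A correction step would then remap the estimated sub-pixel coordinate through the inverse of the cumulative SPIFF, restoring uniform pixel filling.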

  12. Goldmann tonometry tear film error and partial correction with a shaped applanation surface.

    Science.gov (United States)

    McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M

    2018-01-01

    The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both an artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than the GAT prism (4.57±0.18 mmHg, p error was independent of applanation mire thickness (R² = 0.09, p = 0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p = 0.002). Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.

  13. Correction of clock errors in seismic data using noise cross-correlations

    Science.gov (United States)

    Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline

    2017-04-01

    Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
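    The core measurement in this clock-correction approach is the lag that best aligns a daily CCF with the reference CCF. A minimal sketch follows; the Gaussian-windowed cosine standing in for a noise-correlation wavelet is synthetic, and a 7 s clock error is injected by shifting it.

    ```python
    import numpy as np

    def measure_shift(daily_ccf, reference_ccf, dt=1.0):
        """Estimate the clock error as the lag (in seconds) maximizing the
        cross-correlation between a daily CCF and the reference CCF."""
        n = len(reference_ccf)
        xc = np.correlate(daily_ccf - daily_ccf.mean(),
                          reference_ccf - reference_ccf.mean(), mode="full")
        lag = int(np.argmax(xc)) - (n - 1)
        return lag * dt

    # Synthetic reference CCF: a hypothetical noise-correlation wavelet
    t = np.arange(-200, 201, dtype=float)
    ref = np.exp(-(t / 30.0) ** 2) * np.cos(2 * np.pi * t / 25.0)
    daily = np.roll(ref, 7)            # daily CCF delayed by a 7 s clock error
    shift = measure_shift(daily, ref)  # recovers the injected clock error
    ```

    In the RHUM-RUM application the same measurement is repeated for many station pairs and all nine component pairs, and the shifts are averaged to beat down the scatter.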

  14. Correction for dynamic bias error in transmission measurements of void fraction

    International Nuclear Information System (INIS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-01-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge may be used in place of the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
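    The origin of the bias is Jensen's inequality: the detector averages the transmission e^(−y), not the attenuation y itself, so −ln⟨e^(−y)⟩ underestimates the mean attenuation. A first-order variance correction in the spirit of this abstract (not the authors' exact derivation) can be sketched as follows, with the Gaussian fluctuation model and its parameters being illustrative.

    ```python
    import math
    import random

    def corrected_attenuation(mean_T, var_y):
        """First-order dynamic-bias correction for a transmission measurement.

        y(t) is the instantaneous attenuation along the beam. Expanding to
        second order, <e^-y> ~ e^-ybar * (1 + Var(y)/2), hence
            ybar ~ -ln(<T>) + ln(1 + Var(y)/2).
        Var(y) is assumed known from time-resolved data or a priori knowledge.
        Returns (naive estimate, corrected estimate).
        """
        naive = -math.log(mean_T)
        return naive, naive + math.log(1.0 + var_y / 2.0)

    # Synthetic fluctuating void fraction: y ~ N(1.0, 0.3^2), illustrative only
    random.seed(42)
    ys = [random.gauss(1.0, 0.3) for _ in range(100_000)]
    mean_T = sum(math.exp(-y) for y in ys) / len(ys)   # what the detector sees
    naive, corrected = corrected_attenuation(mean_T, 0.3 ** 2)
    ```

    With the true mean attenuation equal to 1.0, the naive estimate is biased low by roughly Var(y)/2 while the corrected estimate recovers it almost exactly.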

  15. Comparison of word-supply and word-analysis error-correction procedures on oral reading by mentally retarded children.

    Science.gov (United States)

    Singh, J; Singh, N N

    1985-07-01

    An alternating treatments design was used to measure the differential effects of two error-correction procedures (word supply and word analysis) and a no-training control condition on the number of oral-reading errors made by four moderately mentally retarded children. Results showed that when compared to the no-training control condition, both error-correction procedures greatly reduced the number of oral-reading errors of all subjects. The word-analysis method, however, was significantly more effective than was word supply. In terms of collateral behavior, the number of self-corrections of errors increased under both intervention conditions when compared to the baseline and no-training control conditions. For 2 subjects there was no difference in the rate of self-corrections under word analysis and word supply but for the other 2, a greater rate was achieved under word analysis.

  16. Testing corrections for paleomagnetic inclination error in sedimentary rocks: A comparative approach

    Science.gov (United States)

    Tauxe, Lisa; Kodama, Kenneth P.; Kent, Dennis V.

    2008-08-01

    Paleomagnetic inclinations in sedimentary formations are frequently suspected of being too shallow. Recognition and correction of shallow bias is therefore critical for paleogeographical reconstructions. This paper tests the reliability of the elongation/inclination (E/I) correction method in several ways. First we consider the E/I trends predicted by various PSV models. We explored the role of sample size on the reliability of the E/I estimates and found that for data sets smaller than ~100-150, the results were less reliable. The Giant Gaussian Process-type paleosecular variation models were all constrained by paleomagnetic data from lava flows of the last five million years. Therefore, to test whether the method can be used in more ancient times, we compare model predictions of E/I trends with observations from five Large Igneous Provinces since the early Cretaceous (Yemen, Kerguelen, Faroe Islands, Deccan and Paraná basalts). All data are consistent at the 95% level of confidence with the E/I trends predicted by the paleosecular variation models. The Paraná data set also illustrated the effect of unrecognized tilting and of combining data over a large latitudinal spread on the E/I estimates, underscoring the necessity of adhering to the two principal assumptions of the method. Then we discuss the geological implications of various applications of the E/I method. In general the E/I corrected data are more consistent with data from contemporaneous lavas, with predictions from the well constrained synthetic apparent polar wander paths, and other geological constraints. Finally, we compare the E/I corrections with corrections from an entirely different method of inclination correction: the anisotropy of remanence method of Jackson et al. [Jackson, M.J., Banerjee, S.K., Marvin, J.A., Lu, R., Gruber, W., 1991. Detrital remanence, inclination errors and anhysteretic remanence anisotropy: quantitative model and experimental results. Geophys. J. Int. 104, 95
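    The flattening model underlying both abstracts (#16 and #3) is the empirical law tan(I_obs) = f·tan(I_true), with flattening factor f ≤ 1. The E/I method estimates f from the shape of the direction distribution; once f is known, unflattening is a one-line inversion. The sketch below shows only that inversion, with f = 0.5 as a hypothetical value.

    ```python
    import math

    def flatten(inc_true_deg, f):
        """Empirical sedimentary flattening law: tan(I_obs) = f * tan(I_true)."""
        return math.degrees(math.atan(f * math.tan(math.radians(inc_true_deg))))

    def unflatten(inc_obs_deg, f):
        """Invert the flattening law to recover the true inclination."""
        return math.degrees(math.atan(math.tan(math.radians(inc_obs_deg)) / f))

    # With a hypothetical flattening factor f = 0.5, a true 60 deg inclination
    # is recorded as ~40.9 deg and is recovered exactly on inversion.
    observed = flatten(60.0, 0.5)
    recovered = unflatten(observed, 0.5)
    ```

    The shallow bias is largest at intermediate inclinations, which is why uncorrected sedimentary data can displace paleolatitude estimates by tens of degrees.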

  17. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that a longer code length also brings better range accuracy. Combining the SNR model and the CRLB model, it follows that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced into the range-accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It is shown that the depth error caused by fluctuation in the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  18. Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems

    DEFF Research Database (Denmark)

    Li, Bomin

    This PhD thesis addresses the design and application of forward error correction (FEC) in high-speed optical communications at 100 Gb/s and beyond. With the ever-growing internet traffic, FEC has been considered a strong and cost-effective way to improve the quality of transmission ... -complexity low-power-consumption FEC hardware implementation plays an important role in the next-generation energy-efficient networks. Thirdly, joint research is required for FEC-integrated applications, as the error distribution in channels relies on many factors such as non-linearity in long-distance optical ... and their associated experimental demonstration and hardware implementation. The demonstrated high CG, flexibility, robustness and scalability reveal the important role of FEC techniques in the next-generation high-speed, high-capacity, high-performance and energy-efficient fiber-optic data transmission networks.

  19. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    International Nuclear Information System (INIS)

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee

    2017-01-01

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
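    The screening step described here, finding time periods where a station's travel-time residuals jump away from its long-term baseline, can be caricatured in a few lines. This is a crude stand-in for the authors' residual analysis: monthly grouping, the threshold, and the toy residuals are all invented for illustration.

    ```python
    from statistics import mean

    def flag_timing_problems(monthly_residuals, threshold=1.0):
        """Flag months whose mean travel-time residual departs from the
        station's long-term baseline by more than `threshold` seconds.
        monthly_residuals: list of per-month lists of residuals (s).
        Returns (month index, offset) pairs; the offset is the proposed
        timing correction for that period (sign-reversed)."""
        baseline = mean(mean(month) for month in monthly_residuals)
        flags = []
        for i, month in enumerate(monthly_residuals):
            offset = mean(month) - baseline
            if abs(offset) > threshold:
                flags.append((i, round(offset, 2)))
        return flags

    # Eleven months of synthetic residuals; month 5 carries a ~5 s clock error
    data = ([[0.1, -0.1, 0.0]] * 5
            + [[5.0, 5.1, 4.9]]
            + [[0.0, 0.1, -0.1]] * 5)
    flags = flag_timing_problems(data)
    ```

    Real screening must also separate clock errors from genuine travel-time anomalies (mislocated events, model error), which is why the abstract restricts corrections to offsets lasting a month or longer.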

  20. Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert

    2014-01-01

    In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on-the-fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity ... the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often, based on the current traffic demand and bit-error-rate performance of the links through the network. The FEC scheme itself ... is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. In order to facilitate ...

  1. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    Science.gov (United States)

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  2. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  3. Testing correction for paleomagnetic inclination error in sedimentary rocks: a comparative approach

    Science.gov (United States)

    Tauxe, L.; Kodama, K. P.; Kent, D. V.

    2008-05-01

    Paleomagnetic inclinations in sedimentary formations are frequently suspected of being too shallow. Recognition and correction of shallow bias is therefore critical for paleogeographical reconstructions. The elongation/inclination (E/I) correction method of Tauxe and Kent (2004) relies on the twin assumptions that inclination flattening follows the empirical sedimentary flattening formula and that the distribution of paleomagnetic directions can be predicted from a paleosecular variation (PSV) model. We will test the reliability of the E/I correction method in several ways. First we consider the E/I trends predicted by various PSV models. The Giant Gaussian Process-type paleosecular variation models were all constrained by paleomagnetic data from lava flows of the last five million years. Therefore, to test whether the method can be used in more ancient times, we will compare model predictions of E/I trends with observations from four Large Igneous Provinces since the Jurassic (Yemen, Kerguelen, Faroe Islands, and Deccan basalts). All data are consistent at the 95% level of confidence with the elongation/inclination trends predicted by the paleosecular variation models. We will then discuss the geological implications of various applications of the E/I method. In general the E/I corrected data are more consistent with data from contemporaneous lavas, with predictions from the well constrained synthetic apparent polar wander paths, and other geological constraints. Finally, we will compare the E/I corrections with corrections from an entirely different method of inclination correction: the anisotropy of remanence method of Jackson et al. (1991), which relies on measurement of remanence and particle anisotropies of the sediments. In the two cases where a direct comparison can be made, the two methods give corrections that are consistent within error. In summary, it appears that the elongation/inclination method for recognizing and correcting the effects of

  4. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.

  5. School-based approaches to the correction of refractive error in children.

    Science.gov (United States)

    Sharma, Abhishek; Congdon, Nathan; Patel, Mehul; Gilbert, Clare

    2012-01-01

    The World Health Organization estimates that 13 million children aged 5-15 years worldwide are visually impaired from uncorrected refractive error. School vision screening programs can identify and treat or refer children with refractive error. We concentrate on the findings of various screening studies and attempt to identify key factors in the success and sustainability of such programs in the developing world. We reviewed original and review articles describing children's vision and refractive error screening programs published in English and listed in PubMed, Medline OVID, Google Scholar, and Oxford University Electronic Resources databases. Data were abstracted on study objective, design, setting, participants, and outcomes, including accuracy of screening, quality of refractive services, barriers to uptake, impact on quality of life, and cost-effectiveness of programs. Inadequately corrected refractive error is an important global cause of visual impairment in childhood. School-based vision screening carried out by teachers and other ancillary personnel may be an effective means of detecting affected children and improving their visual function with spectacles. The need for services and potential impact of school-based programs varies widely between areas, depending on prevalence of refractive error and competing conditions and rates of school attendance. Barriers to acceptance of services include the cost and quality of available refractive care and mistaken beliefs that glasses will harm children's eyes. Further research is needed in areas such as the cost-effectiveness of different screening approaches and impact of education to promote acceptance of spectacle-wear. School vision programs should be integrated into comprehensive efforts to promote healthy children and their families. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Correction Model of BeiDou Code Systematic Multipath Errors and Its Impacts on Single-frequency PPP

    Directory of Open Access Journals (Sweden)

    WANG Jie

    2017-07-01

    Full Text Available There are systematic multipath errors in BeiDou code measurements, which range from several decimeters to more than 1 meter. They can be divided into two categories: systematic variations in IGSO/MEO code measurements and in GEO code measurements. In this contribution, a methodology for correcting BeiDou GEO code multipath is proposed based on a Kalman filter algorithm. The standard deviation of the GEO MP series decreases by about 10%-16% after correction. Because code measurements carry substantial weight in single-frequency PPP, code systematic multipath errors degrade single-frequency PPP; our analysis indicates that these systematic errors introduce a bias of about 1 m. We then evaluated the improvement in single-frequency PPP accuracy after code multipath correction. The systematic errors of GEO code measurements are corrected by applying our proposed Kalman filter method, and those of IGSO and MEO code measurements by applying the elevation-dependent model proposed by Wanninger and Beer. Ten days of observations from four MGEX (Multi-GNSS Experiment) stations are processed. The results indicate that single-frequency PPP accuracy can be improved remarkably by applying code multipath correction: accuracy in the up direction improves by 65% after IGSO and MEO code multipath correction, and by a further 15% after GEO code multipath correction.
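    The abstract does not give the filter design, so the sketch below is only a minimal scalar Kalman filter tracking a slowly varying multipath bias modeled as a random walk; the process/observation variances and the toy MP series are illustrative, not the paper's tuning.

    ```python
    def kalman_smooth(observations, q=1e-4, r=0.25):
        """Scalar Kalman filter for a random-walk state observed in noise.
        q: process variance, r: observation variance (illustrative values).
        Returns the filtered estimate of the slowly varying bias."""
        x, p = observations[0], 1.0
        estimates = []
        for z in observations:
            p += q                 # predict: random-walk state inflates variance
            k = p / (p + r)        # Kalman gain
            x += k * (z - x)       # update with the new MP observation
            p *= (1.0 - k)
            estimates.append(x)
        return estimates

    # Toy MP series: a constant 0.5 m systematic bias plus alternating noise
    mp = [0.5 + (0.3 if i % 2 else -0.3) for i in range(200)]
    bias_track = kalman_smooth(mp)
    corrected = [z - b for z, b in zip(mp, bias_track)]
    ```

    Subtracting the tracked bias from the raw code observable is the correction step; in the paper the corrected code then feeds the single-frequency PPP solution.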

  7. Applying volumetric weather radar data for rainfall runoff modeling: The importance of error correction.

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.

    2009-04-01

    In the current study, half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is analyzed, sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half-year most of the rainfall has a stratiform character. Though radar and raingauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations the differences between the two measuring devices become even larger due to the occurrence of a bright band (the layer where ice particles start to melt, intensifying the radar reflectivity measurement). Under these circumstances the radar overestimates the amount of precipitation, and because in the Ardennes bright bands occur within 1000 m of the surface, their detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data leads to serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and raingauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km2 Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar

  8. Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor

    Directory of Open Access Journals (Sweden)

    Fang Tang

    2014-01-01

    Full Text Available Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single-slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 k·μm²·cycles/sample.
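    The digital error correction principle, letting a redundant bit in the fine stage absorb coarse-stage decision errors, can be shown with an idealized behavioral model. This is not the paper's circuit or algorithm: the window sizes and the injected error are illustrative.

    ```python
    def two_step_adc(x, coarse_error=0):
        """Idealized two-step conversion with digital error correction.

        A 3-bit coarse stage picks one of 8 segments of 256 codes; the fine
        stage digitizes the analog residue with one redundant bit of
        overrange (modeled as a representable window of -128..383 instead of
        0..255). A coarse mistake whose residue stays inside that window is
        absorbed when the two codes are summed digitally.
        x: ideal 11-bit code (0..2047); coarse_error injects a wrong decision.
        """
        coarse = x // 256 + coarse_error
        residue = x - coarse * 256            # analog residue seen by the SAR
        fine = max(-128, min(383, residue))   # fine stage's extended window
        return coarse * 256 + fine
    ```

    Coarse errors whose residue lands outside the redundant window are not recoverable, which is why the tolerance quoted in the abstract is bounded (3.125% quantization noise) rather than unlimited.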

  9. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategies of SAR satellites, such as large spatial and temporal baselines and irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least-squares inversion of an over-determined system. Such robust inversion allows us to focus more on understanding the different components of the InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with applications to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
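    The least-squares inversion of a fully connected interferogram network reduces to a small linear system: each interferogram observes the phase difference between two dates, and the per-date phases are the unknowns. The sketch below is a minimal unweighted version (PySAR uses a weighted inversion); the 4-date stack and its phases are invented.

    ```python
    import numpy as np

    def invert_network(pairs, obs, n_dates):
        """Least-squares inversion of a redundant interferogram network.
        pairs: list of (i, j) date indices; obs[k] = phi_j - phi_i.
        Date 0 is held fixed as the reference (phi_0 = 0). A weighted
        version would scale rows of A and obs by 1/sigma_k."""
        A = np.zeros((len(pairs), n_dates - 1))
        for k, (i, j) in enumerate(pairs):
            if i > 0:
                A[k, i - 1] = -1.0
            if j > 0:
                A[k, j - 1] = 1.0
        phi, *_ = np.linalg.lstsq(A, np.asarray(obs, dtype=float), rcond=None)
        return np.concatenate(([0.0], phi))

    # Hypothetical 4-date stack, fully connected network of 6 interferograms
    true_phi = np.array([0.0, 1.0, 3.0, 6.0])
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    obs = [true_phi[j] - true_phi[i] for i, j in pairs]
    est = invert_network(pairs, obs, n_dates=4)   # recovers the true phases
    ```

    The redundancy of the over-determined system is what makes phase-closure checks (feature 2 in the list above) possible: inconsistent triplets expose unwrapping errors before inversion.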

  10. A Bayesian ordinal logistic regression model to correct for interobserver measurement error in a geographical oral health study

    OpenAIRE

    LESAFFRE, Emmanuel; Mwalili, Samuel M.; Declerck, Dominique

    2005-01-01

    We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model, taking into account also the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression mode...

  11. Cointegration, error-correction, and the relationship between GDP and energy. The case of South Korea and Singapore

    International Nuclear Information System (INIS)

    Glasure, Yong U.; Lee, Aie-Rie

    1998-01-01

    This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and a unidirectional causal relationship from energy consumption to GDP for Singapore
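    The error-correction modeling referred to here is typically estimated with the two-step Engle-Granger procedure: first the long-run (cointegrating) regression, then a regression of first differences on the lagged equilibrium error. The sketch below uses simulated cointegrated series, not the Korean or Singaporean data, and plain OLS via numpy rather than an econometrics package.

    ```python
    import numpy as np

    def engle_granger_ecm(y, x):
        """Two-step Engle-Granger estimation sketch.
        Step 1: long-run regression y_t = a + b*x_t + u_t.
        Step 2: ECM  dy_t = c + gamma*dx_t + alpha*u_{t-1} + e_t,
        where alpha < 0 signals error correction toward equilibrium.
        Returns (c, gamma, alpha)."""
        X1 = np.column_stack([np.ones_like(x), x])
        coef1, *_ = np.linalg.lstsq(X1, y, rcond=None)
        u = y - X1 @ coef1                  # estimated error-correction term
        dy, dx = np.diff(y), np.diff(x)
        X2 = np.column_stack([np.ones_like(dx), dx, u[:-1]])
        coef2, *_ = np.linalg.lstsq(X2, dy, rcond=None)
        return tuple(coef2)

    # Simulated cointegrated pair: x is I(1) "energy", y a noisy "GDP" tied to it
    rng = np.random.default_rng(1)
    x = np.cumsum(rng.normal(size=500))
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=500)
    c, gamma, alpha = engle_granger_ecm(y, x)
    ```

    A significantly negative alpha is the error-correction evidence the abstract appeals to: deviations from the long-run GDP-energy relationship are pulled back toward equilibrium.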

  12. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, a significant issue in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
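    The maximum-likelihood fusion step mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' implementation: under independent Gaussian noise, the ML estimate of a common distance from several neighboring sensors is the inverse-variance weighted mean of their readings.

```python
# Illustrative sketch (assumed model, not the paper's code): ML fusion of
# distance readings from neighboring ultrasonic sensors with independent
# Gaussian noise reduces to an inverse-variance weighted average.

def ml_fuse(readings, variances):
    """Fuse neighbor readings (cm) given each sensor's noise variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, readings)) / total

# Three neighboring sensors observe the same target distance; the most
# precise sensor (smallest variance) dominates the fused estimate.
fused = ml_fuse([100.4, 99.8, 100.1], [0.04, 0.16, 0.08])
```

    The fused value lies between the individual readings but closest to the lowest-variance sensor, which is the behavior that makes neighbor information useful for error compensation.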

  13. A correction for emittance-measurement errors caused by finite slit and collector widths

    International Nuclear Information System (INIS)

    Connolly, R.C.

    1992-01-01

    One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs

  14. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Xingming Sun

    2015-07-01

    Full Text Available Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at a high spatial density. A novel method, the meteorology wireless sensor network relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  15. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    Science.gov (United States)

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at a high spatial density. A novel method, the meteorology wireless sensor network relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
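    The correction scheme described in the abstract can be sketched in a few lines. This is a hedged simplification (the paper's actual ATE-SR correspondence is not given here): fit a linear mapping from SR to ATE on one month of paired calibration data, then subtract the SR-predicted error from later sensed readings.

```python
# Illustrative sketch (assumed linear ATE-SR relation, synthetic numbers):
# calibrate on paired data, then correct sensed AT in real time from SR.

def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic calibration month: sensed AT error grows with solar radiation.
sr = [0, 200, 400, 600, 800]             # W/m^2
ate = [0.0, 0.5, 1.0, 1.5, 2.0]          # deg C
a, b = fit_line(sr, ate)

def correct_at(sensed_at, current_sr):
    """Subtract the SR-predicted error from the sensed air temperature."""
    return sensed_at - (a * current_sr + b)
```

    With these synthetic numbers, a sensing node reading 25.0 deg C under 400 W/m^2 of radiation would be corrected downward by the 1.0 deg C of predicted radiative error.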

  16. Writing and Speech Recognition : Observing Error Correction Strategies of Professional Writers

    NARCIS (Netherlands)

    Leijten, M.A.J.C.

    2007-01-01

    In this thesis we describe the organization of speech recognition based writing processes. Writing can be seen as a visual representation of spoken language: a combination that speech recognition takes full advantage of. In the field of writing research, speech recognition is a new writing

  17. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  18. Psychometric properties of the national eye institute refractive error correction quality-of-life questionnaire among Iranian patients

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    2013-01-01

    Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.

  19. Visual outcome after correcting the refractive error of large pupil patients with wavefront-guided ablation

    Directory of Open Access Journals (Sweden)

    Khalifa MA

    2012-12-01

    Full Text Available Mounir A Khalifa,1,2 Waleed A Allam,1,2 Mohamed S Shaheen2,3; 1Ophthalmology Department, Tanta University Eye Hospital, Tanta, Egypt; 2Horus Vision Correction Center, Alexandria, Egypt; 3Ophthalmology Department, Alexandria University, Alexandria, Egypt. Purpose: To investigate the efficacy and predictability of wavefront-guided laser in situ keratomileusis (LASIK) treatments using iris registration (IR) technology for the correction of refractive errors in patients with large pupils. Setting: Horus Vision Correction Center, Alexandria, Egypt. Methods: Prospective noncomparative study including a total of 52 eyes of 30 consecutive laser refractive correction candidates with large mesopic pupil diameters and myopia or myopic astigmatism. Wavefront-guided LASIK was performed in all cases using the VISX STAR S4 IR excimer laser platform. Visual, refractive, aberrometric and mesopic contrast sensitivity (CS) outcomes were evaluated during a 6-month follow-up. Results: Mean mesopic pupil diameter ranged from 8.0 mm to 9.4 mm. A significant improvement in uncorrected distance visual acuity (UCDVA) (P < 0.01) was found postoperatively, which was consistent with a significant refractive correction (P < 0.01). No significant change was detected in corrected distance visual acuity (CDVA) (P = 0.11). Efficacy index (the ratio of postoperative UCDVA to preoperative CDVA) and safety index (the ratio of postoperative CDVA to preoperative CDVA) were calculated. Mean efficacy and safety indices were 1.06 ± 0.33 and 1.05 ± 0.18, respectively, and 92.31% of eyes had a postoperative spherical equivalent within ±0.50 diopters (D). Manifest refractive spherical equivalent improved significantly (P < 0.05) from a preoperative level of −3.1 ± 1.6 D (range −6.6 to 0 D) to −0.1 ± 0.2 D (range −1.3 to 0.1 D) at 6 months postoperative. No significant changes were found in mesopic CS (P ≥ 0.08), except CS for three cycles/degree, which improved significantly (P = 0

  20. Refractive error and vision correction in a general sports-playing population.

    Science.gov (United States)

    Zeri, Fabrizio; Pitzalis, Sabrina; Di Vizio, Assunta; Ruffinatto, Tiziana; Egizi, Fabrizio; Di Russo, Francesco; Armstrong, Richard; Naroo, Shehzad A

    2018-03-01

    To evaluate, in an amateur sports-playing population, the prevalence of refractive error, the type of vision correction used during sport and attitudes toward different kinds of vision correction used in various types of sports. A questionnaire was used for people engaging in sport and data was collected from sport centres, gyms and universities that focused on the motor sciences. One thousand, five hundred and seventy-three questionnaires were collected (mean age 26.5 ± 12.9 years; 63.5 per cent male). Nearly all (93.8 per cent) subjects stated that their vision had been checked at least once. Fifty-three subjects (3.4 per cent) had undergone refractive surgery. Of the remainder who did not have refractive surgery (n = 1,519), 580 (38.2 per cent) reported a defect of vision, 474 (31.2 per cent) were myopic, 63 (4.1 per cent) hyperopic and 241 (15.9 per cent) astigmatic. Logistic regression analysis showed that the best predictors for myopia prevalence were gender (p prevalence of outdoor activity have lower prevalence of myopia. Contact lens penetration over the study sample was 18.7 per cent. Contact lenses were the favourite system of correction among people interviewed compared to spectacles and refractive surgery (p prevalence in the adult population. However, subjects engaging in outdoor sports had lower rates of myopia prevalence. Penetration of contact lens use in sport was four times higher than the overall adult population. Contact lenses were the preferred system of correction in sports compared to spectacles or refractive surgery, but this preference was affected by the type of sport practised and by the age and level of sports activity for which the preference was required. © 2017 Optometry Australia.

  1. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    Science.gov (United States)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
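    The two-sub-model structure described above can be illustrated with a deliberately minimal sketch. This is an assumed simplification, not the paper's FEEMD/VMD + CS-ELM pipeline: a base forecaster produces a prediction, its in-sample errors are modeled separately, and the error forecast is added back to the base forecast.

```python
# Minimal sketch of the error-correction idea (assumed simplification of the
# paper's pipeline): forecast, model the in-sample errors, add them back.

def naive_forecast(series):
    """Base model: persistence (tomorrow = today)."""
    return series[-1]

def error_model(errors):
    """Error model: mean of recent one-step forecast errors."""
    return sum(errors) / len(errors)

history = [80.0, 84.0, 88.0, 92.0]       # daily PM10, ug/m^3 (synthetic)
# In-sample one-step errors of the persistence model: actual - forecast.
errors = [history[i] - history[i - 1] for i in range(1, len(history))]

base = naive_forecast(history)
corrected = base + error_model(errors)   # error forecast added back
```

    On this synthetic upward-trending series the persistence model underpredicts by a constant 4.0 each day, and the error-correction stage removes exactly that bias; the paper replaces both toy models with decomposed CS-ELM forecasters.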

  2. The design and use of an error correction information system for NASTRAN

    Science.gov (United States)

    Rosser, D. C., Jr.

    1974-01-01

    Error Correction Information System (ECIS) is a system for a two-way transmittal of NASTRAN maintenance information via a data base stored on a nationwide accessible computer. ECIS consists of two data bases. The first data base is used for comments, reporting NASTRAN Software Problem Reports (SPR's) and bookkeeping information which can be updated by the user or the NASTRAN Office. The second data base is used by the NSMO to store all SPR information and updates. The hardware needed by an accessing user is any desktop computer terminal and a telephone to communicate with the central computer. The instruction format is an engineering oriented language and requires less than an hour to obtain a working knowledge of its functions.

  3. Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression

    CERN Document Server

    Slinko, Arkadii

    2015-01-01

    This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data that must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with a knowledge of the basics of algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves, sufficient to understand such real-life applications as cryptography, secret sharing, error correction, fingerprinting and compression of information. It is the first to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained, with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.

  4. 'Ancient episteme' and the nature of fossils: a correction of a modern scholarly error.

    Science.gov (United States)

    Jordan, J M

    2016-04-01

    Beginning in the nineteenth century and continuing down to the present, many authors writing on the history of geology and paleontology have attributed the theory that fossils were inorganic formations produced within the earth, rather than the deposits of living organisms, to the ancient Greeks and Romans. Some have even gone so far as to claim this was the consensus view from the classical period up through the Middle Ages. In fact, such a notion was entirely foreign to ancient and medieval thought and only appeared within the manifold of 'Renaissance episteme,' the characteristics of which have often been projected backwards by some historians onto earlier periods. This paper endeavors to correct this error, explain the development of the Renaissance view, describe certain ancient precedents thereof, and trace the history of the misinterpretation in the literature.

  5. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  6. IDENTIFYING BANK LENDING CHANNEL IN INDONESIA: A VECTOR ERROR CORRECTION APPROACH WITH STRUCTURAL BREAK

    Directory of Open Access Journals (Sweden)

    Akhsyim Afandi

    2017-03-01

    Full Text Available There is a question whether monetary policy works through the bank lending channel, which requires that a monetary-induced change in bank loans originates from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors, identified as bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.

  7. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    Science.gov (United States)

    Saeki, Hiroshi; Magome, Tamotsu

    2014-10-01

    To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This precise method improves measurement accuracy by correctly reducing the pressure-measurement errors caused by electrons originating from the external environment and by electrons originating from the primary gauge filament as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming the performance in reducing errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10-5 Pa to 10-8 Pa. A subsequent experiment using a sleeve confirmed the performance in reducing the error caused by spatial conditions and showed that the improved function was available.

  8. The use of concept maps to detect and correct concept errors (mistakes

    Directory of Open Access Journals (Sweden)

    Ladislada del Puy Molina Azcárate

    2013-02-01

    Full Text Available This work proposes to detect and correct concept errors (EECC) in order to achieve meaningful learning (AS). The behaviourist model does not meet the demands of meaningful learning, which requires bringing together thought, feeling and action to lead students to both commitment and responsibility. In order to respond to society's demands regarding knowledge and information, it is necessary to change the way of teaching and learning (from a behaviourist model to a constructivist model). In this context it is important not only to learn meaningfully but also to create knowledge, so as to develop argumentative, creative and critical thinking, and EECC are an obstacle to this. This study tries to get rid of EECC in order to attain meaningful learning. For this, it is essential to elaborate a Teaching Module (MI). This Teaching Module implies the treatment of concept errors by a teacher able to change the dynamics of the group in the classroom. The MI was used in sixth grade of primary school and first grade of secondary school in some state-assisted schools in the north of Argentina (Tucumán and Jujuy). After evaluation, the results showed large positive changes in the experimental groups in both attitude and academic results. Meaningful learning was shown through pupils' creativity, their expressions and their ability to put learning into practice in everyday life.

  9. A forward error correction technique using a high-speed, high-rate single chip codec

    Science.gov (United States)

    Boyd, R. W.; Hartman, W. F.; Jones, Robert E.

    1989-01-01

    The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
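    The burst-mode code-word format stated above fixes the code rate by simple arithmetic: 32n data bits plus 32 overhead bits give a rate of 32n / (32n + 32) = n / (n + 1), so any n >= 7 meets the stated rate of 7/8 or greater. A few lines make the check explicit (the specific n values below are illustrative, not from the abstract):

```python
# Code rate of a burst-mode word with 32n data bits and 32 overhead bits:
# rate = 32n / (32n + 32) = n / (n + 1).

def code_rate(n):
    data_bits = 32 * n
    overhead_bits = 32
    return data_bits / (data_bits + overhead_bits)

# n = 7 gives exactly 7/8; larger n gives higher rates.
rates = {n: code_rate(n) for n in (7, 15, 31)}
```
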

  10. Reduction of the elevator illusion from continued hypergravity exposure and visual error-corrective feedback

    Science.gov (United States)

    Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.

    1996-01-01

    Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.

  11. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  12. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect the road profiles experienced by a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of the gyro attitudes concentrates in the low frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high frequency part and a low frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitude signals are separated from the low frequency road profile by the method of sliding block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road profile measurement at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
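    The double-integration step mentioned in the abstract can be sketched as follows. This is an assumed simplification with synthetic numbers, not the authors' implementation: vertical body displacement is recovered by integrating the measured acceleration twice (trapezoidal rule) and then removed from the laser height signal so the corrected profile excludes body vibration.

```python
# Illustrative sketch (assumed simplification, synthetic data): correct a
# laser-measured road profile for vehicle body heave obtained by double
# integration of vertical acceleration.

def integrate(samples, dt):
    """Cumulative trapezoidal integration; same length as the input."""
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(out[-1] + 0.5 * (samples[i - 1] + samples[i]) * dt)
    return out

dt = 0.01                                  # 100 Hz sampling, s
accel = [0.0, 1.0, 1.0, 1.0, 0.0]          # m/s^2, vertical body acceleration
velocity = integrate(accel, dt)            # m/s
displacement = integrate(velocity, dt)     # m, body heave

laser_height = [0.100, 0.101, 0.103, 0.105, 0.106]   # m, sensor-to-road
corrected_profile = [h - d for h, d in zip(laser_height, displacement)]
```

    In practice, drift from the double integration is why the paper restricts this correction to the high frequency part of the frequency-division scheme.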

  13. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
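    One of the comparison methods named above, univariate slope and bias correction (SBC), is simple enough to sketch directly. This is a hedged illustration with synthetic numbers, not the SPEC method itself: predictions made by the old calibration model on a few standardization samples are regressed against reference values, and the fitted slope and bias then correct subsequent predictions.

```python
# Hedged sketch of univariate slope-and-bias correction (SBC), a comparison
# method from the abstract: regress old-model predictions on reference
# values for a few standardization samples, then correct future predictions.

def fit_sbc(predicted, reference):
    """Least-squares slope and bias mapping predictions to references."""
    n = len(predicted)
    mp = sum(predicted) / n
    mr = sum(reference) / n
    spp = sum((p - mp) ** 2 for p in predicted)
    spr = sum((p - mp) * (r - mr) for p, r in zip(predicted, reference))
    slope = spr / spp
    return slope, mr - slope * mp

# Standardization samples measured after the instrument change (synthetic):
predicted = [1.0, 2.0, 3.0, 4.0]     # old model's predictions
reference = [1.2, 2.3, 3.4, 4.5]     # known reference values
slope, bias = fit_sbc(predicted, reference)

def correct(p):
    """Apply the fitted correction to a new prediction."""
    return slope * p + bias
```

    SPEC, as described in the abstract, aims at the same goal as SBC but achieves lower RMSEP with few standardization samples while requiring less information than piecewise direct standardization.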

  14. Writing and Speech Recognition : Observing Error Correction Strategies of Professional Writers

    OpenAIRE

    Leijten, M.A.J.C.

    2007-01-01

    In this thesis we describe the organization of speech recognition based writing processes. Writing can be seen as a visual representation of spoken language: a combination that speech recognition takes full advantage of. In the field of writing research, speech recognition is a new writing instrument that may cause a shift in writing process research because the underlying processes are changing. In addition to this, we take advantage of on of the weak points of speech recognition, namely the...

  15. Correcting intensity loss errors in the absence of texture-free reference samples during pole figure measurement

    International Nuclear Information System (INIS)

    Saleh, Ahmed A.; Vu, Viet Q.; Gazder, Azdiar A.

    2016-01-01

    Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.

  16. Optical correction of refractive error for preventing and treating eye symptoms in computer users.

    Science.gov (United States)

    Heus, Pauline; Verbeek, Jos H; Tikka, Christina

    2018-04-10

    Computer users frequently complain about problems with seeing and functioning of the eyes. Asthenopia is a term generally used to describe symptoms related to (prolonged) use of the eyes like ocular fatigue, headache, pain or aching around the eyes, and burning and itchiness of the eyelids. The prevalence of asthenopia during or after work on a computer ranges from 46.3% to 68.5%. Uncorrected or under-corrected refractive error can contribute to the development of asthenopia. A refractive error is an error in the focusing of light by the eye and can lead to reduced visual acuity. There are various possibilities for optical correction of refractive errors including eyeglasses, contact lenses and refractive surgery. To examine the evidence on the effectiveness, safety and applicability of optical correction of refractive error for reducing and preventing eye symptoms in computer users. We searched the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; Embase; Web of Science; and OSH update, all to 20 December 2017. Additionally, we searched trial registries and checked references of included studies. We included randomised controlled trials (RCTs) and quasi-randomised trials of interventions evaluating optical correction for computer workers with refractive error for preventing or treating asthenopia and their effect on health related quality of life. Two authors independently assessed study eligibility and risk of bias, and extracted data. Where appropriate, we combined studies in a meta-analysis. We included eight studies with 381 participants. Three were parallel group RCTs, three were cross-over RCTs and two were quasi-randomised cross-over trials. All studies evaluated eyeglasses, there were no studies that evaluated contact lenses or surgery. Seven studies evaluated computer glasses with at least one focal area for the distance of the computer screen with or without additional focal areas in presbyopic persons. Six studies compared computer

  17. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
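The bias-estimation step described in this record can be sketched in a few lines (a minimal illustration with made-up increment fields; the array shapes, values, and function names are ours, not from the GFS code):

```python
import numpy as np

# Hypothetical stack of 6-hr analysis increments (e.g., temperature, K)
# on a coarse lat-lon grid, one field per analysis cycle.
rng = np.random.default_rng(0)
increments = rng.normal(0.2, 1.0, size=(120, 18, 36))  # (cycles, lat, lon)

dt_hours = 6.0

# Assuming linear error growth over the 6-hr window, the time-averaged
# increment divided by 6 hr estimates the model bias tendency (K/hr).
bias_tendency = increments.mean(axis=0) / dt_hours

# Online correction: add the estimated bias tendency as a forcing term
# in the model tendency equation at every time step.
def corrected_tendency(model_tendency, bias):
    return model_tendency + bias

print(bias_tendency.shape)  # (18, 36)
```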

  18. Phonetique corrective et methodologie de la recherche du systeme des fautes (Corrective Phonetics and Research Methodology of Error Patterns)

    Science.gov (United States)

    Lebrun, Claire

    1976-01-01

    This article analyzes three studies undertaken to scientifically define error patterns, and outlines a methodology for investigating them. The studies concern native English speakers learning French. (Text is in French.) (CLK)

  19. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the SCM in hybrid storage with large SCM capacity, because the SCM is accessed frequently. In contrast, a strong but long-latency LDPC ECC can be applied to the NAND flash in hybrid storage with large SCM capacity, because the large-capacity SCM improves the storage performance.

  20. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=+-1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  1. Crosstalk error correction through dynamical decoupling of single-qubit gates in capacitively coupled singlet-triplet semiconductor spin qubits

    Science.gov (United States)

    Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.

    2018-01-01

    In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.

  2. Error Correcting Coding of Telemetry Information for Channel with Random Bit Inversions and Deletions

    Directory of Open Access Journals (Sweden)

    M. A. Elshafey

    2014-01-01

    This paper presents a method of error-correcting coding of digital information. A feature of this method is its treatment of bit inversions and skipped (deleted) bits caused by a loss of synchronization between the receiving and transmitting devices or by other factors. The article gives a brief overview of the features, characteristics, and modern methods of constructing LDPC and convolutional codes, and considers a general model of the communication channel that takes into account the probabilities of bit inversion, deletion, and insertion. The proposed coding scheme is based on a combination of LDPC coding and convolutional coding. A comparative analysis of the proposed combined coding scheme and a coding scheme containing only an LDPC coder is performed; both schemes have the same coding rate. Experiments were carried out on two models of communication channels at different probability values of bit inversion and deletion. The first model allows only random bit inversion, while the other allows both random bit inversion and deletion. The experiments also analyze the decoding delay of the convolutional coder, and the results demonstrate the ability of the proposed coding scheme to improve the efficiency of recovering data transmitted over a communication channel with noise that causes random bit inversions and deletions, without decreasing the coding rate.
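The channel model described in this record (random bit inversions plus deletions) can be illustrated with a toy simulation; the probabilities and function name below are ours, not the paper's:

```python
import random

def channel(bits, p_inv, p_del, seed=0):
    """Toy channel: each transmitted bit is deleted with probability
    p_del; surviving bits are inverted with probability p_inv."""
    rnd = random.Random(seed)
    received = []
    for b in bits:
        if rnd.random() < p_del:
            continue              # deletion: the receiver loses this bit
        if rnd.random() < p_inv:
            b ^= 1                # inversion
        received.append(b)
    return received

tx = [1, 0, 1, 1, 0, 0, 1, 0] * 125       # 1000 transmitted bits
rx = channel(tx, p_inv=0.01, p_del=0.005)
# Deletions shorten the stream and desynchronize the receiver, which is
# why a plain block code alone struggles on this channel.
print(len(tx), len(rx))
```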

  3. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    Science.gov (United States)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing by 2.17 times the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.

  4. Quantum Error Correction: Optimal, Robust, or Adaptive? Or, Where is The Quantum Flyball Governor?

    Science.gov (United States)

    Kosut, Robert; Grace, Matthew

    2012-02-01

    In The Human Use of Human Beings: Cybernetics and Society (1950), Norbert Wiener introduces feedback control in this way: ``This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback ... It is the function of control ... to produce a temporary and local reversal of the normal direction of entropy.'' The classic classroom example of feedback control is the all-mechanical flyball governor used by James Watt in the 18th century to regulate the speed of rotating steam engines. What is it that is so compelling about this apparatus? First, it is easy to understand how it regulates the speed of a rotating steam engine. Secondly, and perhaps more importantly, it is a part of the device itself. A naive observer would not distinguish this mechanical piece from all the rest. So it is natural to ask, where is the all-quantum device which is self-regulating, i.e., the Quantum Flyball Governor? Is the goal of quantum error correction (QEC) to design such a device? Developing the computational and mathematical tools to design this device is the topic of this talk.

  5. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
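The reference sample compensation idea admits a compact sketch: fit a low-order polynomial to the artificial displacement measured on the stationary reference sample, then subtract the fitted model from the test sample's field. The 1-D data and coefficients below are fabricated for illustration:

```python
import numpy as np

# Hypothetical artificial displacement (in voxels) along one axis,
# as detected by DVC on the stationary reference sample.
z = np.linspace(-1.0, 1.0, 21)
artifact = 0.05 + 0.12 * z + 0.03 * z**2            # scanner self-heating drift
measured_ref = artifact + np.random.default_rng(1).normal(0, 1e-3, z.size)

# Fit the parametric polynomial model to the reference displacements...
coeffs = np.polyfit(z, measured_ref, deg=2)

# ...and remove the modeled artificial deformation from the test sample.
true_deformation = 0.20 * z                         # what we want to recover
measured_test = true_deformation + artifact
corrected_test = measured_test - np.polyval(coeffs, z)

print(np.max(np.abs(corrected_test - true_deformation)))  # ~ noise level
```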

  6. Assessment of cassava supply response in Nigeria using vector error correction model (VECM)

    Directory of Open Access Journals (Sweden)

    Obayelu Oluwakemi Adeola

    2016-12-01

    The response of agricultural commodities to changes in price is an important factor in the success of any reform programme in the agricultural sector of Nigeria. The producers of traditional agricultural commodities, such as cassava, face the world market directly. Consequently, the producer price of cassava has become unstable, which is a disincentive for both its production and trade. This study investigated cassava supply response to changes in price. Data collected from FAOSTAT from 1966 to 2010 were analysed using the Vector Error Correction Model (VECM) approach. The results of the VECM for the estimation of short-run adjustment of the variables toward their long-run relationship showed a linear deterministic trend in the data, and that area cultivated and own price jointly explained 74% and 63% of the variation in Nigerian cassava output in the short run and long run respectively. Cassava prices (P<0.001) and land cultivated (P<0.1) had a positive influence on cassava supply in the short run. The short-run price elasticity was 0.38, indicating that price policies were effective in the short-run promotion of cassava production in Nigeria. However, in the long run cassava supply was not significantly responsive to price incentives. This suggests that price policies are not effective in the long-run promotion of cassava production in the country owing to instability in governance and government policies.
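The error-correction mechanism behind a VECM can be shown with a minimal two-step (Engle-Granger style) sketch on synthetic data; this illustrates the general technique, not the study's actual estimation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Synthetic cointegrated pair: a "price" series x (random walk) and an
# "output" series y tied to it through a long-run relation.
x = np.cumsum(rng.normal(0.0, 1.0, n))
y = 0.4 * x + rng.normal(0.0, 0.5, n)

# Step 1: estimate the long-run relation and the error-correction term.
beta = np.polyfit(x, y, 1)[0]
ect = y - beta * x

# Step 2: regress the short-run dynamics on the lagged ECT.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
coef = np.linalg.lstsq(X, dy, rcond=None)[0]

# coef[1] is the short-run elasticity; coef[2] is the adjustment speed,
# which must be negative for the system to pull back to equilibrium.
print(round(beta, 2), coef[2] < 0)
```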

  7. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
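Spearman's attenuation formula itself is one line; a quick sketch (the numbers are invented):

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation:
    r_true = r_observed / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the two measures' reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed correlation 0.42 between measures with reliabilities 0.80 and 0.70:
print(round(disattenuate(0.42, 0.80, 0.70), 3))  # 0.561
```

A partial correction in the sense of the title can be obtained by correcting only one of the two variables, i.e. dividing by sqrt(rel_x) alone; the exact modification proposed in the article is not reproduced here.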

  8. An Experimental Study of Medical Error Explanations: Do Apology, Empathy, Corrective Action, and Compensation Alter Intentions and Attitudes?

    Science.gov (United States)

    Nazione, Samantha; Pace, Kristin

    2015-01-01

    Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.

  9. The Effects of Two Methods of Error Correction on L2 Writing: The Case of Acquisition of the Spanish Preterite and Imperfect

    Science.gov (United States)

    Munoz, Carlos A.

    2011-01-01

    Very often, second language (L2) writers commit the same type of errors repeatedly, despite being corrected directly or indirectly by teachers or peers (Semke, 1984; Truscott, 1996). Apart from discouraging teachers from providing error correction feedback, this also makes them hesitant as to what form of corrective feedback to adopt. Ferris…

  10. The Influence of Educational Programme on Teachers' Error Correction Preferences in the Speaking Skill: Insights from English as a Foreign Language Context

    Science.gov (United States)

    Debreli, Emre; Onuk, Nazife

    2016-01-01

    In the area of language teaching, corrective feedback is one of the popular and hotly debated topics that have been widely explored to date. A considerable number of studies on students' preferences of error correction and the effects of error correction approaches on student achievement do exist. Moreover, much on teachers' preferences of error…

  11. A Study of Students and Teachers' Preferences and Attitudes towards Correction of Classroom Written Errors in Saudi EFL Context

    Science.gov (United States)

    Hamouda, Arafat

    2011-01-01

    It is no doubt that teacher written feedback plays an essential role in teaching writing skill. The present study, by use of questionnaire, investigates Saudi EFL students' and teachers' preferences and attitudes towards written error corrections. The study also aims at identifying the difficulties encountered by teachers and students during the…

  12. A Comparative Study of EFL Teachers' and Intermediate High School Students' Perceptions of Written Corrective Feedback on Grammatical Errors

    Science.gov (United States)

    Jodaie, Mina; Farrokhi, Farahman; Zoghi, Masoud

    2011-01-01

    This study was an attempt to compare EFL teachers' and intermediate high school students' perceptions of written corrective feedback on grammatical errors and also to specify their reasons for choosing comprehensive or selective feedback and some feedback strategies over some others. To collect the required data, the student version of…

  13. Evaluating the Performance Diagnostic Checklist-Human Services to Assess Incorrect Error-Correction Procedures by Preschool Paraprofessionals

    Science.gov (United States)

    Bowe, Melissa; Sellers, Tyra P.

    2018-01-01

    The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…

  14. Forward error correction and its impact on high-data-rate, free-space laser communication system design

    Science.gov (United States)

    Hemmati, F.; Paul, D. K.; Marshalek, R. G.

    1990-07-01

    This paper discusses the use of forward error correction (FEC) in a 300 to 1000 Mbit/s free-space optical communications link. It also considers the tradeoffs involved in applying block codes or convolutional codes, emphasizing the peak and average power limitations of GaAlAs diode laser sources. Direct-detection optical receivers are assumed throughout. The application of FEC technology to a high-data-rate optical communications system is discussed, including available coding gain, correction for both random errors and mispointing-induced burst errors, and electronic implementation difficulties. This is followed by a discussion of the major system benefits derivable from FEC. Consideration is given to using the available coding gain for reducing diode laser source power, aperture size, or fine tracking accuracy. Regarding optical system design, it is most favorable to apply the coding gain toward reducing diode laser power requirements.
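The way coding gain trades against transmitter power can be made concrete with one line of link-budget arithmetic (the 4 dB figure is an assumed example, not a value from the paper):

```python
# A coding gain of G dB lowers the required received power by G dB,
# which scales the needed laser output by a factor of 10**(-G/10).
def power_ratio(gain_db):
    return 10 ** (-gain_db / 10.0)

print(round(power_ratio(4.0), 3))  # 0.398 -> ~60% less laser power needed
```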

  15. Sensory feedback, error correction, and remapping in a multiple oscillator model of place cell activity

    Directory of Open Access Journals (Sweden)

    Joseph D. Monaco

    2011-09-01

    Mammals navigate by integrating self-motion signals (‘path integration’) and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6–10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of ‘partial remapping’ responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.
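The core oscillatory-interference mechanism can be sketched as two cosines beating against each other. This toy (constant speed, a single velocity-controlled oscillator, made-up gain beta) is our illustration and is far simpler than the multi-oscillator model in the paper:

```python
import numpy as np

f_theta = 8.0        # Hz, baseline theta rhythm
beta = 0.05          # Hz per (cm/s), hypothetical velocity gain
speed = 20.0         # cm/s, constant running speed

t = np.linspace(0.0, 5.0, 5000)             # 5 s of running
carrier = np.cos(2 * np.pi * f_theta * t)
vco = np.cos(2 * np.pi * (f_theta + beta * speed) * t)

# Thresholded interference of the two oscillators: the beat envelope
# repeats every 1/(beta*speed) = 1 s of running, i.e. every
# speed/(beta*speed) = 1/beta = 20 cm of track, giving spatially
# periodic (grid-like) firing.
activity = np.maximum(carrier + vco, 0.0)
print(activity.shape)
```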

  16. Impact of energy technology patents in China: Evidence from a panel cointegration and error correction model

    International Nuclear Information System (INIS)

    Li, Ke; Lin, Boqiang

    2016-01-01

    Enhancing energy technology innovation performance, which is widely measured by energy technology patents through energy technology research and development (R&D) activities, is a fundamental way to implement energy conservation and emission abatement. This study analyzes the effects of R&D investment activities, economic growth, and energy price on energy technology patents in 30 provinces of China over the period 1999–2013. Several unit root tests indicate that all the above variables are generated by panel unit root processes, and a panel cointegration model is confirmed among the variables. In order to ensure the consistency of the estimators, the Fully-Modified OLS (FMOLS) method is adopted, and the results indicate that R&D investment activities and economic growth have positive effects on energy technology patents while energy price has a negative effect. However, the panel error correction models indicate that the cointegration relationship helps to promote economic growth, but it reduces R&D investment and energy price in the short term. Therefore, market-oriented measures including financial support and technical transformation policies for the development of low-carbon energy technologies, an effective energy price mechanism, especially the targeted fossil-fuel subsidies and their die-away mode, are vital in promoting China's energy technology innovation.
    Highlights:
    • Energy technology patents in China are analyzed.
    • Relationship between energy patents and funds for R&D activities are analyzed.
    • China's energy price system hinders energy technology innovation.
    • Some important implications for China's energy technology policy are discussed.
    • A panel cointegration model with FMOLS estimator is used.

  17. Correction of an input function for errors introduced with automated blood sampling

    Energy Technology Data Exchange (ETDEWEB)

    Schlyer, D.J.; Dewey, S.L. [Brookhaven National Lab., Upton, NY (United States)

    1994-05-01

    Accurate kinetic modeling of PET data requires a precise arterial plasma input function. The use of automated blood sampling machines has greatly improved the accuracy, but errors can be introduced by the dispersion of the radiotracer in the sampling tubing. This dispersion results from three effects. The first is the spreading of the radiotracer in the tube due to mass transfer. The second is due to the mechanical action of the peristaltic pump and can be determined experimentally from the width of a step function. The third is the adsorption of the radiotracer on the walls of the tubing during transport through the tube. This is a more insidious effect, since the amount recovered from the end of the tube can be significantly different from that introduced into the tubing. We have measured the simple mass transport using [¹⁸F]fluoride in water, which we have shown to be quantitatively recovered with no interaction with the tubing walls. We have also carried out experiments with several radiotracers including [¹⁸F]Haloperidol, [¹¹C]L-deprenyl, [¹⁸F]N-methylspiroperidol ([¹⁸F]NMS) and [¹¹C]buprenorphine. In all cases there was some retention of the radiotracer by untreated silicone tubing. The amount retained in the tubing ranged from 6% for L-deprenyl to 30% for NMS. The retention of the radiotracer was essentially eliminated after pretreatment with the relevant unlabeled compound. For example, less than 2% of the [¹⁸F]NMS was retained in tubing treated with unlabelled NMS. Similar results were obtained with baboon plasma, although the amount retained in the untreated tubing was less in all cases. From these results it is possible to apply a mathematical correction to the measured input function to account for mechanical dispersion, and to apply a chemical passivation to the tubing to reduce the dispersion due to adsorption of the radiotracer on the tubing walls.
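If the tubing's dispersion kernel is known (e.g. from the step-function measurement mentioned above), the mechanical part of the correction amounts to a deconvolution. A noise-free sketch with fabricated curves (real data would need regularization):

```python
import numpy as np

t = np.arange(0.0, 120.0, 1.0)                       # seconds
true_input = t * np.exp(-t / 10.0)                   # toy arterial curve

tau = 5.0                                            # assumed dispersion constant
kernel = np.exp(-t / tau)
kernel /= kernel.sum()                               # unit-area kernel

measured = np.convolve(true_input, kernel)           # dispersed curve

# FFT deconvolution recovers the undispersed input function.
n = measured.size
recovered = np.fft.irfft(
    np.fft.rfft(measured, n) / np.fft.rfft(kernel, n), n
)[: t.size]

print(np.allclose(recovered, true_input, atol=1e-6))  # True
```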

  18. Multiple Δt strategy for particle image velocimetry (PIV) error correction, applied to a hot propulsive jet

    International Nuclear Information System (INIS)

    Nogueira, J; Lecuona, A; Nauri, S; Legrand, M; Rodríguez, P A

    2009-01-01

    PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ∼0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes
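A standard peak-locking diagnostic (general PIV practice, not this paper's specific procedure) is the histogram of the fractional part of the measured displacements, which is flat for unbiased data and clusters at integer pixel values when locking is present:

```python
import numpy as np

rng = np.random.default_rng(7)
true = rng.uniform(0.0, 4.0, 10000)                  # true displacements, px

# Toy peak-locking model: measurements pulled toward integer pixel
# positions (the 0.25 px amplitude is illustrative only).
measured = true - 0.25 * np.sin(2 * np.pi * true)

def locking_index(disp, bins=20):
    """Mean deviation of the fractional-displacement histogram from
    uniform; near 0 for unlocked data, larger under peak-locking."""
    frac = np.mod(disp, 1.0)
    hist, _ = np.histogram(frac, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.abs(hist - 1.0).mean())

print(locking_index(true) < locking_index(measured))  # True
```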

  19. Multiple Δt strategy for particle image velocimetry (PIV) error correction, applied to a hot propulsive jet

    Science.gov (United States)

    Nogueira, J.; Lecuona, A.; Nauri, S.; Legrand, M.; Rodríguez, P. A.

    2009-07-01

    PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ~0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes.

  20. Shipborne Wind Measurement and Motion-induced Error Correction of a Coherent Doppler Lidar over the Yellow Sea in 2014

    Science.gov (United States)

    Zhai, Xiaochun; Wu, Songhua; Liu, Bingyi; Song, Xiaoquan; Yin, Jiaping

    2018-03-01

    Shipborne wind observations by a coherent Doppler lidar (CDL) have been conducted to study the structure of the marine atmospheric boundary layer (MABL) during the 2014 Yellow Sea campaign. This paper evaluates uncertainties associated with the ship motion and presents the correction methodology for the lidar velocity measurement based on a modified four-beam Doppler beam swinging (DBS) solution. The errors of the calibrated measurements, both for the anchored and the cruising shipborne observations, are comparable to those of ground-based measurements. The comparison between the lidar and radiosonde results in a bias of -0.23 ms-1 and a standard deviation of 0.87 ms-1 for the wind speed measurement, and a bias of 2.48° and a standard deviation of 8.84° for the wind direction. The biases of horizontal wind speed and the random errors of vertical velocity are also estimated using error propagation theory and frequency spectrum analysis, respectively. The results show that the biases are mainly related to the measuring error of the ship velocity and the lidar pointing error, while the random errors are mainly determined by the signal-to-noise ratio (SNR) of the lidar backscattering spectrum signal. This allows for the retrieval of vertical wind, based on one measurement, with random error below 0.15 ms-1 for an appropriate SNR threshold and bias below 0.02 ms-1. The combination of the CDL attitude correction system and the accurate motion correction process has the potential for continuous long-term, high temporal and spatial resolution measurement of MABL thermodynamic and turbulence processes.
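The heart of the motion correction is removing the projection of the ship's velocity from each measured line-of-sight velocity; a geometry-only sketch (frames, angles, and values are invented for illustration):

```python
import numpy as np

def beam_unit(azimuth_deg, elevation_deg):
    """Unit vector of a lidar beam in an east-north-up frame."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az),    # east
                     np.cos(el) * np.cos(az),    # north
                     np.sin(el)])                # up

wind = np.array([3.0, 4.0, 0.0])     # true wind vector, m/s
ship = np.array([1.5, -0.5, 0.2])    # ship velocity in the same frame

b = beam_unit(azimuth_deg=45.0, elevation_deg=70.0)
v_measured = (wind - ship) @ b       # lidar sees air motion relative to ship
v_corrected = v_measured + ship @ b  # add back the ship's beam projection

print(np.isclose(v_corrected, wind @ b))  # True
```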

  1. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Full Text Available Abstract. Background: Cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and the correction action levels needed in CBCT-guided accelerated partial breast irradiation (APBI). Methods: Ten patients were enrolled in the prospective study of CBCT-guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results: A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery. 
Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10
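    Margins in studies like this are commonly derived from the systematic (Σ) and random (σ) errors with the van Herk recipe, M = 2.5Σ + 0.7σ. The sketch below applies that recipe to the post-correction RAIF errors quoted in the abstract; the paper's own margin formula is not stated, so these numbers are purely illustrative and need not reproduce its table.

    ```python
    def van_herk_margin(sigma_sys, sigma_rand):
        """CTV-to-PTV margin (mm) from the van Herk recipe: 2.5*Sigma + 0.7*sigma."""
        return 2.5 * sigma_sys + 0.7 * sigma_rand

    # Systematic/random RAIF errors (mm) after the 3 mm action level, per direction,
    # taken from the abstract above
    raif = {"LR": (2.5, 2.3), "SI": (2.3, 2.3), "AP": (2.3, 2.2)}
    margins = {d: round(van_herk_margin(s, r), 1) for d, (s, r) in raif.items()}
    ```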

  2. A comparative study of k-spectrum-based error correction methods for next-generation sequencing data analysis.

    Science.gov (United States)

    Akogwu, Isaac; Wang, Nan; Zhang, Chaoyang; Gong, Ping

    2016-07-25

    Innumerable opportunities for new genomic research have been stimulated by advancement in high-throughput next-generation sequencing (NGS). However, the pitfall of NGS data abundance is the complication of distinguishing true biological variants from sequence errors during downstream analysis. Many error correction methods have been developed to correct erroneous NGS reads before further analysis, but an independent evaluation of the impact of such dataset features as read length, genome size, and coverage depth on their performance is lacking. This comparative study aims to investigate the strengths, weaknesses, and limitations of some of the newest k-spectrum-based methods and to provide recommendations for users in selecting suitable methods for specific NGS datasets. Six k-spectrum-based methods, i.e., Reptile, Musket, Bless, Bloocoo, Lighter, and Trowel, were compared using six simulated sets of paired-end Illumina sequencing data. These NGS datasets varied in coverage depth (10× to 120×), read length (36 to 100 bp), and genome size (4.6 to 143 MB). The Error Correction Evaluation Toolkit (ECET) was employed to derive a suite of metrics (i.e., true positives, false positives, false negatives, recall, precision, gain, and F-score) for assessing the correction quality of each method. Results from computational experiments indicate that Musket had the best overall performance across the spectra of examined variants reflected in the six datasets. The lowest accuracy of Musket (F-score = 0.81) occurred for a dataset with a medium read length (56 bp), a medium coverage (50×), and a small-sized genome (5.4 MB). The other five methods underperformed (F-score < 0.81) on one or more of the datasets. Thus, care must be taken in choosing appropriate methods for error correction of specific NGS datasets. Based on our comparative study, we recommend Musket as the top choice because of its consistently superior performance across all six testing datasets.
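    The ECET-style metrics named in the abstract all derive from per-base counts of true positives (errors fixed), false positives (correct bases wrongly altered), and false negatives (errors left in place). A minimal sketch of how they relate:

    ```python
    def correction_metrics(tp, fp, fn):
        """ECET-style per-base metrics for read error correction.

        tp: true errors fixed; fp: correct bases wrongly altered;
        fn: true errors left uncorrected.
        """
        recall = tp / (tp + fn)        # fraction of true errors found
        precision = tp / (tp + fp)     # fraction of edits that were right
        f_score = 2 * precision * recall / (precision + recall)
        # gain = net fraction of errors removed; it goes negative when a
        # corrector introduces more new errors than it fixes
        gain = (tp - fp) / (tp + fn)
        return recall, precision, gain, f_score
    ```

    For example, a tool that fixes 90 of 100 true errors while corrupting 10 correct bases scores recall = precision = F-score = 0.9 but gain = 0.8, which is why gain is the stricter summary.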

  3. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.

  4. Standard errors and confidence intervals for correlations corrected for indirect range restriction: A simulation study comparing analytic and bootstrap methods.

    Science.gov (United States)

    Kennet-Cohen, Tamar; Kleper, Dvir; Turvall, Elliot

    2018-02-01

    A frequent topic of psychological research is the estimation of the correlation between two variables from a sample that underwent a selection process based on a third variable. Due to indirect range restriction, the sample correlation is a biased estimator of the population correlation, and a correction formula is used. In the past, bootstrap standard error and confidence intervals for the corrected correlations were examined with normal data. The present study proposes a large-sample estimate (an analytic method) for the standard error, and a corresponding confidence interval for the corrected correlation. Monte Carlo simulation studies involving both normal and non-normal data were conducted to examine the empirical performance of the bootstrap and analytic methods. Results indicated that with both normal and non-normal data, the bootstrap standard error and confidence interval were generally accurate across simulation conditions (restricted sample size, selection ratio, and population correlations) and outperformed estimates of the analytic method. However, with certain combinations of distribution type and model conditions, the analytic method has an advantage, offering reasonable estimates of the standard error and confidence interval without resorting to the bootstrap procedure's computer-intensive approach. We provide SAS code for the simulation studies. © 2017 The British Psychological Society.
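    The paper's analytic standard-error formula is specific to the indirect-restriction correction, but the bootstrap arm it is compared against can be sketched generically. For brevity this sketch uses the simpler direct-restriction (Thorndike Case 2) correction in place of the indirect-restriction formula, and the population correlation (0.5), selection rule, and sample sizes are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def correct_range_restriction(r, sd_unrestricted, sd_restricted):
        """Thorndike Case 2 correction for range restriction (direct-restriction
        form, used here as a stand-in; the bootstrap treatment is the same)."""
        u = sd_unrestricted / sd_restricted
        return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

    # Simulated population with rho = 0.5; the sample is selected on x > 0
    n = 20000
    x, e = rng.standard_normal(n), rng.standard_normal(n)
    y = 0.5 * x + np.sqrt(1 - 0.25) * e
    sd_x_pop = x.std()                      # unrestricted SD, assumed known
    keep = x > 0
    xs, ys = x[keep], y[keep]

    def corrected_r(xb, yb):
        r = np.corrcoef(xb, yb)[0, 1]
        return correct_range_restriction(r, sd_x_pop, xb.std())

    # Nonparametric bootstrap of the corrected correlation
    idx = np.arange(xs.size)
    boot = np.array([corrected_r(xs[b], ys[b])
                     for b in (rng.choice(idx, idx.size) for _ in range(200))])
    se = boot.std(ddof=1)
    ci = np.percentile(boot, [2.5, 97.5])
    ```

    The correction inflates the restricted correlation back toward the population value, and the bootstrap percentiles give the interval whose coverage the paper evaluates.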

  5. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors.

    Directory of Open Access Journals (Sweden)

    Zhuo Zhang

    Full Text Available In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors, and associated correction methods is much more sparse. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show our proposed methods provide much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters.

  6. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors

    Science.gov (United States)

    Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce A.; Degteva, Marina; Moroz, Brian; Vostrotin, Vadim; Shiskina, Elena; Birchall, Alan; Stram, Daniel O.

    2017-01-01

    In epidemiological studies, exposures of interest are often measured with uncertainties, which may be independent or correlated. Independent errors can often be characterized relatively easily while correlated measurement errors have shared and hierarchical components that complicate the description of their structure. For some important studies, Monte Carlo dosimetry systems that provide multiple realizations of exposure estimates have been used to represent such complex error structures. While the effects of independent measurement errors on parameter estimation and methods to correct these effects have been studied comprehensively in the epidemiological literature, the literature on the effects of correlated errors, and associated correction methods is much more sparse. In this paper, we implement a novel method that calculates corrected confidence intervals based on the approximate asymptotic distribution of parameter estimates in linear excess relative risk (ERR) models. These models are widely used in survival analysis, particularly in radiation epidemiology. Specifically, for the dose effect estimate of interest (increase in relative risk per unit dose), a mixture distribution consisting of a normal and a lognormal component is applied. This choice of asymptotic approximation guarantees that corrected confidence intervals will always be bounded, a result which does not hold under a normal approximation. A simulation study was conducted to evaluate the proposed method in survival analysis using a realistic ERR model. We used both simulated Monte Carlo dosimetry systems (MCDS) and actual dose histories from the Mayak Worker Dosimetry System 2013, a MCDS for plutonium exposures in the Mayak Worker Cohort. Results show our proposed methods provide much improved coverage probabilities for the dose effect parameter, and noticeable improvements for other model parameters. PMID:28369141

  7. Evaluation of different set-up error corrections on dose-volume metrics in prostate IMRT using CBCT images

    International Nuclear Information System (INIS)

    Hirose, Yoshinori; Tomita, Tsuneyuki; Kitsuda, Kenji; Notogawa, Takuya; Miki, Katsuhito; Nakamura, Mitsuhiro; Nakamura, Kiyonao; Ishigaki, Takashi

    2014-01-01

    We investigated the effect of different set-up error corrections on dose-volume metrics in intensity-modulated radiotherapy (IMRT) for prostate cancer under different planning target volume (PTV) margin settings using cone-beam computed tomography (CBCT) images. A total of 30 consecutive patients who underwent IMRT for prostate cancer were retrospectively analysed, and 7-14 CBCT datasets were acquired per patient. Interfractional variations in dose-volume metrics were evaluated under six different set-up error corrections, including tattoo, bony anatomy, and four different target matching groups. Set-up errors were incorporated into the planning isocenter position, and dose distributions were recalculated on CBCT images. These processes were repeated under two different PTV margin settings. In the on-line bony anatomy matching groups, the systematic error (Σ) was 0.3 mm, 1.4 mm, and 0.3 mm in the left-right, anterior-posterior (AP), and superior-inferior directions, respectively. Σ in three successive off-line target matchings was finally comparable with that in the on-line bony anatomy matching in the AP direction. Although doses to the rectum and bladder wall were reduced for a small PTV margin, the averaged reductions in the volume receiving 100% of the prescription dose from planning were within 2.5% under all PTV margin settings for all correction groups, with the exception of the tattoo set-up error correction (≥ 5.0%). Analysis of variance showed no significant difference between on-line bony anatomy matching and target matching. While variations between the planned and delivered doses were smallest when target matching was applied, the use of bony anatomy matching still ensured the planned doses. (author)

  8. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large and thus in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].

  9. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than measurement by single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of the measurement error is assumed to be known or is estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
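    The paper's corrected LASSO is more elaborate, but the underlying idea — use re-measurements on a validation subset to estimate the error variance, then undo the attenuation it causes — can be sketched in one dimension. Everything below (error variance 0.49, slope 2, subset size) is a simulation assumption, not from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.standard_normal(n)            # true biomarker level
    y = 2.0 * x + rng.standard_normal(n)  # outcome; true slope = 2
    w = x + rng.normal(0, 0.7, n)         # error-prone multiplex measurement

    # Validation subset: re-measure a random 10% with the same noisy assay.
    val = rng.choice(n, 500, replace=False)
    w2 = x[val] + rng.normal(0, 0.7, 500)
    # Each replicate = truth + independent error, so half the variance of the
    # replicate difference estimates the measurement error variance.
    sigma2_err = np.var(w[val] - w2, ddof=1) / 2

    # Naive slope is attenuated by the reliability ratio; divide it back out.
    beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
    reliability = (np.var(w, ddof=1) - sigma2_err) / np.var(w, ddof=1)
    beta_corrected = beta_naive / reliability
    ```

    The same reliability-style correction, applied coordinate-wise to the Gram matrix, is what makes the measurement-error-corrected LASSO recover coefficients the naive fit shrinks toward zero.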

  10. Precise method of compensating radiation-induced errors in a hot-cathode-ionization gauge with correcting electrode

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, Hiroshi, E-mail: saeki@spring8.or.jp; Magome, Tamotsu, E-mail: saeki@spring8.or.jp [Japan Synchrotron Radiation Research Institute, SPring-8, Kohto 1-1-1, Sayo, Hyogo 679-5198 (Japan)

    2014-10-06

    To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This method improves measurement accuracy by correctly reducing the pressure-measurement errors caused by electrons originating from the external environment and from the primary gauge filament as influenced by the spatial conditions of the installed vacuum-gauge head. As a result of the simulation experiment to confirm the performance in reducing the errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10⁻⁵ Pa to 10⁻⁸ Pa. After this experiment, to confirm the performance in reducing the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.

  11. Teaching Spoken Spanish

    Science.gov (United States)

    Lipski, John M.

    1976-01-01

    The need to teach students speaking skills in Spanish, and to choose among the many standard dialects spoken in the Hispanic world (as well as literary and colloquial speech), presents a challenge to the Spanish teacher. Some phonetic considerations helpful in solving these problems are offered. (CHK)

  12. Teaching the Spoken Language.

    Science.gov (United States)

    Brown, Gillian

    1981-01-01

    Issues involved in teaching and assessing communicative competence are identified and applied to adolescent native English speakers with low levels of academic achievement. A distinction is drawn between transactional versus interactional speech, short versus long speaking turns, and spoken language influenced or not influenced by written…

  13. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    Science.gov (United States)

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  14. Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector.

    Science.gov (United States)

    Zhang, Song; Yau, Shing-Tung

    2007-01-01

    A structured light system using a digital video projector is widely used for 3D shape measurement. However, the nonlinear gamma of the projector causes the projected fringe patterns to be nonsinusoidal, which results in phase error and therefore measurement error. It has been shown that, by using a small look-up table (LUT), this type of phase error can be reduced significantly for a three-step phase-shifting algorithm. We prove that this approach is generic for any phase-shifting algorithm. Moreover, we propose a new LUT generation method that analyzes the captured fringe image of a flat board directly. Experiments show that this error compensation algorithm can reduce the phase error by a factor of at least 13.
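    The flat-board LUT idea can be sketched end to end: simulate gamma-distorted three-step fringes, tabulate the resulting periodic phase error against the measured phase, then subtract the tabulated error. The gamma value (2.2), bin count, and sample counts are assumptions for illustration, not the paper's parameters.

    ```python
    import numpy as np

    def three_step_phase(i1, i2, i3):
        """Wrapped phase from three fringe patterns shifted by 0, 2pi/3, 4pi/3."""
        return np.arctan2(np.sqrt(3) * (i3 - i2), 2 * i1 - i2 - i3)

    # Simulated flat-board calibration: every phase value appears once, and the
    # projector's nonlinear gamma (2.2 assumed) distorts the sinusoidal fringes.
    gamma = 2.2
    phi_true = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
    I = [(0.5 + 0.5 * np.cos(phi_true + 2 * np.pi * k / 3)) ** gamma
         for k in range(3)]

    phi_meas = three_step_phase(*I)
    err = np.angle(np.exp(1j * (phi_meas - phi_true)))   # wrapped phase error

    # Build a small LUT: mean phase error within each bin of measured phase.
    nbins = 256
    bins = ((phi_meas + np.pi) / (2 * np.pi) * nbins).astype(int) % nbins
    lut = np.zeros(nbins)
    for b in range(nbins):
        sel = bins == b
        if sel.any():
            lut[b] = err[sel].mean()

    # Correction: subtract the tabulated error for the measured-phase bin.
    phi_corr = np.angle(np.exp(1j * (phi_meas - lut[bins])))
    ```

    Because the gamma-induced error is a periodic function of the phase itself, a one-dimensional LUT indexed by measured phase removes most of it, which is why such a small table suffices in practice.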

  15. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error, which can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series for the period 1980 to 1998 and to assess this error; accounting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors in the data introduced by the drift in the satellite orbital geometry; these arise from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. 
In one path the

  16. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2017-11-29

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
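    The paper extends SIMEX to hazard ratios from the Cox model; the core SIMEX mechanics — re-fit with progressively more added noise, then extrapolate the parameter back to the no-error level λ = −1 — can be sketched on a plain linear slope. The error variance is treated as known here, and all numeric settings are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 4000
    x = rng.standard_normal(n)
    y = 1.5 * x + rng.standard_normal(n)       # true slope = 1.5
    sigma_u = 0.8                              # measurement error SD, assumed known
    w = x + rng.normal(0, sigma_u, n)          # error-prone covariate

    def slope(wv, yv):
        return np.cov(wv, yv)[0, 1] / np.var(wv, ddof=1)

    # SIMEX: add extra noise of variance lambda * sigma_u^2, average many fits,
    # then extrapolate the slope-vs-lambda curve back to lambda = -1.
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    means = []
    for lam in lambdas:
        fits = [slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
                for _ in range(50)]
        means.append(np.mean(fits))
    coef = np.polyfit(lambdas, means, 2)       # quadratic extrapolant
    beta_simex = np.polyval(coef, -1.0)
    beta_naive = slope(w, y)
    ```

    The naive slope is attenuated by roughly 1/(1 + σ_u²); the quadratic extrapolation removes most, though not all, of that bias, which is the usual trade-off of parametric SIMEX extrapolants.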

  17. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

    Full text: Complex electronic systems require the utmost reliability; when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible, and extra effort must be expended to achieve it. Fortunately, not all systems must operate with these ultra-reliability requirements: the majority of systems operate in an area where system failure is not hazardous. But applications like nuclear reactors, medical devices, and avionics are areas where system failure may have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation, etc. These errors are called soft errors or transient errors, since they do not cause permanent damage to the memory cell. Hard errors may also occur on system memory boards; these occur if a RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories.
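    EDAC controllers for 16-bit memories typically implement a SECDED (single-error-correct, double-error-detect) Hamming code; the record does not give the exact code used, so the sketch below shows the standard Hamming(21,16) code plus an overall parity bit as a software model of what the FPGA logic would do in parallel.

    ```python
    def secded_encode(word16):
        """Encode 16 data bits into a 22-bit SECDED word (list of 0/1).

        Index 0 holds the overall parity bit; indices 1..21 are Hamming
        positions, with parity bits at the powers of two (1, 2, 4, 8, 16).
        """
        assert 0 <= word16 < (1 << 16)
        code = [0] * 22
        data_positions = [p for p in range(1, 22) if p & (p - 1)]  # non powers of 2
        for i, pos in enumerate(data_positions):
            code[pos] = (word16 >> i) & 1
        for r in (1, 2, 4, 8, 16):
            parity = 0
            for pos in range(1, 22):
                if pos & r:
                    parity ^= code[pos]
            code[r] = parity            # makes each parity group XOR to zero
        for pos in range(1, 22):        # overall parity over the Hamming word
            code[0] ^= code[pos]
        return code

    def secded_decode(bits):
        """Decode a 22-bit SECDED word in place. Returns (data, status)."""
        syndrome = 0
        for pos in range(1, 22):
            if bits[pos]:
                syndrome ^= pos         # XOR of positions of set bits
        overall = 0
        for b in bits:
            overall ^= b
        if syndrome and not overall:
            return None, "double"       # two-bit error: detected, uncorrectable
        if overall:                     # exactly one bit wrong; syndrome locates it
            bits[syndrome if syndrome else 0] ^= 1
            status = "corrected"
        else:
            status = "ok"
        data_positions = [p for p in range(1, 22) if p & (p - 1)]
        word = 0
        for i, pos in enumerate(data_positions):
            word |= bits[pos] << i
        return word, status
    ```

    Any single flipped bit (data or parity) is corrected from the syndrome, while any two flips leave the overall parity clean but the syndrome nonzero, flagging an uncorrectable word — exactly the soft-error behavior a memory EDAC controller must handle.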

  18. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly compared to type II error, (2) 22 ratios distinguish between bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year compared to prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved to be able to minimize the cost incurred from classification errors.
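    In Altman's ZETA analysis the cost-minimizing cut-off is ln(q₁C_I / (q₂C_II)), where q₁, q₂ are the prior probabilities of bankruptcy and non-bankruptcy and C_I, C_II the misclassification costs. A minimal sketch using the 59.83 cost ratio reported above; the priors are assumed for illustration, and the classification side of the cut-off depends on the sign convention of the discriminant model.

    ```python
    import math

    def zeta_cutoff(q_bankrupt, q_healthy, cost_type_I, cost_type_II):
        """ZETA-style optimal cut-off score: ln(q1*C_I / (q2*C_II))."""
        return math.log((q_bankrupt * cost_type_I) / (q_healthy * cost_type_II))

    # Cost ratio C_I/C_II = 59.83 from the study; 5% bankruptcy prior is assumed
    cut = zeta_cutoff(q_bankrupt=0.05, q_healthy=0.95,
                      cost_type_I=59.83, cost_type_II=1.0)
    ```

    With equal priors and equal costs the cut-off reduces to 0, which is why ignoring the 59.83:1 cost asymmetry (as a naive optimum cutting score does) under-flags bankruptcies.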

  19. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  20. Hybrid DNA and Enzyme Based Computing for Address Encoding, Link Switching and Error Correction in Molecular Communication

    Science.gov (United States)

    Walsh, Frank; Balasubramaniam, Sasitharan; Botvich, Dmitri; Suda, Tatsuya; Nakano, Tadashi; Bush, Stephen F.; Foghlú, Mícheál Ó.

    This paper proposes a biological cell-based communication protocol to enable communication between biological nanodevices. Inspired by existing communication network protocols, our solution combines two molecular computing techniques (DNA and enzyme computing), to design a protocol stack for molecular communication networks. Based on computational requirements of each layer of the stack, our solution specifies biomolecule address encoding/decoding, error correction and link switching mechanisms for molecular communication networks.

  1. Volume of eggs in the clutches of Grass snake Natrix natrix and Dice snake N. tessellata: error correction

    Directory of Open Access Journals (Sweden)

    Klenina Anastasiya Aleksandrovna

    2015-12-01

    Full Text Available The authors made a mistake in calculating the volume of eggs in the clutches of snakes of the genus Natrix. In this article we correct the error. As a result, it was revealed that the volume of the eggs correlates positively with female length and mass, as well as with the number of eggs in the clutch. There is also a positive correlation between the characteristics of newborn snakes (length and mass) and the volume of the eggs from which they hatched.

  2. 5 CFR 839.1304 - Is there anything else I can do if I am not satisfied with the way my error was corrected?

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Is there anything else I can do if I am not satisfied with the way my error was corrected? 839.1304 Section 839.1304 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) CORRECTION OF RETIREMENT COVERAGE ERRORS UNDER THE FEDERAL ERRONEOU...

  3. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    Science.gov (United States)

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…

  4. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    Science.gov (United States)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.

  5. The correction of linear lattice gradient errors using an AC dipole

    Energy Technology Data Exchange (ETDEWEB)

    Wang,G.; Bai, M.; Litvinenko, V.N.; Satogata, T.

    2009-05-04

    Precise measurement of optics from coherent betatron oscillations driven by ac dipoles has been demonstrated at RHIC and the Tevatron. For RHIC, the observed rms beta-beat is about 10%. Reduction of beta-beating is an essential component of performance optimization at high energy colliders. A scheme of optics correction was developed and tested in the RHIC 2008 run, using ac dipole optics for measurement and a few adjustable trim quadrupoles for correction. In this scheme, we first calculate the phase response matrix from the measured phase advance, and then apply the singular value decomposition (SVD) algorithm to the phase response matrix to find the correction quadrupole strengths. We present both simulation and some preliminary experimental results of this correction.
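    The correction step described above, inverting a phase response matrix to obtain trim-quadrupole strengths, amounts to an SVD-based least-squares solve. A minimal sketch with an invented 3x2 response matrix (not RHIC data; the matrix and error values are placeholders):

```python
import numpy as np

# Hypothetical phase response matrix R: change in phase-advance error at each
# measurement point (rows) per unit strength of each trim quadrupole (columns).
R = np.array([[0.8, 0.1],
              [0.2, 0.9],
              [0.5, 0.4]])
measured_phase_error = np.array([0.05, -0.03, 0.01])

# SVD least-squares solution for strengths that cancel the measured error:
# R @ dk = -error  =>  dk = -pinv(R) @ error.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
# Zero out near-singular modes so noise in the measurement is not amplified.
s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)
dk = -(Vt.T * s_inv) @ (U.T @ measured_phase_error)

residual = measured_phase_error + R @ dk
print(dk, np.linalg.norm(residual))
```

The singular-value cutoff is the standard regularization knob in such orbit/optics correction schemes: dropping small singular values trades a slightly larger residual for much smaller corrector strengths.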

  6. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)

  7. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    DEFF Research Database (Denmark)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc

    2012-01-01

    already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants...... emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy....

  8. Accessing the spoken word

    OpenAIRE

    Goldman, Jerry; Renals, Steve; Bird, Steven; de Jong, Franciska; Federico, Marcello; Fleischhauer, Carl; Kornbluh, Mark; Lamel, Lori; Oard, Douglas W; Stewart, Claire; Wright, Richard

    2005-01-01

    Spoken-word audio collections cover many domains, including radio and television broadcasts, oral narratives, governmental proceedings, lectures, and telephone conversations. The collection, access, and preservation of such data is stimulated by political, economic, cultural, and educational needs. This paper outlines the major issues in the field, reviews the current state of technology, examines the rapidly changing policy issues relating to privacy and copyright, and presents issues relati...

  9. ANALISIS PENGARUH SUKU BUNGA, PENDAPATAN NASIONAL DAN INFLASI TERHADAP NILAI TUKAR NOMINAL : PENDEKATAN DENGAN COINTEGRATION DAN ERROR CORRECTION MODEL (ECM

    Directory of Open Access Journals (Sweden)

    Roosaleh Laksono T.Y.

    2016-04-01

    Full Text Available Abstract. This study aims to analyze the effect of the interest rate, inflation, and national income on the rupiah exchange rate against the dollar, covering both the long-run equilibrium relationship and the short-run balance, using empirical secondary data from 1980-2015 (36 years). The research method used is multiple linear regression (OLS), approached with cointegration and an error correction model (ECM) after first passing several other stages of statistical testing. The results of the cointegration analysis (Johansen cointegration test) indicate that all the independent variables (inflation, national income, and the interest rate) and the dependent variable (the exchange rate) have a long-run equilibrium relationship, as evidenced by the trace statistic of 102.1727, which is much greater than the 5% critical value of 47.85613. In addition, the maximum eigenvalue statistic of 36.7908 is greater than the 5% critical value of 27.584434. In the error correction model (ECM) test, only the inflation, interest rate, and residual terms are significant, while national income is not. This means that the inflation and interest rate variables have a short-run relationship with the exchange rate, as seen from the probability (Prob.) value of each variable being below 0.05 (5%); moreover, the residual coefficient in the ECM test is -0.732447, showing that the error correction term is 73.24% and significant. Keywords: Interest rate; National income; Inflation; Exchange rate; Cointegration; Error Correction Model.
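    The error correction model used in this study can be illustrated with a minimal two-step Engle-Granger sketch on synthetic data (numpy only; the paper itself uses the Johansen test and macroeconomic series, so this is a toy analogue, not a reproduction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))    # an I(1) driver, e.g. an interest rate
y = 2.0 * x + rng.normal(size=n)     # cointegrated with x

# Step 1: long-run (cointegrating) regression y_t = a + b*x_t + u_t.
X = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - (a + b * x)                  # error correction term (residual)

# Step 2: short-run dynamics, regressing dy on dx and the lagged residual.
dy, dx, u_lag = np.diff(y), np.diff(x), u[:-1]
Z = np.column_stack([np.ones(n - 1), dx, u_lag])
const, gamma, alpha = np.linalg.lstsq(Z, dy, rcond=None)[0]

# alpha should be negative: deviations from equilibrium are corrected over time,
# mirroring the significant negative residual coefficient reported above.
print(f"long-run slope b = {b:.2f}, adjustment speed alpha = {alpha:.2f}")
```

A negative, significant adjustment coefficient (here `alpha`) is exactly what the abstract's residual coefficient of -0.732447 represents: about 73% of a deviation from the long-run relationship is corrected per period.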

  10. Error-correction learning for artificial neural networks using the Bayesian paradigm. Application to automated medical diagnosis.

    Science.gov (United States)

    Belciug, Smaranda; Gorunescu, Florin

    2014-12-01

    Automated medical diagnosis models are now ubiquitous, and research for developing new ones is constantly growing. They play an important role in medical decision-making, helping physicians to provide a fast and accurate diagnosis. Due to their adaptive learning and nonlinear mapping properties, artificial neural networks are widely used to support human decision capabilities, avoiding variability in practice and errors based on lack of experience. Among the most common learning approaches, one can mention either the classical back-propagation algorithm based on the partial derivatives of the error function with respect to the weights, or the Bayesian learning method based on the posterior probability distribution of weights, given training data. This paper proposes a novel training technique gathering together error-correction learning, the posterior probability distribution of weights given the error function, and the Goodman-Kruskal Gamma rank correlation, assembling them in a Bayesian learning strategy. This study had two main purposes: firstly, to develop a novel learning technique based on both the Bayesian paradigm and error back-propagation, and secondly, to assess its effectiveness. The proposed model's performance is compared with those obtained by traditional machine learning algorithms using real-life breast and lung cancer, diabetes, and heart attack medical databases. Overall, the statistical comparison results indicate that the novel learning approach outperforms the conventional techniques in almost all respects. Copyright © 2014 Elsevier Inc. All rights reserved.
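    Classical error-correction learning, one ingredient the paper combines with a Bayesian weight treatment, reduces in its simplest form to the delta rule: adjust each weight against the gradient of the squared error. A minimal single-neuron sketch (illustrative only, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                       # noiseless linear target

w = np.zeros(3)
lr = 0.05
for _ in range(500):
    pred = X @ w
    err = pred - y                   # the error signal driving the correction
    w -= lr * X.T @ err / len(X)     # delta rule: gradient of mean squared error

print(np.round(w, 2))                # approaches true_w = [1.5, -2.0, 0.5]
```

Back-propagation generalizes this same error-correction step through multiple layers via the chain rule; the paper's contribution is replacing the point estimate of the weights with a posterior distribution.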

  11. Low delay and area efficient soft error correction in arbitration logic

    Science.gov (United States)

    Sugawara, Yutaka

    2013-09-10

    There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.

  12. Correcting errors in a quantum gate with pushed ions via optimal control

    DEFF Research Database (Denmark)

    Poulsen, Uffe Vestergaard; Sklarz, Shlomo; Tannor, David

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity.

  13. Nonquarterwave multilayer filters: optical monitoring with a minicomputer allowing correction of thickness errors.

    Science.gov (United States)

    Vidal, B; Pelletier, E

    1979-11-15

    Many spectral filtering problems require assemblies of layers having thicknesses that bear no obvious relationship to one another. After a brief review of the optical methods used to monitor deposition of multilayers containing nonintegral thicknesses, we show that the performance of monitoring systems can be improved further by including the real-time calculation of any necessary layer thickness changes that may be required to compensate any errors that might still occur. The apparatus described consists of a minicomputer coupled to a rapid-scanning spectrometer. Such a procedure working in real time avoids the cumulative effects of successive errors. The technique is demonstrated in the production of a beam splitter.

  14. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys-Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as 'a golden standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. The Role of Extensive Recasts in Error Detection and Correction by Adult ESL Students

    Science.gov (United States)

    Hawkes, Laura; Nassaji, Hossein

    2016-01-01

    Most of the laboratory studies on recasts have examined the role of intensive recasts provided repeatedly on the same target structure. This is different from the original definition of recasts as the reformulation of learner errors as they occur naturally and spontaneously in the course of communicative interaction. Using a within-group research…

  16. Correcting the Errors in the Writing of University Students in the Comfortable Atmosphere

    Science.gov (United States)

    Lu, Tuanhua

    2010-01-01

    This paper analyzed the common errors in university students' writing. At the same time, it showed some methods based on activities designed to give students practice in these problem areas. The activities are meant to be carried out in a comfortable, non-threatening atmosphere in which students can make positive steps toward reducing their errors…

  17. Error correction in bimanual coordination benefits from bilateral muscle activity: evidence from kinesthetic tracking

    NARCIS (Netherlands)

    Ridderikhoff, A.; Peper, C.E.; Beek, P.J.

    2007-01-01

    Although previous studies indicated that the stability properties of interlimb coordination largely result from the integrated timing of efferent signals to both limbs, they also depend on afference-based interactions. In the present study, we examined contributions of afference-based error

  18. Covariate Measurement Error Correction for Student Growth Percentiles Using the SIMEX Method

    Science.gov (United States)

    Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W.

    2015-01-01

    In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and find that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…
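    SIMEX, the method named in this study's title, corrects for covariate measurement error by deliberately adding extra noise at increasing levels λ, re-fitting at each level, and extrapolating the fitted coefficient back to λ = -1 (the no-error case). A toy sketch for a simple-regression slope (synthetic data with an assumed known error SD, not the SGP application):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_u = 5000, 0.8            # sigma_u: assumed known measurement-error SD
x = rng.normal(size=n)            # true covariate
w = x + rng.normal(scale=sigma_u, size=n)   # observed, error-prone covariate
y = 2.0 * x + rng.normal(scale=0.5, size=n)

def slope(a, b):
    return np.polyfit(a, b, 1)[0]

# Fit at increasing levels of extra simulated error (lam = 0 is the naive fit,
# which is attenuated toward zero by the measurement error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    reps = [slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
            for _ in range(50)]
    est.append(np.mean(reps))

# Quadratic extrapolation of slope(lambda) back to lambda = -1.
coef = np.polyfit(lambdas, est, 2)
simex_slope = np.polyval(coef, -1.0)
print(est[0], simex_slope)        # naive vs SIMEX-corrected estimate
```

The extrapolated value moves the attenuated naive slope back toward the true value of 2.0; the choice of extrapolant (quadratic here) is the main tuning decision in SIMEX.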

  19. Learning Correct Responses and Errors in the Hebb Repetition Effect: Two Faces of the Same Coin

    Science.gov (United States)

    Couture, Mathieu; Lafond, Daniel; Tremblay, Sebastien

    2008-01-01

    In a serial recall task, the "Hebb repetition effect" occurs when recall performance improves for a sequence repeated throughout the experimental session. This phenomenon has been replicated many times. Nevertheless, such cumulative learning seldom leads to perfect recall of the whole sequence, and errors persist. Here the authors report…

  20. Early insulin response and insulin sensitivity are equally important as predictors of glucose tolerance after correction for measurement errors.

    Science.gov (United States)

    Berglund, Lars; Berne, Christian; Svärdsudd, Kurt; Garmo, Hans; Zethelius, Björn

    2009-12-01

    We estimated measurement error (ME) corrected effects of insulin sensitivity (M/I), from the euglycaemic insulin clamp, and insulin secretion, measured as the early insulin response (EIR) from an oral glucose tolerance test (OGTT), on fasting plasma glucose, HbA1c and type 2 diabetes, both longitudinally and cross-sectionally. In a population-based study (n=1128 men), 17 men made replicate measurements to estimate ME at age 71 years. The effect of a 1 SD decrease of the predictors M/I and EIR on the longitudinal response variables fasting plasma glucose (FPG) and HbA1c at follow-ups up to 11 years was estimated using uncorrected and ME-corrected (with the regression calibration method) regression models. The uncorrected effect on FPG at age 77 years was larger for M/I than for EIR (effect difference 0.10 mmol/l, 95% CI 0.00; 0.21), while ME-corrected effects were similar (0.02 mmol/l, 95% CI -0.13; 0.15 mmol/l). EIR had a greater ME-corrected impact than M/I on HbA1c at age 82 years (-0.11%, -0.28; -0.01%). Due to its higher ME, the effect of EIR on glycaemia is underestimated compared with M/I. By correcting for ME, valid estimates of the relative contributions of insulin secretion and insulin sensitivity to glycaemia are obtained.

  1. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P-band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
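    The dual-frequency combination underlying this technique is standard: ionospheric group delay lengthens a measured range by approximately 40.3·TEC/f² metres, so two range measurements at different frequencies determine both the true range and the total electron content (TEC) along the path. A sketch with made-up numbers (the frequencies and TEC value are illustrative, not from the paper):

```python
# Ionospheric group delay adds about K * TEC / f^2 metres to a measured range.
K = 40.3                      # standard first-order ionospheric constant

def correct_range(r1, r2, f1, f2):
    """Dual-frequency ionosphere-free range and slant TEC (first order)."""
    r_true = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    tec = (r1 - r_true) * f1**2 / K   # electrons per m^2 along the path
    return r_true, tec

# Hypothetical example: a 1000 km target seen through TEC = 5e17 el/m^2.
f1, f2 = 430e6, 440e6         # two adjacent P-band frequencies (Hz)
r, tec = 1.0e6, 5e17
r1 = r + K * tec / f1**2      # simulated measured range at f1 (metres)
r2 = r + K * tec / f2**2      # simulated measured range at f2

print(correct_range(r1, r2, f1, f2))  # recovers the true range and TEC
```

Because the two frequencies are adjacent, the denominator f1² - f2² is small and the combination amplifies range-measurement noise; this is the practical trade-off the paper's accuracy analysis addresses.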

  2. Correction of longitudinal errors in accelerators for heavy-ion fusion

    International Nuclear Information System (INIS)

    Sharp, W.M.; Callahan, D.A.; Barnard, J.J.; Langdon, A.B.; Fessenden, T.J.

    1993-01-01

    Longitudinal space-charge waves develop on a heavy-ion inertial-fusion pulse from initial mismatches or from inappropriately timed or shaped accelerating voltages. Without correction, waves moving backward along the beam can grow due to the interaction with their resistivity retarded image fields, eventually degrading the longitudinal emittance. A simple correction algorithm is presented here that uses a time-dependent axial electric field to reverse the direction of backward-moving waves. The image fields then damp these forward-moving waves. The method is demonstrated by fluid simulations of an idealized inertial-fusion driver, and practical problems in implementing the algorithm are discussed

  3. Diagnosing and Correcting Mass Accuracy and Signal Intensity Error Due to Initial Ion Position Variations in a MALDI TOFMS

    Science.gov (United States)

    Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.

    2017-12-01

    Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.

  4. Evidence on the Effectiveness of Comprehensive Error Correction in Second Language Writing

    Science.gov (United States)

    Van Beuningen, Catherine G.; De Jong, Nivja H.; Kuiken, Folkert

    2012-01-01

    This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners' written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In addition, we tested Truscott's (e.g., 2001, 2007) claims…

  5. Evidence on the effectiveness of comprehensive error correction in second language writing

    NARCIS (Netherlands)

    van Beuningen, C.G.; de Jong, N.H.; Kuiken, F.

    2012-01-01

    This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners’ written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In

  6. On the Effects of Error Correction Strategies on the Grammatical Accuracy of the Iranian English Learners

    Science.gov (United States)

    Aliakbari, Mohammad; Toni, Arman

    2009-01-01

    Writing, as a productive skill, requires an accurate in-depth knowledge of the grammar system, language form and sentence structure. The emphasis on accuracy is justified in the sense that it can lead to the production of structurally correct instances of second language, and to prevent inaccuracy that may result in the production of structurally…

  7. 48 CFR 22.404-7 - Correction of wage determinations containing clerical errors.

    Science.gov (United States)

    2010-10-01

    ...), except that for contract modifications to exercise an option to extend the term of the contract, the... FEDERAL ACQUISITION REGULATION SOCIOECONOMIC PROGRAMS APPLICATION OF LABOR LAWS TO GOVERNMENT ACQUISITIONS Labor Standards for Contracts Involving Construction 22.404-7 Correction of wage determinations...

  8. FPGA based Novel High Speed DAQ System Design with Error Correction

    OpenAIRE

    Mandal, Swagata; Sau, Suman; Chakrabarti, Amlan; Saini, Jogendra; Pal, Sushanta Kumar; Chattopadhyay, Subhasish

    2015-01-01

    Present state of the art applications in the area of high energy physics experiments (HEP), radar communication, satellite communication and bio medical instrumentation require fault resilient data acquisition (DAQ) system with the data rate in the order of Gbps. In order to keep the high speed DAQ system functional in such radiation environment where direct intervention of human is not possible, a robust and error free communication system is necessary. In this work we present an efficient D...

  9. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  10. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  11. POSSIBILITIES TO CORRECT ACCOUNTING ERRORS IN THE CONTEXT OF COMPLYING WITH THE OPENING BALANCE SHEET INTANGIBILITY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    PALIU – POPA LUCIA

    2017-12-01

    Full Text Available There are still different views on the intangibility of the opening balance sheet at the global level in the process of accounting convergence and harmonization. We find a clear difference between the Anglo-Saxon accounting system and the continental system of Western European influence, in the sense that the former is less rigid in applying the principle of intangibility, whereas systems of continental inspiration apply the provisions of this principle in their entirety. Looking from this perspective, and taking into account the major importance of the financial statements, which are intended to provide information for all categories of users, i.e. both for managers and for users external to the entity whose position does not allow them to request specific reports, we considered it useful to conduct a study aimed at correcting errors in the context of compliance with the opening balance sheet intangibility principle versus the need to adjust the comparative information on the financial position, financial performance and change in the financial position generated by the correction of errors from previous years. In this regard, we perform a comparative analysis of the application of the intangibility principle both in the two major accounting systems and at the international level, and we approach issues related to the correction of errors in terms of the main differences between the provisions of the continental accounting regulations (represented by the European and national ones in our approach), the Anglo-Saxon regulations, and those of the international referential on opening balance sheet intangibility.

  12. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, meaning plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural networks that control error commission, detection and correction. Here we review recent works which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  13. Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.

    Science.gov (United States)

    Mayer, István; Bakó, Imre

    2017-05-09

    The problem of performing many-body decompositions of energy is considered in the case when BSSE corrections are also performed. It is discussed that the two different schemes that have been proposed go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that from the physical point of view the "hierarchical" scheme of Valiron and Mayer should be preferred and not the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of other subsystems in the cluster.

  14. SPOKEN CORPORA: RATIONALE AND APPLICATION

    Directory of Open Access Journals (Sweden)

    John Newman

    2008-12-01

    Full Text Available Despite the abundance of electronic corpora now available to researchers, corpora of natural speech are still relatively rare and relatively costly. This paper suggests reasons why spoken corpora are needed, despite the formidable problems of construction. The multiple purposes of such corpora and the involvement of very different kinds of language communities in such projects mean that there is no one single blueprint for the design, markup, and distribution of spoken corpora. A number of different spoken corpora are reviewed to illustrate a range of possibilities for the construction of spoken corpora.

  15. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  16. Study on fault diagnosis method for nuclear power plant based on hadamard error-correcting output code

    Science.gov (United States)

    Mu, Y.; Sheng, G. M.; Sun, P. N.

    2017-05-01

    Real-time fault diagnosis technology for nuclear power plants (NPPs) is of great significance for improving reactor safety and economy. Failure samples from nuclear power plants are difficult to obtain, and the support vector machine is an effective algorithm for such small-sample problems. An NPP is a very complex system, so many types of failure can in fact occur. The ECOC matrix is constructed from a Hadamard error-correcting code, and decoding uses the Hamming-distance method. The base models are established with the lib-SVM algorithm. The results show that this method can diagnose NPP faults effectively.
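A minimal sketch of the Hadamard-based ECOC decoding described above, assuming illustrative class and code sizes (the paper's actual fault classes and base SVM models are replaced by raw classifier output bits):

```python
# Error-correcting output codes (ECOC) from a Sylvester Hadamard matrix,
# decoded by minimum Hamming distance. Class count and code length are
# illustrative assumptions, not values from the paper.

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def ecoc_codebook(n_classes, n_bits):
    H = hadamard(n_bits)
    # Skip the all-ones first row; map +1/-1 entries to 1/0 bits.
    return [[(1 + x) // 2 for x in H[i + 1]] for i in range(n_classes)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(outputs, codebook):
    """Return the class whose codeword is nearest to the classifier outputs."""
    return min(range(len(codebook)), key=lambda c: hamming(outputs, codebook[c]))

codebook = ecoc_codebook(4, 8)    # 4 fault classes, 8 binary classifiers
true_class = 2
outputs = list(codebook[true_class])
outputs[0] ^= 1                   # one binary classifier returns a wrong bit
print(decode(outputs, codebook))  # still recovers class 2
```

Because distinct Hadamard rows differ in exactly half their positions, the codewords here are Hamming distance 4 apart, so any single base-classifier error is corrected.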

  17. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    Science.gov (United States)

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Correction of errors in scale values for magnetic elements for Helsinki

    Directory of Open Access Journals (Sweden)

    L. Svalgaard

    2014-06-01

    Full Text Available Using several lines of evidence we show that the scale values of the geomagnetic variometers operating in Helsinki in the 19th century were not constant throughout the years of operation 1844–1897. Specifically, the adopted scale value of the horizontal force variometer appears to be too low by ~ 30% during the years 1866–1874.5 and the adopted scale value of the declination variometer appears to be too low by a factor of ~ 2 during the interval 1885.8–1887.5. Reconstructing the heliospheric magnetic field strength from geomagnetic data has reached a stage where a reliable reconstruction is possible using even just a single geomagnetic data set of hourly or daily values. Before such reconstructions can be accepted as reliable, the underlying data must be calibrated correctly. It is thus mandatory that the Helsinki data be corrected. Such correction has been satisfactorily carried out and the HMF strength is now well constrained back to 1845.

  19. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    Science.gov (United States)

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
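The range-profile comparison across slow time can be caricatured as a shift search that maximizes the cross-correlation between successive profiles; the synthetic profiles and the integer-shift model below are my own simplification, not the patented algorithm:

```python
# Toy version of the comparison step: estimate how far one range profile
# has shifted relative to the previous slow-time sample by maximizing
# cross-correlation over candidate shifts. Profile values are synthetic.

def shift_estimate(prev, cur, max_shift):
    """Return the shift of `cur` relative to `prev` (positive = moved right)."""
    def corr(s):
        return sum(prev[i] * cur[i + s]
                   for i in range(len(prev))
                   if 0 <= i + s < len(cur))
    return max(range(-max_shift, max_shift + 1), key=corr)

# A range profile with one bright scatterer, and the same profile one
# slow-time sample later after an uncompensated 3-bin motion error.
prev = [0.0] * 30
prev[10] = 1.0
cur = [0.0] * 30
cur[13] = 1.0

print(shift_estimate(prev, cur, max_shift=5))  # 3
```

In the patent's terms, an estimate like this would drive the frequency and phase correction applied to the uncompressed input data before range and azimuth compression.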

  20. A correction scheme for the hexapolar error of an ion beam extracted from an ECRIS

    International Nuclear Information System (INIS)

    Spaedtke, P.; Lang, R.; Maeder, J.; Maimone, F.; Rossbach, J.; Tinschert, K.

    2012-01-01

    The extraction of any ion beam from an ECRIS is determined by the good confinement of such ion sources. It has been shown earlier that the ions come from the places where the confinement is weakest. The assumption that the low-energy ions are strongly bound to the magnetic field lines furthermore requires that only those ions starting on a magnetic field line passing through the extraction aperture can be extracted. Depending on the setting of the magnetic field, these field lines may come from the loss lines at the plasma chamber radius. Because the longitudinal position of these field lines depends on the azimuthal position at the extraction electrode, the ions are extracted from different magnetic flux densities. Whereas the solenoidal component can only be transferred into another phase-space projection, the hexapolar component can be compensated by an additional hexapole after the first beam-line focusing solenoid. The hexapole has to be rotatable in the azimuthal direction and movable in the longitudinal direction. For a good correction, the beam needs to have a radial phase-space distribution such that the force exerted by this hexapole acts on the aberrated beam in exactly such a way as to create a linear distribution after the correction. The paper is followed by the slides of the presentation. (authors)

  1. Partial correction of a severe molecular defect in hemophilia A, because of errors during expression of the factor VIII gene

    Energy Technology Data Exchange (ETDEWEB)

    Young, M.; Antonarakis, S.E. [Univ. of Geneva (Switzerland)]; Inaba, Hiroshi [Tokyo Medical College (Japan)] [and others]

    1997-03-01

    Although the molecular defect in patients in a Japanese family with mild to moderately severe hemophilia A was a deletion of a single nucleotide T within an A₈TA₂ sequence of exon 14 of the factor VIII gene, the severity of the clinical phenotype did not correspond to that expected of a frameshift mutation. A small amount of functional factor VIII protein was detected in the patient's plasma. Analysis of DNA and RNA molecules from normal and affected individuals and in vitro transcription/translation suggested a partial correction of the molecular defect, because of the following: (i) DNA replication/RNA transcription errors resulting in restoration of the reading frame and/or (ii) "ribosomal frameshifting" resulting in the production of normal factor VIII polypeptide and, thus, in a milder than expected hemophilia A. All of these mechanisms probably were promoted by the longer run of adenines, A₁₀ instead of A₈TA₂, after the delT. Errors in the complex steps of gene expression therefore may partially correct a severe frameshift defect and ameliorate an expected severe phenotype. 36 refs., 6 figs.

  2. PET/CT image fusion error due to urinary bladder filling changes: consequence and correction.

    Science.gov (United States)

    Heiba, Sherif I; Raphael, Barbara; Castellon, Ivan; Altinyay, Erkan; Sandella, Nick; Rosen, Gerald; Abdel-Dayem, Hussein M

    2009-10-01

    A considerable change of urinary bladder (UB) shape in PET compared with CT in an integrated PET/CT system is frequently noted. This study initially evaluated this finding with and without oral contrast (OC) use. In addition, a one bed pelvic section (PLV) repeat acquisition was investigated as a solution to this problem. ¹⁸FDG PET/CTs of 88 patients were analyzed. OC was administered in 68 patients, of whom 31 had PLV images taken 5-10 min later. Three-dimensional mid-UB CT and PET matching measurements were compared. In addition, UB wall displacement between CT and PET was analyzed. The mean UB height was significantly increased (P errors of UB can be substantially resolved through a separate PLV acquisition presumably due to the shorter time interval of UB scan completion between CT and PET.

  3. The inguinal ligament and its lateral attachments: correcting an anatomical error.

    Science.gov (United States)

    Acland, Robert D

    2008-01-01

    The inguinal portions of the internal oblique and transversus abdominis muscles are generally described as arising from the inguinal ligament. Previous authors have shown that this description is incorrect. A new dissection study in 15 lightly embalmed cadavers confirms that in reality the inguinal portions of these muscles arise from a thickened strip of iliopsoas fascia that forms the superolateral part of the ilio-pectineal arch. Details are given of a new dissection technique that fully exposes the deep aspect of the inguinal ligament without disrupting its continuity. The historical background of the persistent textbook error is explored. It originated at a time when there was widespread descriptive and semantic confusion regarding the structure now known as the inguinal ligament. (c) 2007 Wiley-Liss, Inc.

  4. Feedback correction of injection errors using digital signal-processing techniques

    Directory of Open Access Journals (Sweden)

    N. S. Sereno

    2007-01-01

    Full Text Available Efficient transfer of electron beams from one accelerator to another is important for 3rd-generation light sources that operate using top-up. In top-up mode, a constant amount of charge is injected at regular intervals into the storage ring to replenish beam lost primarily due to Touschek scattering. Top-up therefore requires that the complex of injector accelerators that fill the storage ring transport beam with a minimum amount of loss. Injection can be a source of significant beam loss if not carefully controlled. In this note we describe a method of processing injection transient signals produced by beam-position monitors and using the processed data in feedback. Feedback control using the technique described here has been incorporated in the Advanced Photon Source (APS booster synchrotron to correct injection transients.

  5. Children's Early Awareness of Comprehension as Evident in Their Spontaneous Corrections of Speech Errors.

    Science.gov (United States)

    Wellman, Henry M; Song, Ju-Hyun; Peskin-Shepherd, Hope

    2017-06-09

    A crucial human cognitive goal is to understand and to be understood. But understanding often takes active management. Two studies investigated early developmental processes of understanding management by focusing on young children's comprehension monitoring. We ask: When and how do young children actively monitor their comprehension of social-communicative interchanges and so seek to clarify and correct their own potential miscomprehension? Study 1 examined the parent-child conversations of 13 children studied longitudinally in everyday situations from the time the children were approximately 2 years through 3 years. Study 2 used a seminaturalistic situation in the laboratory to address these questions with more precision and control with 36 children aged 2-3 years. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  6. Hamiltonian formulation of quantum error correction and correlated noise: Effects of syndrome extraction in the long-time limit

    Science.gov (United States)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2008-07-01

    We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.
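As a classical toy analogue of the syndrome-extraction step discussed above (not the paper's Hamiltonian treatment of correlated noise), the 3-qubit bit-flip code can be sketched with classical bits standing in for measurement outcomes:

```python
# Minimal illustration of syndrome extraction and correction for the
# 3-qubit bit-flip (repetition) code. Classical bits stand in for qubit
# measurement outcomes; this is a toy model only.

def encode(bit):
    return [bit, bit, bit]

def syndrome(q):
    # Parities of neighbouring pairs, analogous to measuring Z1Z2 and Z2Z3.
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    s = syndrome(q)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> flipped qubit
    if s in flip:
        q[flip[s]] ^= 1
    return q

def decode(q):
    return max(q, key=q.count)  # majority vote

word = encode(1)
word[2] ^= 1                  # single bit-flip error
print(syndrome(word))         # (0, 1): error on the third qubit
print(decode(correct(word)))  # 1
```

The paper's question is what happens to this scheme when successive syndrome outcomes are correlated through the environment rather than independent, which the toy model above does not capture.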

  7. Comparing sports vision among three groups of soft tennis adolescent athletes: Normal vision, refractive errors with and without correction

    Directory of Open Access Journals (Sweden)

    Shih-Tsun Chang

    2015-01-01

    Full Text Available Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with uncorrected refractive error (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played soft tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. Materials and Methods: DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: The chi-square test and the Kruskal–Wallis test were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There were no significant differences in DVA, EM, and MV among the three study groups. Conclusions: Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether it was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups.

  8. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    Science.gov (United States)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combinational non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from asynchronous sensing of sensors, in order to broaden the applicability of the algorithm to different kinds of structures, especially huge ones. The analysis process is therefore based on time-frequency domain decomposition and is performed with emphasis on correcting the time delays between sensors. Time delay estimation (TDE) methods were investigated for their efficiency and accuracy with noisy environmental records, and the Phase Transform-β (PHAT-β) technique was selected as an appropriate method to modify the operation of the traditional FDD-WT in order to achieve exact results. In this paper, a theoretical example (a 3DOF system) is provided to indicate the effects of non-synchronous sensing of the sensors on the modal parameters; moreover, the Pacoima dam subjected to the 13 January 2001 earthquake excitation was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of a classical signal processing method referred to as the 4-Spectral method, as well as with other literature on the dynamic characteristics of Pacoima dam. The comparison indicates that the values are correct and reliable.
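The PHAT weighting selected above can be illustrated with a minimal generalized cross-correlation sketch (β = 1, i.e. full magnitude whitening; a naive DFT is used for self-containment, and the signals are synthetic):

```python
# Time delay estimation with PHAT-weighted generalized cross-correlation
# (GCC-PHAT). A naive O(N^2) DFT keeps the sketch dependency-free; a real
# implementation would use an FFT.

import cmath
import random

def dft(x, sign=-1):
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def gcc_phat_delay(x, y):
    """Estimate the integer-sample delay of y relative to x (PHAT, beta = 1)."""
    N = len(x)
    X, Y = dft(x), dft(y)
    G = []
    for Xi, Yi in zip(X, Y):
        g = Yi * Xi.conjugate()        # cross-spectrum
        G.append(g / (abs(g) or 1.0))  # PHAT: whiten by the magnitude
    cc = [c.real / N for c in dft(G, sign=+1)]  # inverse DFT
    peak = max(range(N), key=lambda k: cc[k])
    return peak if peak <= N // 2 else peak - N  # wrap to signed lag

random.seed(0)
x = [random.gauss(0, 1) for _ in range(64)]
y = x[-5:] + x[:-5]                    # x circularly delayed by 5 samples
print(gcc_phat_delay(x, y))            # 5
```

Whitening discards amplitude information and keeps only phase, which is what makes PHAT robust for sharp delay peaks; the β exponent in PHAT-β interpolates between this and plain cross-correlation.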

  9. Correction of thickness measurement errors for two adjacent sheet structures in MR images

    International Nuclear Information System (INIS)

    Cheng Yuanzhi; Wang Shuguo; Sato, Yoshinobu; Nishii, Takashi; Tamura, Shinichi

    2007-01-01

    We present a new method for measuring the thickness of two adjacent sheet structures in MR images. In the hip joint, in which the femoral and acetabular cartilages are adjacent to each other, a conventional measurement technique based on the second derivative zero crossings (called the zero-crossings method) can introduce large underestimation errors in measurements of cartilage thickness. In this study, we have developed a model-based approach for accurate thickness measurement. We model the imaging process for two adjacent sheet structures, which simulate the two articular cartilages in the hip joint. This model can be used to predict the shape of the intensity profile along the sheet normal orientation. Using an optimization technique, the model parameters are adjusted to minimize the differences between the predicted intensity profile and the actual intensity profiles observed in the MR data. The set of model parameters that minimize the difference between the model and the MR data yield the thickness estimation. Using three phantoms and one normal cadaveric specimen, the usefulness of the new model-based method is demonstrated by comparing the model-based results with the results generated using the zero-crossings method. (author)

  10. Model-Based Angular Scan Error Correction of an Electrothermally-Actuated MEMS Mirror.

    Science.gov (United States)

    Zhang, Hao; Xu, Dacheng; Zhang, Xiaoyang; Chen, Qiao; Xie, Huikai; Li, Suiqiong

    2015-12-10

    In this paper, the actuation behavior of a two-axis electrothermal MEMS (microelectromechanical systems) mirror typically used in miniature optical scanning probes and optical switches is investigated. The MEMS mirror consists of four thermal bimorph actuators symmetrically located at the four sides of a central mirror plate. Experiments show that an actuation characteristics difference of as much as 4.0% exists among the four actuators due to process variations, which leads to an average angular scan error of 0.03°. A mathematical model between the actuator input voltage and the mirror-plate position has been developed to predict the actuation behavior of the mirror. It is a four-input, four-output model that takes into account the thermal-mechanical coupling and the differences among the four actuators; the vertical positions of the ends of the four actuators are also monitored. Based on this model, an open-loop control method is established to achieve accurate angular scanning. This model-based open-loop control has been experimentally verified and is useful for the accurate control of the mirror. With this control method, the precise actuation of the mirror depends solely on the model prediction and does not need real-time mirror position monitoring and feedback, greatly simplifying the MEMS control system.

  11. Effect of error field correction coils on W7-X limiter loads

    Science.gov (United States)

    Bozhenkov, S. A.; Jakubowski, M. W.; Niemann, H.; Lazerson, S. A.; Wurden, G. A.; Biedermann, C.; Kocsis, G.; König, R.; Pisano, F.; Stephey, L.; Szepesi, T.; Wenzel, U.; Pedersen, T. S.; Wolf, R. C.; W7-X Team

    2017-12-01

    In the first campaign Wendelstein 7-X was operated with five poloidal graphite limiters installed stellarator-symmetrically. In an ideal situation the power losses would be equally distributed between the limiters. The limiter shape was designed to smoothly distribute the heat flux over two strike lines. Vertically the strike lines are not uniform because of different connection lengths. In this paper it is demonstrated both numerically and experimentally that the heat flux distribution can be significantly changed by a non-resonant n=1 perturbation field of the order of 10⁻⁴. Numerical studies are performed with field line tracing. In experiments perturbation fields are excited with five error field trim coils. The limiters are diagnosed with infrared cameras, neutral gas pressure gauges, thermocouples and spectroscopic diagnostics. Experimental results are qualitatively consistent with the simulations. With a suitable choice of the phase and amplitude of the perturbation a more symmetric plasma-limiter interaction can potentially be achieved. These results are also of interest for the later W7-X divertor operation.

  12. Status update of the effort to correct the SDO/HMI systematic errors in Doppler velocity and derived data products

    Science.gov (United States)

    Scherrer, Philip H.

    2017-08-01

    This poster provides an update of the status of the efforts to understand and correct the leakage of the SDO orbit velocity into most HMI data products. The following is extracted from the abstract for the similar topic presented at the 2016 SPD meeting: “The Helioseismic and Magnetic Imager (HMI) instrument on the Solar Dynamics Observatory (SDO) measures sets of filtergrams which are converted into velocity and magnetic field maps. In addition to solar photospheric motions the velocity measurements include a direct component from the line-of-sight component of the SDO orbit. Since the magnetic field is computed as the difference between the velocity measured in left and right circular polarization, the orbit velocity is canceled only if the velocity is properly calibrated. When the orbit velocity is subtracted the remaining "solar" velocity shows a residual signal which is equal to about 2% of the approximately ±3000 m/s orbit velocity in a nearly linear relationship. This implies an error in our knowledge of some of the details of as-built filter components. This systematic error is the source of 12- and 24-hour variations in most HMI data products. While the instrument as presently calibrated (Couvidat et al. 2012 and 2016) meets all of the “Level-1” mission requirements it fails to meet the stated goal of 10 m/s accuracy for velocity data products. For the velocity measurements this has not been a significant problem since the prime HMI goals of obtaining data for helioseismology are not affected by this systematic error. However the orbit signal leaking into the magnetograms and vector magnetograms degrades the ability to accomplish some of the mission science goals at the expected levels of accuracy. This poster presents the current state of understanding of the source of this systematic error and prospects for near term improvement in the accuracy of the filter profile model.”
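The nearly linear 2% leakage described above suggests the shape of a first-order empirical correction: regress the measured velocity on the known orbit velocity and subtract the fitted component. The numbers below are synthetic and purely illustrative, not HMI calibration values:

```python
# Ordinary least-squares removal of a linear orbit-velocity leak from a
# measured velocity signal. All data are synthetic.

def linear_fit(x, y):
    """Least-squares slope and intercept of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Synthetic data: a true solar velocity of 100 m/s plus a 2% leak of the
# roughly +-3000 m/s orbit velocity.
orbit = [-3000 + 100 * i for i in range(61)]
measured = [100 + 0.02 * v for v in orbit]

a, b = linear_fit(orbit, measured)
corrected = [m - a * v for m, v in zip(measured, orbit)]
print(round(a, 3))          # recovered leak coefficient: 0.02
print(round(corrected[0]))  # 100 m/s after correction
```

In practice the residual is only *nearly* linear, so this would remove the bulk of the 12- and 24-hour signal but not replace the filter-profile recalibration the poster describes.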

  13. Correcting transport errors during advection of aerosol and cloud moment sequences in eulerian models

    Energy Technology Data Exchange (ETDEWEB)

    McGraw R.

    2012-03-01

    Moment methods are finding increasing usage for simulations of particle population balance in box models and in more complex flows including two-phase flows. These highly efficient methods have nevertheless had little impact to date for multi-moment representation of aerosols and clouds in atmospheric models. There are evidently two reasons for this: First, atmospheric models, especially if the goal is to simulate climate, tend to be extremely complex and take many man-years to develop. Thus there is considerable inertia to the implementation of novel approaches. Second, and more fundamental, the nonlinear transport algorithms designed to reduce numerical diffusion during advection of various species (tracers) from cell to cell, in the typically coarse grid arrays of these models, can and occasionally do fail to preserve correlations between the moments. Other correlated tracers such as isotopic abundances, composition of aerosol mixtures, hydrometeor phase, etc., are subject to this same fate. In the case of moments, this loss of correlation can and occasionally does give rise to unphysical moment sets. When this happens the simulation can come to a halt. Following a brief description and review of moment methods, the goal of this paper is to present two new approaches that both test moment sequences for validity and correct them when they fail. The new approaches work on individual grid cells without requiring stored information from previous time-steps or neighboring cells.
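The "test" half of a test-and-correct approach can be sketched as a Hankel-determinant validity check on a short moment set, a standard realizability condition for moments of a nonnegative measure; the specific moments below and the correction step itself are not from the paper:

```python
# Validity test for moments m0..m4 of a nonnegative particle-size
# distribution on [0, inf): the Hankel matrices built from the moments
# must be positive semidefinite (checked here via leading principal
# minors, which are at most 3x3 for five moments).

def det(M):
    """Determinant by cofactor expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def hankel(m, shift, size):
    return [[m[i + j + shift] for j in range(size)] for i in range(size)]

def is_valid_moment_set(m):
    """Moments m[0..4] of a measure on [0, inf)."""
    ok = all(det(hankel(m, 0, k)) >= 0 for k in (1, 2, 3))  # [m_{i+j}]
    ok &= all(det(hankel(m, 1, k)) >= 0 for k in (1, 2))    # [m_{i+j+1}]
    return ok

good = [1.0, 1.0, 2.0, 6.0, 24.0]  # moments of an exponential distribution
bad = [1.0, 1.0, 0.5, 6.0, 24.0]   # advection has corrupted m2
print(is_valid_moment_set(good), is_valid_moment_set(bad))
```

A transport scheme that decorrelates moments can produce a set like `bad`, at which point a correction step must project it back onto the nearest valid sequence before the simulation can continue.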

  14. Development of a new error field correction coil (C-coil) for DIII-D

    International Nuclear Information System (INIS)

    Robinson, J.I.; Scoville, J.T.

    1995-12-01

    The C-coil recently installed on the DIII-D tokamak was developed to reduce the error fields created by imperfections in the location and geometry of the existing coils used to confine, heat, and shape the plasma. First results from C-coil experiments include stable operation in a 1.6 MA plasma with a density less than 1.0 × 10¹³ cm⁻³, nearly a factor of three lower than that achievable without the C-coil. The C-coil has also been used in magnetic braking of the plasma rotation and in high-energy particle confinement experiments. The C-coil system consists of six individual saddle coils, each 60° wide toroidally, spanning the midplane of the vessel with a vertical height of 1.6 m. The coils are located at a major radius of 3.2 m, just outside of the toroidal field coils. The actual shape and geometry of each coil section varied somewhat from the nominal dimensions due to the large number of obstructions to the desired coil path around the already crowded tokamak. Each coil section consists of four turns of 750 MCM insulated copper cable banded with stainless steel straps within the web of a 3 in. × 3 in. stainless steel angle frame. The C-coil structure was designed to resist the peak transient radial forces (up to 1,800 Nm) exerted on the coil by the toroidal and poloidal fields. The coil frames were supported from existing poloidal field coil case brackets, coil studs, and various other structures on the tokamak.

  15. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.

  16. Bandwidth efficient bidirectional 5 Gb/s overlapped-SCM WDM PON with electronic equalization and forward-error correction.

    Science.gov (United States)

    Buset, Jonathan M; El-Sahn, Ziad A; Plant, David V

    2012-06-18

    We demonstrate an improved overlapped-subcarrier multiplexed (O-SCM) WDM PON architecture transmitting over a single feeder using cost-sensitive intensity modulation/direct detection transceivers, data re-modulation and simple electronics. Incorporating electronic equalization and Reed-Solomon forward-error correction codes helps to overcome the bandwidth limitation of a remotely seeded reflective semiconductor optical amplifier (RSOA)-based ONU transmitter. The O-SCM architecture yields greater spectral efficiency and higher bit rates than many other SCM techniques while maintaining resilience to upstream impairments. We demonstrate full-duplex 5 Gb/s transmission over 20 km and analyze BER performance as a function of transmitted and received power. The architecture provides flexibility to network operators by relaxing common design constraints and enabling full-duplex operation at BER ≈ 10⁻¹⁰ over a wide range of OLT launch powers from 3.5 to 8 dBm.
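Reed-Solomon codes are too involved for a short sketch, but the forward-error-correction principle they embody (add parity so the receiver can repair corrupted bits without retransmission) can be illustrated with a simpler single-error-correcting Hamming(7,4) code:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits, correcting any
# single bit flip. Shown as a stand-in for the Reed-Solomon FEC above.

def encode(d):
    """d: 4 data bits -> 7-bit codeword with parity at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4  # 1-based error position, 0 if none
    if pos:
        c = list(c)
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1          # channel flips one bit
print(decode(word))   # [1, 0, 1, 1]
```

Reed-Solomon generalizes this idea from single bits to multi-bit symbols over a finite field, which is what lets it repair the burst errors typical of the noisy upstream channel described above.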

  17. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    Science.gov (United States)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  18. Using the TED Talks to Evaluate Spoken Post-editing of Machine Translation

    DEFF Research Database (Denmark)

    Liyanapathirana, Jeevanthi; Popescu-Belis, Andrei

    2016-01-01

    To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also...

  19. Data Requirements for the Correct Identification of Medication Errors and Adverse Drug Events in Patients Presenting at an Emergency Department.

    Science.gov (United States)

    Plank-Kiegele, Bettina; Bürkle, Thomas; Müller, Fabian; Patapovas, Andrius; Sonst, Anja; Pfistermeister, Barbara; Dormann, Harald; Maas, Renke

    2017-08-11

    Adverse drug events (ADE) involving or not involving medication errors (ME) are common, but frequently remain undetected as such. Presently, the majority of available clinical decision support systems (CDSS) relies mostly on coded medication data for the generation of drug alerts. The aim of our study was to identify the key types of data required for the adequate detection and classification of adverse drug events (ADE) and medication errors (ME) in patients presenting at an emergency department (ED). As part of a prospective study, ADE and ME were identified in 1510 patients presenting at the ED of a university teaching hospital by an interdisciplinary panel of specialists in emergency medicine, clinical pharmacology and pharmacy. For each ADE and ME, the different clinical data sources (i.e. information items such as acute clinical symptoms, underlying diseases, laboratory values or ECG) required for detection and correct classification were evaluated. Of all 739 ADE identified, 387 (52.4%), 298 (40.3%), and 54 (7.3%), respectively, required one, two, or three or more information items to be detected and correctly classified. Only 68 (10.2%) of the ME were simple drug-drug interactions that could be identified on the basis of medication data alone, while 381 (57.5%), 181 (27.3%) and 33 (5.0%) of the ME required one, two or three additional information items, respectively, for detection and clinical classification. Only 10% of all ME observed in emergency patients could be identified on the basis of medication data alone. Focusing electronic decision support on more easily available drug data alone may lead to an under-detection of clinically relevant ADE and ME.

  20. Stochastic error model corrections to improve the performance of bottom-up precipitation products for hydrologic applications

    Science.gov (United States)

    Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.

    2016-12-01

Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements to estimate the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different from and complementary to that of classical precipitation products, and could provide a valid additional perspective for substituting or improving current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for

  1. Non-topography-guided PRK combined with CXL for the correction of refractive errors in patients with early stage keratoconus.

    Science.gov (United States)

    Fadlallah, Ali; Dirani, Ali; Chelala, Elias; Antonios, Rafic; Cherfan, George; Jarade, Elias

    2014-10-01

To evaluate the safety and clinical outcome of combined non-topography-guided photorefractive keratectomy (PRK) and corneal collagen cross-linking (CXL) for the treatment of mild refractive errors in patients with early stage keratoconus. A retrospective, nonrandomized study of patients with early stage keratoconus (stage 1 or 2) who underwent simultaneous non-topography-guided PRK and CXL. All patients had at least 2 years of follow-up. Data were collected preoperatively and postoperatively at the 6-month, 1-year, and 2-year follow-up visits after combined non-topography-guided PRK and CXL. Seventy-nine patients (140 eyes) were included in the study. Combined non-topography-guided PRK and CXL induced a significant improvement in both visual acuity and refraction. Uncorrected distance visual acuity significantly improved from 0.39 ± 0.22 logMAR before combined non-topography-guided PRK and CXL to 0.12 ± 0.14 logMAR at the last follow-up visit, with a significant improvement in refraction as well. Combined non-topography-guided PRK and CXL is an effective and safe option for correcting mild refractive error and improving visual acuity in patients with early stage keratoconus. Copyright 2014, SLACK Incorporated.

  2. Variations of OCT measurements corrected for the magnification effect according to axial length and refractive error in children

    Directory of Open Access Journals (Sweden)

    Inmaculada Bueno-Gimeno

    2018-01-01

Full Text Available Purpose: The aim of this paper was to examine the distribution of macular parameters, retinal nerve fiber layer (RNFL) thickness, and optic disc parameters of myopic and hyperopic eyes in comparison with emmetropic control eyes, and to investigate their variation according to axial length (AL) and spherical equivalent (SE) in healthy children. Methods: This study included 293 pairs of eyes of 293 children (145 boys and 148 girls), ranging in age from 6 to 17 years. According to SE, subjects were divided into control (emmetropia, 99 children), myopia (100 children), and hyperopia (94 children) groups, and according to AL into short (68 eyes), medium, and long (>25.00 mm, 36 eyes) groups. Macular parameters, RNFL thickness, and optic disc morphology were assessed with the Cirrus HD-OCT. AL was measured using the IOL-Master system. Littmann's formula was used for calculating the corrected AL-related ocular magnification. Results: Mean age (±SD) was 10.84±3.05 years; mean SE (±SD) was +0.14±0.51 D (range from −8.75 to +8.25 D) and mean AL (±SD) was 23.12±1.49 mm. Average RNFL thickness, average macular thickness, and macular volume decreased as AL and myopia increased. No correlations between AL/SE and optic disc parameters were found after correcting for the magnification effect. Conclusions: AL and refractive error affect measurements of macular and RNFL thickness in healthy children. For a correct interpretation of OCT measurements, the ocular magnification effect should be taken into account by clinicians or OCT manufacturers.

  3. Utility of spoken dialog systems

    CSIR Research Space (South Africa)

    Barnard, E

    2008-12-01

    Full Text Available The commercial successes of spoken dialog systems in the developed world provide encouragement for their use in the developing world, where speech could play a role in the dissemination of relevant information in local languages. We investigate...

  4. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  5. Corrections.

    Science.gov (United States)

    2016-02-01

    In the October In Our Unit article by Cooper et al, “Against All Odds: Preventing Pressure Ulcers in High-Risk Cardiac Surgery Patients” (Crit Care Nurse. 2015;35[5]:76–82), there was an error in the reference citation on page 82. At the top of that page, reference 18 cited on the second line should be reference 23, which also should be added to the References list: 23. AHRQ website. Prevention and treatment program integrates actionable reports into practice, significantly reducing pressure ulcers in nursing home residents. November 2008. https://innovations.ahrq.gov/profiles/prevention-and-treatment-program-integrates-actionable-reports-practice-significantly. Accessed November 18, 2015

  6. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

Full Text Available Regarding Gorelik, G., & Shackelford, T.K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  7. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  8. The required number of treatment imaging days for an effective off-line correction of systematic errors in conformal radiotherapy of prostate cancer -- a radiobiological analysis

    International Nuclear Information System (INIS)

    Amer, Ali M.; Mackay, Ranald I.; Roberts, Stephen A.; Hendry, Jolyon H.; Williams, Peter C.

    2001-01-01

Background and purpose: To use radiobiological modelling to estimate the number of initial days of treatment imaging required to gain most of the benefit from off-line correction of systematic errors in the conformal radiation therapy of prostate cancer. Materials and methods: Treatment plans based on the anatomical information of a representative patient were generated assuming that the patient is treated with a multileaf collimator (MLC) four-field technique and a total isocentre dose of 72 Gy delivered in 36 daily fractions. Target position variations between fractions were simulated from standard deviations of measured data found in the literature. Off-line correction of systematic errors was assumed to be performed only once, based on the errors measured during the initial days of treatment. The tumour control probability (TCP) was calculated using the Webb and Nahum model. Results: Simulation of daily variations in the target position predicted a marked reduction in TCP if the planning target volume (PTV) margin was smaller than 4 mm (TCP decreased by 3.4% for a 2 mm margin). The systematic components of target position variations had a greater effect on the TCP than the random components. Off-line correction of estimated systematic errors reduced the decrease in TCP due to daily target displacements; nevertheless, the resulting TCP levels for small margins were still less than the TCP level obtained with the use of an adequate PTV margin of ∼10 mm. The magnitude of gain in TCP expected from the correction depended on the number of treatment imaging days used for the correction and the PTV margin applied. Gains of 2.5% in TCP were estimated from correction of systematic errors performed after 6 initial days of treatment imaging for a 2 mm PTV margin. The effect of various possible magnitudes of systematic and random components on the gain in TCP expected from correction and on the number of imaging days required was also investigated. Conclusions: Daily
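The Poisson TCP calculation underlying such estimates can be sketched as follows. This is a simplified illustration, not the study's implementation: the full Webb and Nahum model additionally averages over a population spread in radiosensitivity, and every parameter value below (clonogen number, alpha, beta) is illustrative rather than taken from the paper.

```python
import math

def tcp_poisson(n_clonogens, alpha, beta, dose_per_fx, n_fx):
    """Poisson TCP with linear-quadratic cell kill (simplified sketch).

    The Webb and Nahum model cited above further averages this expression
    over an assumed inter-patient distribution of alpha.
    """
    d, n = dose_per_fx, n_fx
    # Expected number of clonogens surviving n fractions of d Gy each.
    surviving = n_clonogens * math.exp(-(alpha + beta * d) * d * n)
    # Probability that zero clonogens survive (Poisson statistics).
    return math.exp(-surviving)

# Illustrative values only: 1e7 clonogens, alpha = 0.26 /Gy, beta = 0.031 /Gy^2,
# with the 36 x 2 Gy = 72 Gy schedule described in the abstract.
tcp = tcp_poisson(1e7, 0.26, 0.031, 2.0, 36)
```

Margin reduction or set-up error enters such a model by lowering the dose received by part of the clonogen population, which drives `surviving` up and TCP down.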

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

syndrome is an indicator of underlying disease. Here too, a nonzero syndrome is an indication that something has gone wrong during transmission. The first matrix on the left-hand side is called the parity check matrix H. Thus every codeword c satisfies the equation Hc^T = 0. Therefore the code can ...

  10. Error Correction of Loudspeakers

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde

Throughout this thesis, the topic of electrodynamic loudspeaker unit design and modelling is reviewed. The research behind this project has been to study loudspeaker design based on the new possibilities introduced by including digital signal processing, thereby achieving more freedom...... limitations of the DLA. Performance analysis of DLA is the name of the joint research project. Parts from this project are presented in this thesis....

  11. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

the reading of data from memory is the receiving process. Protecting data in computer memories was one of the earliest applications of Hamming codes. We now describe the clever scheme invented by Hamming in 1948. To keep things simple, we describe the binary length-7 Hamming code. Encoding in the Hamming Code.
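The length-7 Hamming scheme sketched in these two records can be made concrete. The following is a minimal illustration, not the article's own code; the systematic generator and parity-check matrices shown are one standard choice for the binary [7,4] Hamming code:

```python
import numpy as np

# One standard systematic [7,4] Hamming code: G = [I4 | P], H = [P^T | I3].
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def encode(m):
    """Encode a 4-bit message into a 7-bit codeword: c = mG (mod 2)."""
    return (np.array(m) @ G) % 2

def decode(r):
    """Correct up to one bit error using the syndrome s = Hr^T (mod 2)."""
    r = np.array(r).copy()
    s = (H @ r) % 2
    if s.any():
        # A nonzero syndrome equals the column of H at the error position.
        err = next(i for i in range(7) if np.array_equal(H[:, i], s))
        r[err] ^= 1
    return r[:4]  # systematic code: the first 4 bits are the message
```

Every codeword satisfies Hc^T = 0, so any nonzero syndrome signals a transmission error, exactly as the snippet above describes.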

  12. Using GPS data to evaluate the accuracy of state-space methods for correction of Argos satellite telemetry error.

    Science.gov (United States)

Patterson, Toby A; McConnell, Bernie J; Fedak, Mike A; Bravington, Mark V; Hindell, Mark A

    2010-01-01

    Recent studies have applied state-space models to satellite telemetry data in order to remove noise from raw location estimates and infer the true tracks of animals. However, while the resulting tracks may appear plausible, it is difficult to determine the accuracy of the estimated positions, especially for position estimates interpolated to times between satellite locations. In this study, we use data from two gray seals (Halichoerus grypus) carrying tags that transmitted Fastloc GPS positions via Argos satellites. This combination of Service Argos data and highly accurate GPS data allowed examination of the accuracy of state-space position estimates and their uncertainty derived from satellite telemetry data. After applying a speed filter to remove aberrant satellite telemetry locations, we fit a continuous-time Kalman filter to estimate the parameters of a random walk, used Kalman smoothing to infer positions at the times of the GPS measurements, and then compared the filtered telemetry estimates with the actual GPS measurements. We investigated the effect of varying maximum speed thresholds in the speed-filtering algorithm on the root mean-square error (RMSE) estimates and used minimum RMSE as a criterion to guide the final choice of speed threshold. The optimal speed thresholds differed between the two animals (1.1 m/s and 2.5 m/s) and retained 50% and 65% of the data for each seal. However, using a speed filter of 1.1 m/s resulted in very similar RMSE for both animals. For the two seals, the RMSE of the Kalman-filtered estimates of location were 5.9 and 12.76 km, respectively, and 75% of the modeled positions had errors less than 6.25 km and 11.7 km for each seal. Confidence interval coverage was close to correct at typical levels (80-95%), although it tended to be overly generous at smaller sizes. The reliability of uncertainty estimates was also affected by the chosen speed threshold. The combination of speed and Kalman filtering allows for effective
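The speed-filtering step described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the haversine distance and the simple forward pass are assumptions, and the 1.1 m/s default merely mirrors the threshold reported for one of the two seals.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) points in degrees."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def speed_filter(fixes, vmax=1.1):
    """Drop fixes implying a speed above vmax (m/s) from the last kept fix.

    fixes: chronologically ordered list of (t_seconds, lat, lon) tuples.
    Returns the retained subset; aberrant jumps are discarded.
    """
    kept = [fixes[0]]
    for t, lat, lon in fixes[1:]:
        t0, lat0, lon0 = kept[-1]
        dt = t - t0
        if dt > 0 and haversine_m((lat0, lon0), (lat, lon)) / dt <= vmax:
            kept.append((t, lat, lon))
    return kept
```

In the study, the retained fixes would then feed the continuous-time Kalman filter/smoother; varying `vmax` and minimising RMSE against the GPS truth guided the final threshold choice.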

  13. Comparison of orthogonal kilovolt X-ray images and cone-beam CT matching results in setup error assessment and correction for EB-PBI during free breathing

    International Nuclear Information System (INIS)

    Wang Wei; Li Jianbin; Hu Hongguang; Ma Zhifang; Xu Min; Fan Tingyong; Shao Qian; Ding Yun

    2014-01-01

Objective: To compare the differences in setup error (SE) assessment and correction between orthogonal kilovolt X-ray images and CBCT in EB-PBI patients during free breathing. Methods: Nineteen patients who received EB-PBI after breast-conserving surgery were recruited. Interfraction SE was acquired using orthogonal kilovolt X-ray setup images and CBCT. After on-line setup correction, the residual error was calculated, and the SE, residual error, and setup margin (SM) quantified from the orthogonal kilovolt X-ray images and CBCT were compared. The Wilcoxon signed-rank test was used to evaluate the differences. Results: The CBCT-based systematic error (∑) was smaller than the orthogonal kilovolt X-ray based ∑ in the AP direction (-1.2 mm vs 2.00 mm; P=0.005), and there were no statistically significant differences in the random error (σ) in the three dimensional directions (P=0.948, 0.376, 0.314). After on-line setup correction, CBCT decreased the setup residual error relative to the orthogonal kilovolt X-ray images in the AP direction (Σ: -0.20 mm vs 0.50 mm, P=0.008; σ: 0.45 mm vs 1.34 mm, P=0.002). The CBCT-based SM was also smaller than the orthogonal kilovolt X-ray based SM in the AP direction (Σ: -1.39 mm vs 5.57 mm, P=0.003; σ: 0.00 mm vs 3.2 mm, P=0.003). Conclusions: Compared with kilovolt X-ray images, CBCT underestimates the setup error in the AP direction but decreases the setup residual error significantly. Image-guided radiotherapy and setup error assessment using kilovolt X-ray images for EB-PBI plans is feasible. (authors)

  14. REAL STOCK PRICES AND THE LONG-RUN MONEY DEMAND FUNCTION IN MALAYSIA: Evidence from Error Correction Model

    Directory of Open Access Journals (Sweden)

    Naziruddin Abdullah

    2004-06-01

Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in long-run money demand in the Malaysian financial or money market for the period 1977:Q1-1997:Q2. Specifically, an attempt is made to check whether real narrow money (M1/P) is cointegrated with selected variables such as the industrial production index (IPI), one-year T-Bill rates (TB12), and real stock prices (RSP). If cointegration between the dependent and independent variables is found, it may imply that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP) is positive, implying that in the long run there is a positive association between real stock prices (RSP) and demand for real narrow money (M1/P). The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent the nominal income or inflation target from undershooting.
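The error correction mechanism behind studies like this can be illustrated with the classic Engle-Granger two-step procedure. This is a generic sketch, not the paper's estimation: the synthetic data-generating process, the 0.5 long-run slope, and the variable names are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic cointegrated pair: x is a random walk, y tracks x plus noise,
# standing in, say, for log real stock prices and log real narrow money.
x = np.cumsum(rng.normal(size=n))
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# Step 1: long-run (cointegrating) regression  y_t = a + b*x_t + u_t.
A = np.column_stack([np.ones(n), x])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
u = y - (a + b * x)  # equilibrium error

# Step 2: short-run ECM  dy_t = c + g*dx_t + lam*u_{t-1} + e_t.
dy, dx = np.diff(y), np.diff(x)
B = np.column_stack([np.ones(n - 1), dx, u[:-1]])
c, g, lam = np.linalg.lstsq(B, dy, rcond=None)[0]
```

A negative `lam` is the error-correction term: deviations from the long-run relationship are pulled back towards equilibrium, which is exactly the long-run co-movement the abstract tests for.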

  15. Performance Improvement of Membrane Stress Measurement Equipment through Evaluation of Added Mass of Membrane and Error Correction

    Directory of Open Access Journals (Sweden)

    Sang-Wook Jin

    2017-01-01

Full Text Available One of the most important issues in keeping membrane structures in stable condition is to maintain the proper stress distribution over the membrane. However, it is difficult to determine the quantitative real stress level in the membrane after the completion of the structure. The stress relaxation phenomenon of the membrane, and the fluttering effect due to strong wind or ponding caused by precipitation, may cause severe damage to the membrane structure itself. Therefore, it is very important to know the magnitude of the existing stress in membrane structures for their maintenance. The authors have proposed a new method for separately estimating the membrane stress in two different directions using sound waves instead of directly measuring the membrane stress. The new method utilizes the resonance phenomenon of the membrane, which is induced by sound excitation through an audio speaker. During such experiments, the effect of the surrounding air on the vibrating membrane cannot be overlooked if high measurement precision is to be assured. In this paper, an evaluation scheme for the added mass of the membrane accounting for the effect of air on the vibrating membrane, and the correction of measurement error, are discussed. In addition, three types of membrane materials are used in the experiment in order to verify the expandability and accuracy of the membrane measurement equipment.

  16. Export Performance and Economic Growth in East Asian Economies – Application of Cointegration and Vector Error Correction Model

    Directory of Open Access Journals (Sweden)

    Neena Malhotra

    2016-11-01

Full Text Available East Asian economies are considered to be among the most successful economies in the world. Following in the footsteps of other East Asian economies such as Japan and South Korea, China also shifted towards an export-led growth strategy in the 1980s. This study analyzes the effect of export performance on the economic growth of three major East Asian economies, i.e., Japan, South Korea, and China. The study conducts an econometric analysis of macro data in a multivariate framework for the period 1980-2012. In order to examine the causal relationship between exports and economic growth, the study applies time series techniques: the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests to check the stationarity of variables, the Johansen cointegration test for the long-run relationship, and a vector error correction model (VECM) for short-run dynamics and for estimating the speed of adjustment towards long-run equilibrium. The analysis also makes use of the Impulse Response Function (IRF) and Variance Decomposition Analysis (VDA) to investigate the interrelationships within the system. The estimated results suggest that all variables are cointegrated for the East Asian economies. The study concludes that export-led growth (ELG) is a long-run phenomenon only in China and South Korea. The results for Japan support growth-led exports (GLE), particularly in the short run.

  17. Towards Adaptive Spoken Dialog Systems

    CERN Document Server

    Schmitt, Alexander

    2013-01-01

In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for the recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develops novel approaches to speech-based emotion recognition, using a hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions, using non-acted recordings of several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...

  18. Comparison of Orbit-Based and Time-Offset-Based Geometric Correction Models for SAR Satellite Imagery Based on Error Simulation

    Directory of Open Access Journals (Sweden)

    Seunghwan Hong

    2017-01-01

Full Text Available Geometric correction of SAR satellite imagery is the process of adjusting the model parameters that define the relationship between ground and image coordinates. To achieve sub-pixel geolocation accuracy, the adoption of an appropriate geometric correction model and parameters is important. Various geometric correction models have been developed and applied, but it is still difficult for general users to adopt a suitable geometric correction model with sufficient precision. In this regard, this paper evaluated orbit-based and time-offset-based models with an error simulation. To evaluate the geometric correction models, Radarsat-1 images, which have large errors in satellite orbit information, and TerraSAR-X images, which have reportedly high accuracy in satellite orbit and sensor information, were utilized. For the Radarsat-1 imagery, the geometric correction model based on satellite position parameters performed better than the model based on time-offset parameters. In the case of the TerraSAR-X imagery, the two geometric correction models had similar performance, and both could ensure sub-pixel geolocation accuracy.

  19. Exploring the Role Played by Error Correction and Models on Children's Reported Noticing and Output Production in a L2 Writing Task

    Science.gov (United States)

    Coyle, Yvette; Roca de Larios, Julio

    2014-01-01

    This article reports an empirical study in which we explored the role played by two forms of feedback--error correction and model texts--on child English as a foreign language learners' reported noticing and written output. The study was carried out with 11- and 12-year-old children placed in proficiency-matched pairs who engaged in a…

  20. Effects of nonlinear error correction of measurements obtained by peak flowmeter using the Wright scale to assess asthma attack severity in children

    Directory of Open Access Journals (Sweden)

    Stamatović Dragana

    2007-01-01

Full Text Available Introduction: Monitoring of peak expiratory flow (PEF) is recommended in numerous guidelines for the management of asthma. Improvements in calibration methods have demonstrated the inaccuracy of the original Wright scale of the peak flowmeter. A new standard for peak flowmeters, EN 13826, was adopted on 1 September 2004 by some European countries. Correction of PEF readings obtained with old-type measurement devices is possible with Dr M. Miller's original predictive equation. Objective: To assess the effect of PEF correction on the interpretation of measurement results and management decisions. Method: In children aged 6-16 years with intermittent (35) or stable persistent asthma (75), 8393 measurements of PEF were performed with a Vitalograph normal-range peak flowmeter using the traditional Wright scale. Readings were expressed as a percentage of individual best values (PB) before and after correction. The effect of correction was analyzed based on The British Thoracic Society guidelines for asthma attack treatment. Results: In general, correction reduced the values of PEF (p<0.01). The highest mean percentage error (20.70%) in the measured values was found in the subgroup in which PB ranged between 250 and 350 l/min. Nevertheless, the interpretation of PEF after correction in this subgroup changed in only 2.41% of measurements. The lowest mean percentage error (15.72%), and, at the same time, the highest effect of correction on the interpretation of measurement results (in 22.65% of readings), were found in children with PB above 450 l/min. In 73 (66.37%) subjects, the correction changed the clinical interpretation of some values of PEF. In 13 (11.8%) patients, some corrected values indicated the absence or a milder degree of airflow obstruction. In 27 (24.54%) children, more than 10%, and in 12 (10.93%), more than 20% of the corrected readings indicated a severe degree of asthma exacerbation that needed more aggressive treatment. Conclusion

  1. Effect of Price Determinants on World Cocoa Prices for Over the Last Three Decades: Error Correction Model (ECM Approach

    Directory of Open Access Journals (Sweden)

    Lya Aklimawati

    2013-12-01

Full Text Available Highly volatile cocoa price movements are a consequence of the imbalance between demand and supply in the commodity market. World economic expectations and market liberalization lead to instability in cocoa prices in international commerce. Dynamic prices moving erratically influence the benefit of market players, particularly producers. The aims of this research are (1) to estimate an empirical cocoa price model that responds to market dynamics and (2) to analyze the short-term and long-term effects of price determinant variables on cocoa prices. The research was carried out by analyzing annual secondary data from 1980 to 2011. The error correction mechanism (ECM) approach was used to estimate the econometric model of cocoa prices. The estimation results indicated that the cocoa price was significantly affected by the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa production, world cocoa consumption, world cocoa stock and Robusta prices, at significance levels varying from 1-10%. All of these variables have a long-run equilibrium relationship. In the long run, world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E > 1), while the other variables were inelastic (E < 1). The variables affecting cocoa prices in the short-run equilibrium were the IDR-USD exchange rate, world gross domestic product, world inflation, world cocoa consumption and world cocoa stock. The analysis showed that world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E > 1) with respect to cocoa prices in the short term, whereas the response of cocoa prices was inelastic to changes in the IDR-USD exchange rate and world inflation. Key words: Price

  2. Structure analysis of tax revenue and inflation rate in Banda Aceh using vector error correction model with multiple alpha

    Science.gov (United States)

    Sofyan, Hizir; Maulia, Eva; Miftahuddin

    2017-11-01

A country has several important parameters for achieving economic prosperity, such as tax revenue and the inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector, while the rate of inflation occurring in a country can be used as an indicator to gauge the economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of the relationship between tax revenue and the inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha levels, and to perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship between tax revenue and inflation in Banda Aceh. The results showed that the best model for the tax revenue and inflation rate data in Banda Aceh City using alpha 0.01 is a VECM with optimal lag 2, while the best model using alpha 0.05 and 0.1 is a VECM with optimal lag 3. The VECM model with alpha 0.01 yielded four significant models: the income tax model and the models for the overall, health, and education inflation rates in Banda Aceh. The VECM models with alpha 0.05 and 0.1 yielded one significant model, the income tax model. Based on the VECM models, two structural IRF analyses were formed to look at the relationship between tax revenue and inflation in Banda Aceh: the IRF with VECM(2) and the IRF with VECM(3).

  3. The Effectiveness of Explicit and Implicit Corrective Feedback on Interlingual and Intralingual Errors: A Case of Error Analysis of Students' Compositions

    Science.gov (United States)

    Falhasiri, Mohammad; Tavakoli, Mansoor; Hasiri, Fatemeh; Mohammadzadeh, Ali Reza

    2011-01-01

This study intends to shed light on the most frequently occurring grammatical and lexical (pragmatic) errors which students make in their compositions. For this purpose, 23 male and female undergraduate students from different majors were asked to take part in the present study. Each week, for four weeks, students were asked to write 4 compositions on…

  4. Recognizing Young Readers' Spoken Questions

    Science.gov (United States)

    Chen, Wei; Mostow, Jack; Aist, Gregory

    2013-01-01

    Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…

  5. Correlative Conjunctions in Spoken Texts

    Czech Academy of Sciences Publication Activity Database

    Poukarová, Petra

    2017-01-01

    Roč. 68, č. 2 (2017), s. 305-315 ISSN 0021-5597 R&D Projects: GA ČR GA15-01116S Institutional support: RVO:68378092 Keywords : correlative conjunctions * spoken Czech * cohesion Subject RIV: AI - Linguistics OBOR OECD: Linguistics http://www.juls.savba.sk/ediela/jc/2017/2/jc17-02.pdf

  6. Systeme des fautes et correction phonetique des Anglais qui apprennent le francais (System of Errors and Phonetic Correction of English Speakers Learning French)

    Science.gov (United States)

    Lebrun, Claire

    1975-01-01

    This article provides the results of an investigation of the system of pronunciation faults of English speakers learning French, and describes means of correction based on the verbo-tonal system. (Text is in French.) (CLK)

  7. MO-F-CAMPUS-T-05: Correct Or Not to Correct for Rotational Patient Set-Up Errors in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Briscoe, M; Ploquin, N; Voroney, JP [University of Calgary, Tom Baker Cancer Centre, Calgary, AB (Canada)

    2015-06-15

    Purpose: To quantify the effect of patient rotation in stereotactic radiation therapy and establish a threshold where rotational patient set-up errors have a significant impact on target coverage. Methods: To simulate rotational patient set-up errors, MATLAB code was created to rotate the patient dose distribution around the treatment isocentre, located centrally in the lesion, while keeping the structure contours in their original locations on the CT and MRI. Rotations of 1°, 3°, and 5° for each of the pitch, roll, and yaw, as well as simultaneous rotations of 1°, 3°, and 5° around all three axes, were applied to two types of brain lesions: brain metastasis and acoustic neuroma. In order to analyze multiple tumour shapes, these plans included small spherical (metastasis), elliptical (acoustic neuroma), and large irregular (metastasis) tumour structures. Dose-volume histograms and planning target volumes were compared between the planned patient positions and those with simulated rotational set-up errors. The RTOG conformity index for patient rotation was also investigated. Results: Examining the tumour volumes that received 80% of the prescription dose in the planned and rotated patient positions showed decreases in prescription dose coverage of up to 2.3%. Conformity indices for treatments with simulated rotational errors showed decreases of up to 3% compared to the original plan. For irregular lesions, degradation of 1% of the target coverage can be seen for rotations as low as 3°. Conclusions: These data show that for elliptical or spherical targets, rotational patient set-up errors of less than 3° around any or all axes do not have a significant impact on the dose delivered to the target volume or the conformity index of the plan. However, the same rotational errors would have an impact on plans for irregular tumours.
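The simulation idea (rotate the dose grid about the isocentre while the contour stays fixed, then re-measure coverage) can be sketched in a few lines. This is a toy stand-in for the study's MATLAB code: the grid, the elongated "elliptical" target, and the coverage metric are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

# Hypothetical dose grid: the 80% isodose region is an ellipsoid centred on
# the treatment isocentre; the target contour is held fixed while the dose
# distribution is rotated around that isocentre.
i, j, k = np.ogrid[-20:21, -20:21, -20:21]
target = (i / 15.0) ** 2 + (j / 6.0) ** 2 + (k / 6.0) ** 2 <= 1.0
dose = target.astype(float)  # 1.0 inside the 80% isodose, 0.0 outside

coverage = {}
for angle in (1, 3, 5):  # simulated rotational set-up errors in degrees
    rotated = rotate(dose, angle, axes=(0, 1), reshape=False, order=1)
    coverage[angle] = rotated[target].mean()  # fraction of target still covered
    print(f"{angle} deg: coverage {coverage[angle]:.3f}")
```

Because the ellipsoid is elongated along one axis, coverage degrades as the angle grows; for a sphere centred on the isocentre the same rotation would leave coverage essentially unchanged, which mirrors the abstract's conclusion about spherical versus irregular targets.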

  8. Comparison of vector autoregressive (VAR) and vector error correction models (VECM) for index of ASEAN stock price

    Science.gov (United States)

    Suharsono, Agus; Aziza, Auliya; Pramesti, Wara

    2017-12-01

    Capital markets can be an indicator of the development of a country's economy. The presence of capital markets also encourages investors to trade; therefore investors need information and knowledge of which shares are better. One way of making decisions for short-term investments is modeling to forecast stock prices in the coming period. The issue of ASEAN stock market integration is very important. The problem is that ASEAN does not have much time to implement a single market in the economy, so it would be very interesting to find evidence of whether the capital markets in the ASEAN region, especially those of Indonesia, Malaysia, the Philippines, Singapore and Thailand, deserve to be integrated or remain segmented. Furthermore, it should also be established what kind of integration is occurring: whether a capital market only affects other capital markets, is only influenced by other capital markets, or both affects and is influenced by other capital markets in the ASEAN region. This study compares forecasts of the Indonesian share price index (IHSG) with those of neighboring ASEAN countries, both developed and developing, namely Malaysia (KLSE), Singapore (SGE), Thailand (SETI) and the Philippines (PSE), to find out which country's stock index is the most dominant and influential. These countries are the founders of ASEAN and owners of share price indices with close relations to Indonesia in terms of trade, especially exports and imports. Stock price modeling in this research uses multivariate time series analysis, namely VAR (Vector Autoregressive) and VECM (Vector Error Correction) models. VAR and VECM models not only forecast more than one variable but can also reveal the interrelations between variables. If the white noise assumption is not met in the VAR modeling, the cause can be assumed to be an outlier.
With this modeling it will be possible to know the pattern of relationship

  9. Attenuation correction of myocardial SPECT images with X-ray CT. Effects of registration errors between X-ray CT and SPECT

    International Nuclear Information System (INIS)

    Takahashi, Yasuyuki; Murase, Kenya; Mochizuki, Teruhito; Motomura, Nobutoku

    2002-01-01

    Attenuation correction with an X-ray CT image is a new method to correct attenuation in SPECT imaging, but the effect of registration errors between the CT and SPECT images is unclear. In this study, we investigated the effects of registration errors on myocardial SPECT, analyzing data from a phantom and a human volunteer. Registration (fusion) of the X-ray CT and SPECT images was done with standard packaged software in a three-dimensional fashion, using linked transaxial, coronal and sagittal images. In the phantom study, an X-ray CT image was shifted 1 to 3 pixels along the x, y and z axes, and rotated 6 degrees clockwise. Attenuation correction maps generated from each misaligned X-ray CT image were used to reconstruct misaligned SPECT images of the phantom filled with Tl-201. In a human volunteer, X-ray CT was acquired under different conditions (during inspiration vs. expiration). CT values were converted to attenuation coefficients using a straight line: an attenuation coefficient of 0/cm in air (CT value = -1,000 HU) and 0.150/cm in water (CT value = 0 HU). For comparison, attenuation correction with transmission CT (TCT) data and an external γ-ray source (Tc-99m) was also applied to reconstruct SPECT images. Simulated breast attenuation with a breast attachment and inferior wall attenuation were properly corrected by means of the attenuation correction map generated from X-ray CT. As the pixel shift increased, deviation of the SPECT images increased in the misaligned images in the phantom study. In the human study, SPECT images were affected by the scan conditions of the X-ray CT. Attenuation correction of myocardial SPECT with an X-ray CT image is a simple and potentially beneficial method for clinical use, but accurate registration of the X-ray CT to the SPECT image is essential for satisfactory attenuation correction. (author)
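The CT-to-attenuation conversion the abstract describes is a single straight line through the two stated anchor points. A direct transcription (the function name is ours; real systems often use a piecewise line above 0 HU for bone, which the abstract does not cover):

```python
def hu_to_mu(hu):
    """Linear map from CT value (HU) to attenuation coefficient (1/cm):
    mu = 0 in air (-1000 HU) and mu = 0.150 in water (0 HU), per the abstract."""
    return 0.150 * (1.0 + hu / 1000.0)

print(hu_to_mu(-1000.0), hu_to_mu(0.0))  # 0.0 0.15
```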

  10. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. 
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  11. Real-time correction of rigid body motion-induced phase errors for diffusion-weighted steady-state free precession imaging.

    Science.gov (United States)

    O'Halloran, Rafael; Aksoy, Murat; Aboussouan, Eric; Peterson, Eric; Van, Anh; Bammer, Roland

    2015-02-01

    Diffusion contrast in diffusion-weighted steady-state free precession magnetic resonance imaging (MRI) is generated through the constructive addition of signal from many coherence pathways. Motion-induced phase causes destructive interference which results in loss of signal magnitude and diffusion contrast. In this work, a three-dimensional (3D) navigator-based real-time correction of the rigid body motion-induced phase errors is developed for diffusion-weighted steady-state free precession MRI. The efficacy of the real-time prospective correction method in preserving phase coherence of the steady state is tested in 3D phantom experiments and 3D scans of healthy human subjects. In nearly all experiments, the signal magnitude in images obtained with proposed prospective correction was higher than the signal magnitude in images obtained with no correction. In the human subjects, the mean magnitude signal in the data was up to 30% higher with prospective motion correction than without. Prospective correction never resulted in a decrease in mean signal magnitude in either the data or in the images. The proposed prospective motion correction method is shown to preserve the phase coherence of the steady state in diffusion-weighted steady-state free precession MRI, thus mitigating signal magnitude losses that would confound the desired diffusion contrast. © 2014 Wiley Periodicals, Inc.
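The core mechanism (motion-induced phase makes coherence pathways add destructively; a navigator-based phase estimate removed before combination restores constructive addition) can be illustrated with a toy complex-signal model. This is not the authors' 3D navigator pipeline, only a sketch of why the correction recovers signal magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
n_shots = 64
# Each acquisition contributes unit-magnitude signal, but rigid-body motion
# imprints a random phase, so the shots interfere destructively when combined.
motion_phase = rng.uniform(-np.pi, np.pi, n_shots)
shots = np.exp(1j * motion_phase)

naive = abs(shots.mean())  # destructive interference: magnitude loss

# A navigator estimates each shot's phase; conjugate-multiplying removes it
# before combination, restoring coherent addition (ideal estimate assumed).
navigator_phase = motion_phase
corrected = abs((shots * np.exp(-1j * navigator_phase)).mean())

print(f"uncorrected magnitude {naive:.3f}, corrected magnitude {corrected:.3f}")
```

With a perfect phase estimate the corrected magnitude returns to 1.0, while the uncorrected average is far smaller, mirroring the signal-magnitude losses the paper mitigates.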

  12. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    Science.gov (United States)

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies, and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or the infection status by tissue interaction. Altogether, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). 
We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase

  13. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis on the residual number of critical errors in software. Conventional models lack this ability, and right now there are no methods that forecast critical errors. The new method will show that an estimate of the residual number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected-value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes: detection and correction.
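The abstract's recipe (a prediction model for total errors combined with a critical-error ratio) can be sketched with a standard reliability-growth model standing in for the paper's unspecified prediction model. The Goel-Okumoto mean-value function and every number below are our assumptions, purely for illustration:

```python
import math

# Assumed prediction model: Goel-Okumoto mean-value function
# m(t) = a * (1 - exp(-b * t)), with a = expected total number of errors
# and b = the estimated detection rate.
a, b = 120.0, 0.05
critical_ratio = 0.1  # hypothetical share of critical errors among all errors

def residual_critical(t):
    detected = a * (1.0 - math.exp(-b * t))
    # Residual critical errors = ratio * (total expected - detected so far).
    return critical_ratio * (a - detected)

print(f"residual critical errors at t=40: {residual_critical(40.0):.2f}")
```

The residual estimate starts at `critical_ratio * a` and decays toward zero as the detection process runs, which is the shape of prognosis the method aims to provide.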

  14. Simulación de un esquema de FEC (Forward Error Correction) en base al estándar DVB (Digital Video Broadcasting) (Simulation of an FEC Scheme Based on the DVB Standard)

    OpenAIRE

    Moscoso Alvarado, Jaime Armando; Medina Moreira, Washington Adolfo

    2009-01-01

    The wireless communication medium requires employing forward error correction on the transferred data, where Reed-Solomon and Viterbi coding techniques are generally used for performance and security reasons. In this paper we present a modular design of the encoding phase of these codes for concatenation, using Xilinx System Generator and oriented to implementation on field-programmable gate arrays (FPGAs). The work begins with a review of the code concept and the definition of the c...

  15. DESIGN OF A LOW-POWER AND HIGH THROUGHPUT ERROR DETECTION AND CORRECTION CIRCUIT USING THE 4T EX-OR METHOD

    Directory of Open Access Journals (Sweden)

    S. KAVITHA

    2017-08-01

    This paper describes an efficient implementation of an error correction circuit based on single error detection and correction with check-bit pre-computation. The core component, the proposed 4-bit EX-OR circuit, was designed using the CMOS cascade method. This paper presents a 4-input EX-OR gate that was developed from a 2-input EX-OR gate using the bit-slice method. The proposed architecture retains the modified Error Correction Code (ECC) circuit. The proposed 4-input EX-OR gate and its auxiliary components, such as AND, MUX and D flip-flop, were schematized using the DSCH tool, and the layouts were analysed using the BSIM4 analyser. LVS verification was performed on the modified ECC circuit at a CMOS 70 nm feature size and its corresponding voltage of 0.7 V. The modified ECC circuit's simulation results were analysed and compared with the performance of existing circuits in terms of propagation delay, power dissipation, area, latency, and throughput. The proposed ECC circuit showed improved performance over existing circuits, with low power dissipation (94.41%) and high throughput (95.20%).
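Single error detection and correction with pre-computed check bits is the Hamming-code scheme; the paper implements it as a CMOS EX-OR circuit, but the logic itself can be checked in software. The sketch below uses a systematic Hamming(7,4) code as a representative instance (the specific code word length and matrices are our choice, not taken from the paper):

```python
import numpy as np

# Systematic Hamming(7,4): 4 data bits followed by 3 pre-computed check bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data):
    # Check bits are XOR (mod-2) combinations of the data bits.
    return data @ G % 2

def correct(received):
    syndrome = H @ received % 2
    if syndrome.any():  # non-zero syndrome equals the H column at the error
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        received = received.copy()
        received[err] ^= 1
    return received

data = np.array([1, 0, 1, 1])
cw_err = encode(data).copy()
cw_err[2] ^= 1                 # single bit flip on the channel
print(correct(cw_err)[:4])     # data bits recovered: [1 0 1 1]
```

In hardware, each syndrome bit is exactly one EX-OR tree over a subset of the received bits, which is why the EX-OR gate dominates the cost of such circuits.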

  16. Autofocus Correction of Azimuth Phase Error and Residual Range Cell Migration in Spotlight SAR Polar Format Imagery

    OpenAIRE

    Mao, Xinhua; Zhu, Daiyin; Zhu, Zhaoda

    2012-01-01

    Synthetic aperture radar (SAR) images are often blurred by phase perturbations induced by uncompensated sensor motion and/or unknown propagation effects caused by turbulent media. To refocus the images, autofocus proves to be a useful post-processing technique for estimating and compensating the unknown phase errors. However, a severe drawback of conventional autofocus algorithms is that they are only capable of removing one-dimensional azimuth phase errors (APE). As the resolution be...

  17. CONVERTING RETRIEVED SPOKEN DOCUMENTS INTO TEXT USING AN AUTO ASSOCIATIVE NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2016-06-01

    This paper frames a novel methodology for spoken document information retrieval over spontaneous speech corpora and for converting the retrieved documents into the corresponding language text. The proposed work involves three major areas, namely spoken keyword detection, spoken document retrieval, and automatic speech recognition. The keyword spotting exploits the distribution-capturing capability of the Auto Associative Neural Network (AANN) for spoken keyword detection. It involves sliding a frame-based keyword template along the audio documents and using a confidence score, acquired from the normalized squared error of the AANN, to search for a match. This work presents a new spoken keyword spotting algorithm. Based on the matches, the spoken documents are retrieved and clustered together. In the speech recognition step, the retrieved documents are converted into the corresponding language text using the AANN classifier. The experiments are conducted using a Dravidian language database, and the results suggest that the proposed method is promising for retrieving the documents relevant to a spoken query and transforming them into the corresponding language text.

  18. Introducing Spoken Dialogue Systems into Intelligent Environments

    CERN Document Server

    Heinroth, Tobias

    2013-01-01

    Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager(ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...

  19. Cost-effectiveness of screening and correcting refractive errors in school children in Africa, Asia, America and Europe.

    NARCIS (Netherlands)

    Baltussen, R.M.P.M.; Naus, J.; Limburg, H.

    2009-01-01

    OBJECTIVE: To estimate the costs and effects of alternative strategies for annual screening of school children for refractive errors, and the provision of spectacles, in different WHO sub-regions in Africa, Asia, America and Europe. METHODS: We developed a mathematical simulation model for

  20. Correcting for binomial measurement error in predictors in regression with application to analysis of DNA methylation rates by bisulfite sequencing.

    Science.gov (United States)

    Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal

    2016-09-30

    Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
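The problem the paper corrects can be reproduced in a few lines: when the true proportion enters the regression only through a binomial estimate at finite, varying depth, the naive slope is attenuated toward zero. This simulation illustrates the bias, not the paper's SIMEX or regression-calibration fix, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
p = rng.uniform(0.1, 0.9, n)        # true methylation proportions
m = rng.integers(10, 50, n)         # sequencing depth varies per observation
p_obs = rng.binomial(m, p) / m      # binomial, heteroscedastic measurement error
y = 2.0 + 3.0 * p + rng.normal(scale=0.5, size=n)  # true slope is 3

slope_true = np.polyfit(p, y, 1)[0]
slope_naive = np.polyfit(p_obs, y, 1)[0]  # attenuated toward zero
print(f"slope with true predictor {slope_true:.2f}, "
      f"with error-prone predictor {slope_naive:.2f}")
```

The attenuation factor is roughly var(p) / (var(p) + average binomial error variance), so shallower sequencing (smaller m) makes the naive estimate worse, which is what motivates depth-aware corrections.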