WorldWideScience

Sample records for error correction protocol

  1. Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol

    International Nuclear Information System (INIS)

    Horoshko, D B

    2007-01-01

    The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)

  2. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing: it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be reduced significantly, by a factor that increases with the code distance.
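
    As a rough illustration of the estimation step described above, the sketch below fits a Gaussian process to simulated per-round error-rate observations and extrapolates it to upcoming rounds. It assumes scikit-learn's GaussianProcessRegressor; the kernel and the synthetic data are illustrative choices, not the authors' implementation.

```python
# Sketch: estimate a slowly drifting physical error rate from past syndrome
# data with Gaussian process regression, then extrapolate to the next rounds.
# Assumes scikit-learn; kernel choice and synthetic data are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(200)                                # error-correction rounds
true_rate = 0.01 + 0.004 * np.sin(t / 40.0)       # slowly drifting error rate
detections = rng.binomial(n=1000, p=true_rate)    # detections per round (1000 checks each)
observed_rate = detections / 1000.0

kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t.reshape(-1, 1), observed_rate)

t_future = np.arange(200, 220).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
print("predicted error rate for upcoming rounds:", np.round(mean, 4))
```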

  3. Unitary Application of the Quantum Error Correction Codes

    International Nuclear Information System (INIS)

    You Bo; Xu Ke; Wu Xiaohua

    2012-01-01

    To apply the perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: encoding, the noisy channel, the error-correction operation, and decoding. In the present work, we show that this protocol can be simplified: the error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit that can correct arbitrary single-qubit errors.

  4. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  5. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
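
    The decoupling argument hinges on encrypting the error-correction communication with a pre-shared secret. The sketch below illustrates only that encrypted-syndrome idea on block parities with a one-time pad; it is not the interactive Cascade protocol or the security proof described in the record above.

```python
# Sketch: one-time-pad encryption of error-correction parities (syndrome bits).
# This illustrates the encrypted-syndrome idea only; it is not the interactive
# Cascade protocol or the security proof described in the abstract.
import secrets

def parities(bits, block=8):
    """Parity of each block of `block` bits."""
    return [sum(bits[i:i + block]) % 2 for i in range(0, len(bits), block)]

n = 64
alice = [secrets.randbelow(2) for _ in range(n)]
bob = alice.copy()
bob[13] ^= 1                                          # one transmission error

pad = [secrets.randbelow(2) for _ in range(n // 8)]   # pre-shared one-time-pad bits

# Alice announces encrypted parities; the public channel reveals only pad-masked bits.
announced = [p ^ k for p, k in zip(parities(alice), pad)]

# Bob decrypts with the shared pad and flags blocks whose parity disagrees.
bad_blocks = [i for i, (c, k, p) in enumerate(zip(announced, pad, parities(bob)))
              if (c ^ k) != p]
print("blocks flagged for further bisection:", bad_blocks)   # -> [1]
```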

  6. A modified error correction protocol for CCITT signalling system no. 7 on satellite links

    Science.gov (United States)

    Kreuer, Dieter; Quernheim, Ulrich

    1991-10-01

    Comité Consultatif International Télégraphique et Téléphonique (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. However, not having been originally designed for satellite links, the so-called Preventive Cyclic Retransmission (PCR) method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed the Fix Delay Retransmission (FDR) method, is suggested which performs better at high loads, thus making more efficient use of the limited carrier capacity. Both the PCR and FDR methods are investigated by means of simulation, and results concerning throughput, queueing delay, and system delay are presented. The FDR method exhibits higher capacity and shorter delay than the PCR method.

  7. Correct mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme on ping-pong protocol

    OpenAIRE

    Zhang, Zhanjun

    2004-01-01

    Comment: The incorrect mutual information, quantum bit error rate, and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL 90 (2003) 157901] on the ping-pong protocol are pointed out and corrected.

  8. Quantum error correction of continuous-variable states against Gaussian noise

    Energy Technology Data Exchange (ETDEWEB)

    Ralph, T. C. [Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072 (Australia)

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  9. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  10. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key rate model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
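
    A commonly used form of the secret key rate model discussed above is K = (1 - WER)(beta * I_AB - chi_BE), with beta the reconciliation efficiency. The sketch below simply evaluates that expression to show how an efficiency greater than unity can make the bracket positive even when Eve's information bound exceeds the mutual information; the exact model and numbers used in the paper may differ.

```python
# Sketch: the fixed-rate secret key model in the commonly used form
#   K = (1 - WER) * (beta * I_AB - chi_BE).
# All numbers are illustrative; the point is only that beta > 1 can make the
# bracket positive even when chi_BE >= I_AB (an entanglement-breaking regime).
def key_rate(i_ab, chi_be, beta, wer):
    return (1.0 - wer) * (beta * i_ab - chi_be)

i_ab, chi_be = 0.05, 0.052          # bits/symbol; Eve's bound exceeds the mutual information
print(key_rate(i_ab, chi_be, beta=0.95, wer=0.0))   # negative: no key, as expected
print(key_rate(i_ab, chi_be, beta=1.06, wer=0.3))   # positive: the inconsistency flagged above
```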

  11. Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction

    Directory of Open Access Journals (Sweden)

    Chi-Huang Shih

    2012-01-01

    Full Text Available With the wide deployment of Internet Protocol (IP) infrastructure and rapid development of digital technologies, Internet Protocol Television (IPTV) has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC) to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure secure and reliable transmission of IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC, including the FEC-encoded redundancies and the limitation of error correction capacity, to protect the multimedia packets against malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme achieves performance similar to that of the joint encryption and FEC scheme.
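
    A minimal sketch of the FEC building block such a system relies on is given below: a single XOR parity packet per block lets the receiver rebuild any one lost packet. The secure-FEC keying of redundancies described in the paper is not reproduced.

```python
# Sketch: simplest forward error correction for packet loss - one XOR parity
# packet per block lets the receiver rebuild any single lost packet.
# Illustrates the FEC layer only, not the paper's secure-FEC scheme.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

block = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]
parity = reduce(xor_bytes, block)                  # FEC redundancy packet

received = [block[0], None, block[2], block[3]]    # packet 1 lost in transit
recovered = reduce(xor_bytes, [p for p in received if p is not None] + [parity])
print(recovered)                                   # -> b'pkt1....'
```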

  12. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
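
    The core distinction can already be seen on a single qubit: a systematic rotation by ε applied n times gives an error probability of roughly sin²(nε/2), with amplitudes adding, while the Pauli-twirled version gives roughly n·sin²(ε/2), with probabilities adding. The sketch below illustrates only that scaling difference, not the repetition-code analysis of the paper.

```python
# Sketch: accumulation of a coherent over-rotation versus its Pauli (twirled)
# approximation on a single qubit. Amplitudes add coherently (quadratic in n),
# probabilities add incoherently (linear in n). Illustrative only; the paper's
# analysis concerns logical errors of the repetition code.
import numpy as np

eps = 0.01                                      # small systematic rotation angle per cycle
n = np.arange(1, 201)

coherent_err = np.sin(n * eps / 2.0) ** 2       # n coherent applications of R_x(eps)
pauli_err = n * np.sin(eps / 2.0) ** 2          # same per-cycle error, twirled to a bit flip

print(coherent_err[99] / pauli_err[99])         # roughly 90x larger after 100 cycles
```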

  13. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system would be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures.

  14. Topics in quantum cryptography, quantum error correction, and channel simulation

    Science.gov (United States)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel

  15. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
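
    A small simulation (not the authors' correction method) of why such a correction matters: adding classical measurement error to a biomarker pulls the empirical AUC toward 0.5.

```python
# Sketch: empirical AUC via the Mann-Whitney statistic, and the attenuation of
# AUC when the biomarker is measured with error. Illustrates the bias the paper
# corrects; it is not the authors' correction method.
import numpy as np

def auc(cases, controls):
    """Empirical AUC: P(case > control) + 0.5 * P(tie), over all case-control pairs."""
    diff = cases[:, None] - controls[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, 2000)                  # error-free biomarker in controls
cases = rng.normal(1.0, 1.0, 2000)                     # error-free biomarker in cases
print(auc(cases, controls))                            # ~0.76 without measurement error

noisy_cases = cases + rng.normal(0.0, 1.0, 2000)       # classical measurement error added
noisy_controls = controls + rng.normal(0.0, 1.0, 2000)
print(auc(noisy_cases, noisy_controls))                # ~0.69: attenuated toward 0.5
```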

  16. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
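
    A minimal sketch of the packet-combining idea underlying these schemes: two erroneous copies of the same packet are XORed to locate the disagreeing bit positions, and candidate single-bit corrections are checked against an error-detecting code. The CRC check below is an illustrative assumption; the PRPC/MPC refinements of the letter are not reproduced.

```python
# Sketch: basic packet combining - XOR two erroneous copies to find candidate
# error positions, then test single-bit corrections against a checksum.
# CRC check and packet contents are illustrative assumptions only.
import zlib

def crc(bits):
    return zlib.crc32(bytes(bits))

original = [1, 0, 1, 1, 0, 0, 1, 0] * 4
checksum = crc(original)                    # assumed to be sent along with the packet

copy1 = original.copy(); copy1[5] ^= 1      # first copy corrupted at bit 5
copy2 = original.copy(); copy2[19] ^= 1     # retransmitted copy corrupted at bit 19

suspects = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
for i in suspects:                          # try flipping each disagreeing bit in copy1
    trial = copy1.copy(); trial[i] ^= 1
    if crc(trial) == checksum:
        print("packet recovered from erroneous copies by flipping bit", i)
        break
```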

  17. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
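
    The canonical first example in introductions like this one is the three-qubit bit-flip code. A classical simulation of its syndrome lookup is sketched below; it handles single bit-flip errors only, with no phase errors or fault tolerance.

```python
# Sketch: the three-qubit bit-flip code, the standard first example in QEC
# introductions, simulated classically. Encode one bit as three, evaluate the
# two parity checks (Z1Z2 and Z2Z3 in the quantum picture), look up the fix.
SYNDROME_TABLE = {
    (0, 0): None,   # no error
    (1, 0): 0,      # flip on qubit 0
    (1, 1): 1,      # flip on qubit 1
    (0, 1): 2,      # flip on qubit 2
}

def encode(bit):
    return [bit, bit, bit]

def correct(codeword):
    syndrome = (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])
    flip = SYNDROME_TABLE[syndrome]
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

word = encode(1)
word[2] ^= 1                      # a single bit-flip error on qubit 2
print(correct(word))              # -> [1, 1, 1]
```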

  18. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  19. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  20. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  1. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  2. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  3. A two-shift optimisation of the 'no action level' setup correction protocol

    International Nuclear Information System (INIS)

    Fox, C.; Fisher, R.

    2004-01-01

    Full text: As electronic portal imaging equipment becomes more common, many radiotherapy centres now have the ability to collect patient treatment position deviation values. One commonly used off-line set-up correction protocol for calculating patient set-up corrections is the 'no action level' (NAL) protocol. This paper proposes a two-shift approach and calculates the number of images required for minimum systematic error. Patient data are used in a simulation to confirm this approach. Patient treatment position deviations were available for all treatment sessions for a large group of patients undergoing radiation therapy for prostate cancer. Thirty of these patients were selected. The patient position at treatment and all isocentre shifts made were recorded in the treatment notes. These were used to simulate the effect of the NAL protocol using a range of image numbers as the basis of the set-up correction. As Bortfeld et al. noted, there is an error minimum that can be observed, beyond which the mean radial systematic set-up error increases slowly with an increase in the number of images used. An enhancement to the NAL was proposed in which the patient's position is corrected on two occasions; once early in the treatment schedule, and again after more images have been collected. The expectation value of the set-up error for this two-shift NAL was found and minimised. The optimum staging for the two-shift NAL for the prostate patients was to image for a total of 9 sessions and to shift the patient after 3 sessions and 9 sessions. The thirty patients showed an uncorrected mean radial set-up error of 0.65 cm. In this simulation this was corrected to 0.26 cm by application of the NAL using 5 images and to 0.17 cm using the two-shift NAL with shifts after three and nine images. In situations where staff can manage the workload of collecting and analysing portal images for nine sessions for each patient, the two-shift NAL will result in a high level of set-up accuracy.
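
    A minimal sketch of the correction rule being simulated: the systematic set-up error is estimated as the mean of the displacements imaged so far and applied as a couch shift, after session 5 for the classic NAL or after sessions 3 and 9 for the two-shift variant. The noise model and numbers below are illustrative, not the study's patient data.

```python
# Sketch: the 'no action level' (NAL) set-up correction rule and its two-shift
# variant, simulated for one patient. All numbers and the noise model are
# illustrative placeholders, not the study's patient data.
import numpy as np

rng = np.random.default_rng(7)
n_fractions = 30
systematic = rng.normal(0.0, 0.4, size=2)                            # cm, per-patient systematic error
displacements = systematic + rng.normal(0.0, 0.2, (n_fractions, 2))  # measured daily set-up error

def mean_radial_error(shift_sessions):
    """Mean residual set-up error over the course, shifting at the given sessions."""
    applied = np.zeros(2)
    errors = []
    for k in range(n_fractions):
        if k in shift_sessions:                    # shift by the mean of all images so far
            applied = displacements[:k].mean(axis=0)
        errors.append(np.linalg.norm(displacements[k] - applied))
    return np.mean(errors)

print(mean_radial_error(set()))                    # no correction
print(mean_radial_error({5}))                      # classic NAL: one shift after 5 images
print(mean_radial_error({3, 9}))                   # two-shift NAL: shifts after 3 and 9 images
```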

  4. Effectiveness of couch height-based patient set-up and an off-line correction protocol in prostate cancer radiotherapy

    International Nuclear Information System (INIS)

    Lin, Emile N.J.Th. van; Nijenhuis, Edwin; Huizenga, Henk; Vight, Lisette van der; Visser, Andries

    2001-01-01

    Purpose: To investigate set-up improvement caused by applying a couch height-based patient set-up method in combination with a technologist-driven off-line correction protocol in nonimmobilized radiotherapy of prostate patients. Methods and Materials: A three-dimensional shrinking action level correction protocol is applied in two consecutive patient cohorts with different set-up methods: the traditional 'laser set-up' group (n=43) and the 'couch height set-up' group (n=112). For all directions, left-right, ventro-dorsal, and cranio-caudal, random and systematic set-up deviations were measured. Results: The couch height set-up method improves the patient positioning compared to the laser set-up method. Without application of the correction protocol, both systematic and random errors reduced to 2.2-2.4 mm (1 SD) and 1.7-2.2 mm (1 SD), respectively. By using the correction protocol, systematic errors reduced further to 1.3-1.6 mm (1 SD). One-dimensional deviations were within 5 mm for >90% of the measured fractions. The required number of corrections per patient in the off-line correction protocol was reduced significantly during the course of treatment from 1.1 to 0.6 by the couch height set-up method. The treatment time was not prolonged by application of the correction protocol. Conclusions: The couch height set-up method improves the set-up significantly, especially in the ventro-dorsal direction. Combination of this set-up method with an off-line correction strategy, executed by technologists, reduces the number of set-up corrections required

  5. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes) for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  7. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  8. Corrections to "Connectivity-Based Reliable Multicast MAC Protocol for IEEE 802.11 Wireless LANs"

    Directory of Open Access Journals (Sweden)

    Choi Woo-Yong

    2010-01-01

    Full Text Available We have found errors in the throughput formulae presented in our paper "Connectivity-based reliable multicast MAC protocol for IEEE 802.11 wireless LANs". We provide the corrected formulae and numerical results.

  9. Haptic Data Processing for Teleoperation Systems: Prediction, Compression and Error Correction

    OpenAIRE

    Lee, Jae-young

    2013-01-01

    This thesis explores haptic data processing methods for teleoperation systems, including prediction, compression, and error correction. In the proposed haptic data prediction method, unreliable network conditions, such as time-varying delay and packet loss, are detected by a transport layer protocol. Given the information from the transport layer, a Bayesian approach is introduced to predict position and force data in haptic teleoperation systems. Stability of the proposed method within stoch...

  10. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  11. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  12. Opportunistic Error Correction for WLAN Applications

    NARCIS (Netherlands)

    Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.

    2008-01-01

    The current error correction layer of IEEE 802.11a WLAN is designed for worst case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution adaptive ADC. The key part in the new proposed system is that only

  13. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  14. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  15. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article. Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  16. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment or from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction section is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  17. New decoding methods of interleaved burst error-correcting codes

    Science.gov (United States)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of the subcodes, which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.

  18. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations made by the operational gauge and the pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
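
    The correction described amounts to fitting a power-law relation between the horizontal-gauge catch and the operational-versus-pit-gauge difference. A minimal sketch of such a fit, on synthetic placeholder numbers rather than the intercomparison data, is given below.

```python
# Sketch: fitting a power-function relation of the kind reported above,
#   diff = a * horizontal_catch**b,
# by linear regression in log-log space. Data are synthetic placeholders,
# not the 29,000 intercomparison events from the 30 evaluation stations.
import numpy as np

horizontal = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # mm caught by the horizontal gauge
diff = np.array([0.12, 0.21, 0.38, 0.70, 1.30])        # |operational - pit| gauge difference, mm

b, log_a = np.polyfit(np.log(horizontal), np.log(diff), 1)
a = np.exp(log_a)
print(f"diff ~ {a:.3f} * H^{b:.3f}")

# Correcting an operational observation, given a parallel horizontal-gauge catch of 3.0 mm:
corrected = 5.6 + a * 3.0**b        # operational amount + estimated wind-induced loss
print(round(corrected, 2))
```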

  19. Errors in preparation and administration of parenteral drugs in neonatology: evaluation and corrective actions.

    Science.gov (United States)

    Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra

    2016-12-01

    Iatrogenic medication risk is largely unevaluated in neonatology. Objective: Assessment of errors that occurred during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summaries of product characteristics (RCP) and the bibliography. One hundred preparations of 13 different drugs were observed. 85 errors were detected during the preparation and administration steps. These errors were divided into preparation errors (59% of cases), such as changing the dilution protocol (32%) and use of the wrong solvent (11%), and administration errors (41% of cases), such as errors in the timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions have been implemented through the establishment of a quality assurance system, which consisted of the development of injectable drug preparation procedures, the introduction of a labeling system and staff training.

  20. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  1. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK, the flap is created with a microkeratome; in LASEK, with ethanol; and in epi-LASIK, the ultra-thin flap is created mechanically.

  2. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  3. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction lookup table incorporated into a waveform phase generator.
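
    A minimal numpy sketch of the pre-distortion idea: given an estimate of the time-dependent phase error, for example from a droop model or a measured lookup table, the complementary phase is applied to the waveform so that the downstream error cancels. The exponential droop model below is an illustrative assumption.

```python
# Sketch: pre-distorting a radar waveform with the complement of a known
# time-dependent phase error so that the downstream error cancels.
# The exponential "power droop" phase model is an illustrative assumption,
# standing in for a measured phase-error lookup table.
import numpy as np

fs = 1.0e6                                          # sample rate (Hz)
t = np.arange(0, 1e-3, 1.0 / fs)                    # 1 ms waveform
chirp = np.exp(1j * np.pi * 2.0e8 * t**2)           # linear FM waveform

phase_error = 0.4 * (1.0 - np.exp(-t / 3e-4))       # radians, e.g. amplifier droop
predistorted = chirp * np.exp(-1j * phase_error)    # complementary distortion

received = predistorted * np.exp(1j * phase_error)  # downstream stage adds the error back
print(np.max(np.abs(np.angle(received * np.conj(chirp)))))   # ~0: residual phase error
```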

  4. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  5. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract. Background: The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design: Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion: The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
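
    One of the standard correction methods such a review would cover is regression calibration: a calibration substudy with a reference measurement gives the attenuation factor, and the naive diet-disease regression coefficient is divided by it. The sketch below is a generic illustration of that idea, not a method prescribed by the P3G protocol.

```python
# Sketch: regression calibration, one standard correction for measurement error
# in dietary exposure. A calibration substudy with a reference measurement gives
# the attenuation (regression dilution) factor lambda; the naive exposure-outcome
# coefficient is divided by lambda. Generic illustration, not the P3G protocol.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
true_intake = rng.normal(200.0, 40.0, n)                 # e.g. nutrient intake, g/day
ffq = true_intake + rng.normal(0.0, 40.0, n)             # error-prone questionnaire measure
outcome = 0.01 * true_intake + rng.normal(0.0, 1.0, n)   # continuous health outcome

naive_beta = np.polyfit(ffq, outcome, 1)[0]              # attenuated association

# Calibration substudy: reference measure (here the simulated truth stands in for
# a biomarker or weighed food record) on a subsample
sub = rng.choice(n, 500, replace=False)
lam = np.polyfit(ffq[sub], true_intake[sub], 1)[0]       # attenuation factor lambda

print(naive_beta, naive_beta / lam)                      # ~0.005 naive vs ~0.01 corrected
```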

  6. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  7. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  8. Digital correction of magnification in pelvic x rays for preoperative planning of hip joint replacements: Theoretical development and clinical results of a new protocol

    International Nuclear Information System (INIS)

    The, B.; Diercks, R.L.; Stewart, R.E.; Ooijen, P.M.A. van; Horn, J.R. van

    2005-01-01

    The introduction of digital radiological facilities leads to the necessity of digital preoperative planning, which is an essential part of joint replacement surgery. To avoid errors in the preparation and execution of hip surgery, reliable correction of the magnification of the projected hip is a prerequisite. So far, no validated method exists to accomplish this. We present validated geometrical models of the x-ray projection of spheres, relevant for the calibration procedure to correct for the radiographic magnification. With the help of these models, a new calibration protocol was developed. The validity and precision of this procedure were determined in clinical practice. Magnification factors could be predicted with a maximal margin of error of 1.5%. The new calibration protocol is valid and reliable. The clinical tests revealed that correction of magnification has a 95% margin of error of -3% to +3%. Future research might clarify whether a strict calibration protocol, as presented in this study, results in more accurate preoperative planning of hip joint replacements.
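
    In its simplest form, sphere-based calibration reduces to scaling measured lengths by the ratio of the sphere's projected diameter to its true diameter. The short Python sketch below illustrates that arithmetic with made-up numbers; it is not the validated protocol from the study, which also models the sphere's position relative to the hip.

    ```python
    def magnification_factor(projected_diameter_mm: float, true_diameter_mm: float) -> float:
        """Radiographic magnification of a calibration sphere placed at the level of the hip.
        Geometrically this equals source-to-detector distance / source-to-object distance."""
        return projected_diameter_mm / true_diameter_mm

    def correct_measurement(projected_length_mm: float, magnification: float) -> float:
        """Scale a length measured on the digital radiograph back to true anatomical size."""
        return projected_length_mm / magnification

    # Example: a 30 mm calibration sphere projects to 33.6 mm on the detector
    m = magnification_factor(33.6, 30.0)     # ~1.12, i.e. 12% magnification
    print(correct_measurement(56.0, m))      # a 56 mm projected length -> ~50 mm true size
    ```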

  9. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  10. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    Full Text Available The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without giving cues) is one of the examples. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct, and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to noun plurality, subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% of identified speech errors, as many as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.

  11. Reed-Solomon error-correction as a software patch mechanism.

    Energy Technology Data Exchange (ETDEWEB)

    Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
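
    A rough sketch of the idea, assuming the third-party Python package reedsolo (not a tool named in the report): the server ships only the Reed-Solomon parity of the updated block, and the client decodes its old block against that parity, which both validates the result and "corrects" the old block into the new one, provided the two differ in few enough bytes. The decode return signature follows reedsolo >= 1.0 and may differ in other versions.

    ```python
    # Sketch only; assumes `pip install reedsolo` and equal-length old/new blocks.
    from reedsolo import RSCodec

    NSYM = 32                      # parity symbols per block -> corrects up to 16 changed bytes
    rsc = RSCodec(NSYM)

    def make_patch(new_block: bytes) -> bytes:
        """Upstream side: keep only the Reed-Solomon parity of the updated block."""
        return bytes(rsc.encode(new_block))[-NSYM:]

    def apply_patch(old_block: bytes, parity: bytes) -> bytes:
        """Installed side: treat the old block as a corrupted copy of the new one and let
        RS decoding 'correct' it, provided old and new differ in at most NSYM/2 bytes."""
        corrected, _, _ = rsc.decode(bytearray(old_block) + bytearray(parity))
        return bytes(corrected)

    new = b"def greet():\n    return 'hello, world!'\n" * 2
    old = new.replace(b"world", b"wrold")          # a few bytes differ between versions
    assert apply_patch(old, make_patch(new)) == new
    ```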

  12. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  13. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  14. Joint Schemes for Physical Layer Security and Error Correction

    Science.gov (United States)

    Adamo, Oluwayomi

    2011-01-01

    The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…

  15. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
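
    A toy illustration of the frequency-threshold idea underlying KEC and ET (not the published algorithms themselves): count k-mers across all reads and flag read positions that are covered only by rare k-mers, which are more likely to be sequencing errors than true variants.

    ```python
    from collections import Counter

    def kmers(read: str, k: int):
        """Yield all k-mers of a read."""
        for i in range(len(read) - k + 1):
            yield read[i:i + k]

    def flag_suspect_positions(reads, k=8, min_count=3):
        """Count k-mers across all reads and flag positions covered only by rare k-mers,
        a much-simplified version of the frequency-threshold idea behind KEC/ET."""
        counts = Counter(km for r in reads for km in kmers(r, k))
        suspects = {}
        for ridx, read in enumerate(reads):
            bad = [i for i, km in enumerate(kmers(read, k)) if counts[km] < min_count]
            if bad:
                suspects[ridx] = bad
        return suspects

    reads = ["ACGTACGTACGTACGT"] * 20 + ["ACGTACGTACCTACGT"]   # last read carries one substitution
    print(flag_suspect_positions(reads))   # only the erroneous read is flagged
    ```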

  16. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes by proposing a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  17. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  18. Latency correction of event-related potentials between different experimental protocols

    Science.gov (United States)

    Iturrate, I.; Chavarriaga, R.; Montesano, L.; Minguez, J.; Millán, JdR

    2014-06-01

    Objective. A fundamental issue in EEG event-related potentials (ERPs) studies is the amount of data required to have an accurate ERP model. This also impacts the time required to train a classifier for a brain-computer interface (BCI). This issue is mainly due to the poor signal-to-noise ratio and the large fluctuations of the EEG caused by several sources of variability. One of these sources is directly related to the experimental protocol or application designed, and may affect the amplitude or latency of ERPs. This usually prevents BCI classifiers from generalizing among different experimental protocols. In this paper, we analyze the effect of the amplitude and the latency variations among different experimental protocols based on the same type of ERP. Approach. We present a method to analyze and compensate for the latency variations in BCI applications. The algorithm has been tested on two widely used ERPs (P300 and observation error potentials), in three experimental protocols in each case. We report the ERP analysis and single-trial classification. Main results. The results obtained show that the designed experimental protocols significantly affect the latency of the recorded potentials but not the amplitudes. Significance. These results show how the use of latency-corrected data can be used to generalize the BCIs, reducing the calibration time when facing a new experimental protocol.
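
    One simple way to estimate and remove such latency differences, sketched below in Python with synthetic waveforms, is to locate the peak of the cross-correlation between an ERP template from one protocol and the ERP recorded in another, then shift the data by that lag before classifier training. This is only an illustration of the principle, not the algorithm of the paper.

    ```python
    import numpy as np

    def latency_shift(template: np.ndarray, erp: np.ndarray, fs: float) -> float:
        """Estimate the latency difference (seconds) between two ERP waveforms
        from the peak of their cross-correlation."""
        xcorr = np.correlate(erp - erp.mean(), template - template.mean(), mode="full")
        lag = np.argmax(xcorr) - (len(template) - 1)
        return lag / fs

    fs = 256.0
    t = np.arange(0, 1.0, 1.0 / fs)
    p300_template = np.exp(-((t - 0.30) ** 2) / (2 * 0.05 ** 2))   # synthetic peak at 300 ms
    p300_other    = np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))   # same shape, 50 ms later

    shift = latency_shift(p300_template, p300_other, fs)
    print(f"estimated latency shift: {shift * 1000:.0f} ms")       # approximately +50 ms

    # Circular shift is acceptable for this toy example; real epochs would be re-windowed.
    corrected = np.roll(p300_other, -int(round(shift * fs)))
    ```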

  19. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors

  20. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  1. Entanglement renormalization, quantum error correction, and bulk causality

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2017-04-07

    Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error-correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error-correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.

  2. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new, effective method for compensating such distortion based on the application of Volterra filtering. Considering an a priori error model of the ADC allows an efficient inverse Volterra model to be found for error correction. The efficiency of the proposed method is demonstrated on experimental results.
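
    A memoryless illustration of the inverse-Volterra principle in Python: given an assumed a priori polynomial error model of the ADC, the correction re-estimates the nonlinear terms from the measured samples and subtracts them, leaving a much smaller residual. The paper's method uses full Volterra kernels with memory; the coefficients below are invented for the example.

    ```python
    import numpy as np

    # Assumed (a priori) error model of the ADC: weak static 2nd/3rd-order distortion
    a2, a3 = 0.02, 0.005

    def adc(x):
        """Distorted ADC output according to the assumed error model."""
        return x + a2 * x**2 + a3 * x**3

    def volterra_correct(y):
        """First-order inverse Volterra correction: re-estimate the nonlinear terms
        from the measured samples and subtract them."""
        return y - a2 * y**2 - a3 * y**3

    t = np.linspace(0, 1, 1000, endpoint=False)
    x = 0.9 * np.sin(2 * np.pi * 5 * t)
    y = adc(x)

    print("RMS error before correction:", np.sqrt(np.mean((y - x) ** 2)))
    print("RMS error after  correction:", np.sqrt(np.mean((volterra_correct(y) - x) ** 2)))
    ```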

  3. Crosstalk error correction through dynamical decoupling of single-qubit gates in capacitively coupled singlet-triplet semiconductor spin qubits

    Science.gov (United States)

    Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.

    2018-01-01

    In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.

  4. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
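
    The parity-based detection mechanism can be illustrated with the smallest binomial code (|0_L> = (|0> + |4>)/sqrt(2), |1_L> = |2>), which is related to, but not the same as, the χ(2) codes proposed in the paper: both logical states have even photon-number parity, and a single photon loss flips the parity, so a parity measurement reveals the error without reading out the logical information. A small NumPy sketch:

    ```python
    import numpy as np

    dim = 6                                   # truncated Fock space

    def fock(n):
        v = np.zeros(dim)
        v[n] = 1.0
        return v

    # Smallest binomial code (protects against a single photon loss)
    zero_L = (fock(0) + fock(4)) / np.sqrt(2)
    one_L = fock(2)

    # Photon-number parity operator (-1)^n and annihilation operator a
    parity = np.diag([(-1) ** n for n in range(dim)])
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)

    for name, psi in [("|0_L>", zero_L), ("|1_L>", one_L)]:
        print(name, "parity before loss:", psi @ parity @ psi)     # +1 (even)
        lost = a @ psi
        lost /= np.linalg.norm(lost)
        print(name, "parity after one photon loss:", lost @ parity @ lost)   # -1 (odd)
    ```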

  5. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  6. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  7. A correctness proof of the bakery protocol in μCRL

    NARCIS (Netherlands)

    J.F. Groote (Jan Friso); H.P. Korver

    1994-01-01

    A specification of a bakery protocol is given in μCRL. We provide a simple correctness criterion for the protocol. Then the protocol is proven correct using a proof system that has been developed for μCRL. The proof primarily consists of algebraic manipulations based on

  8. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
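
    The recalibration idea can be caricatured in a few lines of Python: treat the stream of syndrome outcomes as noisy Bernoulli samples of the current physical error rate and track it with an exponential moving average, with no extra calibration experiments. This is far cruder than the estimator described above, and the drift model is invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def estimate_drift(syndromes, alpha=0.01):
        """Track a slowly drifting error rate from the error-correction syndrome stream
        using an exponential moving average (no interruption for recalibration)."""
        estimate, history = 0.5, []
        for s in syndromes:                 # s = 1 if a syndrome fired this round
            estimate = (1 - alpha) * estimate + alpha * s
            history.append(estimate)
        return np.array(history)

    # Hypothetical drift: the underlying error probability wanders from 1% to 5%
    rounds = 20_000
    p_true = np.linspace(0.01, 0.05, rounds)
    syndromes = rng.random(rounds) < p_true

    p_hat = estimate_drift(syndromes)
    print("final true rate :", p_true[-1])
    print("final estimate  :", round(float(p_hat[-1]), 3))   # tracks the drift, up to noise
    ```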

  9. Energy efficiency of error correction on wireless systems

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    1999-01-01

    Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.

  10. Software for Correcting the Dynamic Error of Force Transducers

    Directory of Open Access Journals (Sweden)

    Naoki Miyashita

    2014-07-01

    Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.

  11. A Correctness Proof of the Bakery Protocol in μCRL

    NARCIS (Netherlands)

    Groote, J.F.; Korver, H.

    1992-01-01

    A specification of the bakery protocol is given in μCRL. We provide a simple correctness criterion for the protocol. Then the protocol is proven correct using a proof system that has been developed for μCRL. The proof primarily consists of algebraic manipulations based on specifications of

  12. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  13. Autonomous Quantum Error Correction with Application to Quantum Metrology

    Science.gov (United States)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  14. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  15. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  16. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  17. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  18. Error correcting circuit design with carbon nanotube field effect transistors

    Science.gov (United States)

    Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong

    2018-03-01

    In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field-effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and traditional MOSFETs, respectively, is also compared, and the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
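
    The encode/correct logic realized by such a circuit is standard (7, 4) Hamming syndrome decoding, sketched below as a software model in Python; the matrices are the usual systematic-form generator and parity-check matrices, not taken from the paper's CNTFET implementation.

    ```python
    import numpy as np

    # Systematic-form generator and parity-check matrices of the (7,4) Hamming code
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def encode(bits4):
        return (np.array(bits4) @ G) % 2

    def correct(word7):
        word7 = np.array(word7)
        syndrome = (H @ word7) % 2
        if syndrome.any():
            # The syndrome equals the column of H at the error position
            err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            word7[err] ^= 1
        return word7[:4]            # data bits of the corrected codeword

    cw = encode([1, 0, 1, 1])
    cw[5] ^= 1                      # flip one bit on the "channel"
    assert list(correct(cw)) == [1, 0, 1, 1]
    ```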

  19. Error-correcting pairs for a public-key cryptosystem

    International Nuclear Information System (INIS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-01-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS codes, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is not only based on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair. (paper)

  20. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  1. Quantum Error Correction and Fault Tolerant Quantum Computing

    CERN Document Server

    Gaitan, Frank

    2008-01-01

    It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment, yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impo

  2. Opportunistic error correction for mimo-ofdm: from theory to practice

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    Opportunistic error correction based on fountain codes is especially designed for the MIMO-OFDM system. The key point of this new method is the tradeoff between the code rate of error correcting codes and the number of sub-carriers in the channel vector to be discarded. By transmitting one

  3. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    Science.gov (United States)

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and "the nature of language learning" (p. 328, emphasis…

  4. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  5. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.

  6. Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring

    Energy Technology Data Exchange (ETDEWEB)

    Bunch, S.C.; Holmes, J.

    2004-01-01

    We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10^-4 in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
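
    The numerical correction step described above, choosing corrector kicks that minimize the sum of squared BPM signals, amounts to a least-squares solve against the orbit response matrix. The Python sketch below uses a randomly generated response matrix and noise levels purely for illustration; it is not the ORBIT code or the SNS lattice.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical orbit response matrix: BPM reading change per unit corrector kick
    n_bpm, n_corr = 44, 24
    R = rng.normal(0.0, 1.0, (n_bpm, n_corr))

    # Orbit distortion produced by some unknown machine errors, plus BPM noise
    orbit = R @ rng.normal(0.0, 0.1, n_corr) + rng.normal(0.0, 0.01, n_bpm)

    # Least-squares corrector kicks that minimise the sum of squared BPM signals
    kicks, *_ = np.linalg.lstsq(R, -orbit, rcond=None)
    residual = orbit + R @ kicks

    print("rms orbit before correction:", np.std(orbit))
    print("rms orbit after  correction:", np.std(residual))
    ```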

  7. Black Holes, Holography, and Quantum Error Correction

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions?  How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator?  Why do such things happen only in gravitational theories?  In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence.  No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.  

  8. ecco: An error correcting comparator theory.

    Science.gov (United States)

    Ghirlanda, Stefano

    2018-03-08

    Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
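
    The CS-US learning rule that ecco adopts, total error correction in the sense of Rescorla and Wagner (1972), is easy to state in code. The Python sketch below reproduces the textbook blocking effect with made-up parameters; it is not ecco itself, which adds contingency-based CS-CS learning and the comparator process on top of this rule.

    ```python
    def rescorla_wagner(trials, alpha=0.3, lam=1.0):
        """Total-error-correcting learning of CS-US associations (Rescorla & Wagner, 1972).
        `trials` is a list of (set_of_present_CSs, US_present) pairs."""
        V = {}
        for cues, us in trials:
            total = sum(V.get(c, 0.0) for c in cues)
            delta = alpha * ((lam if us else 0.0) - total)   # error shared by all present cues
            for c in cues:
                V[c] = V.get(c, 0.0) + delta
        return V

    # Blocking: A is trained first, then A and X are reinforced together;
    # X acquires little strength because A already predicts the US.
    trials = [({"A"}, True)] * 20 + [({"A", "X"}, True)] * 20
    print(rescorla_wagner(trials))        # V(A) ~ 1.0, V(X) ~ 0
    ```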

  9. Fault-tolerant quantum computing in the Pauli or Clifford frame with slow error diagnostics

    Directory of Open Access Journals (Sweden)

    Christopher Chamberland

    2018-01-01

    Full Text Available We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, either caused by measurement latencies or slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions, for instance it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest as it can increase error thresholds and could find applications in other areas of quantum computation.

  10. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment

  11. Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    Directory of Open Access Journals (Sweden)

    Patrick SAINT-DIZIER

    2015-12-01

    Full Text Available In this paper, we investigate the notion of error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity in the detection and the correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.

  12. Spatially coupled low-density parity-check error correction for holographic data storage

    Science.gov (United States)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and error rates above 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.

  13. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device generating particles that exactly follow the flow, measurements are still subject to a major source of error: a particle counting bias, wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
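
    The bias and its classical inverse-velocity weighting correction (in the spirit of McLaughlin and Tiederman) can be demonstrated with a short Python simulation; the flow statistics and the sampling model below are invented for the example and are much simpler than the simulator described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical turbulent velocity samples the "true" flow would produce
    u_flow = rng.normal(10.0, 3.0, 200_000)

    # Counting bias: the chance of a particle crossing the probe volume grows with |u|,
    # so fast particles are over-represented among the measured samples
    keep = rng.random(u_flow.size) < np.abs(u_flow) / np.abs(u_flow).max()
    u_measured = u_flow[keep]

    # Correction: weight each measurement by 1/|u| (inverse-velocity weighting)
    w = 1.0 / np.abs(u_measured)
    u_corrected = np.sum(w * u_measured) / np.sum(w)

    print("true mean      :", u_flow.mean())
    print("biased mean    :", u_measured.mean())   # noticeably high
    print("corrected mean :", u_corrected)          # close to the true mean
    ```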

  14. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients due to the smallness of the random set-up errors. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments.
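
    The effect of such an off-line protocol can be sketched in Python: estimate each patient's systematic set-up error from the first imaged fractions and subtract it for the remaining fractions. The simulation below uses the 1.5-2 mm error magnitudes quoted above but a simple no-action-level rule with two measurements rather than SAL, so it is illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def offline_correction(daily_errors_mm, n_measure=2):
        """No-action-level style off-line protocol: average the set-up error over the first
        few imaged fractions and apply the opposite shift for the remaining fractions."""
        systematic_estimate = np.mean(daily_errors_mm[:n_measure])
        corrected = daily_errors_mm.copy()
        corrected[n_measure:] -= systematic_estimate
        return corrected

    # Hypothetical population: per-patient systematic error (SD 2 mm)
    # plus day-to-day random error (SD 1.5 mm)
    n_patients, n_fractions = 200, 30
    systematic = rng.normal(0.0, 2.0, (n_patients, 1))
    errors = systematic + rng.normal(0.0, 1.5, (n_patients, n_fractions))

    corrected = np.array([offline_correction(e) for e in errors])

    # Width of the systematic-error distribution shrinks from roughly 2 mm to about 1 mm
    print("SD of systematic errors before:", np.std(errors.mean(axis=1)))
    print("SD of systematic errors after :", np.std(corrected.mean(axis=1)))
    ```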

  15. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on a single integrated-circuit chip. Handles data in a variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce the cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.

  16. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on measurements of the error functions. The error map results in an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.

  17. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  18. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  19. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
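
    For readers unfamiliar with the linear building block behind these models, the sketch below (Python) estimates a basic two-step, Engle-Granger style error correction model for two simulated series using ordinary least squares. The nonlinear, QML-based framework analysed in the two records above is considerably more general; this is only a minimal illustration on made-up data.

        import numpy as np

        rng = np.random.default_rng(0)
        T = 500

        # Simulate two cointegrated series: x is a random walk, y tracks x plus noise.
        x = np.cumsum(rng.normal(size=T))
        y = 1.0 + 0.8 * x + rng.normal(scale=0.5, size=T)

        # Step 1: estimate the long-run (cointegrating) relation y = a + b*x by OLS.
        a, b = np.linalg.lstsq(np.column_stack([np.ones(T), x]), y, rcond=None)[0]
        ect = y - (a + b * x)                      # error correction term (residual)

        # Step 2: regress dy_t on a constant, ect_{t-1} and dx_{t-1}.
        dy, dx = np.diff(y), np.diff(x)
        X = np.column_stack([np.ones(T - 2), ect[1:-1], dx[:-1]])
        const, alpha, gamma = np.linalg.lstsq(X, dy[1:], rcond=None)[0]

        print("long-run slope b   ≈", round(float(b), 3))      # close to 0.8
        print("adjustment speed α ≈", round(float(alpha), 3))  # negative: errors are corrected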

  20. Environment-assisted error correction of single-qubit phase damping

    International Nuclear Information System (INIS)

    Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.

    2011-01-01

    Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.

  1. Improving transcriptome assembly through error correction of high-throughput sequence reads

    Directory of Open Access Journals (Sweden)

    Matthew D. MacManes

    2013-07-01

    Full Text Available The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on it. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show, using simulated and empirical datasets, that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile-corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.

  2. Gold price effect on stock market: A Markov switching vector error correction approach

    Science.gov (United States)

    Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok

    2014-06-01

    Gold is a popular precious metal whose demand is driven not only by practical use but also by its role as an investment commodity, while a stock market reflects a country's growth; the effect of the gold price on stock market behaviour is therefore the interest of this study. Markov Switching Vector Error Correction Models are applied to analyse the relationship between the gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. Because there are numerous specifications of Markov Switching Vector Error Correction Models, this paper compares the intercept adjusted Markov Switching Vector Error Correction Model and the intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine the best representation for capturing the transitions of the time series. The results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model provides more significant and reliable results than the intercept adjusted Markov Switching Vector Error Correction Model.

  3. Detection and correction of prescription errors by an emergency department pharmacy service.

    Science.gov (United States)

    Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald

    2014-05-01

    Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.

  4. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    Directory of Open Access Journals (Sweden)

    Maria Corazon Saturnina A Castro

    2017-10-01

    Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in the writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: How do teachers’ perspectives on corrective feedback match the students’ views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These opposing views must be addressed, as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even at the tertiary level, and further understanding the learning process.

  5. Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them

    Science.gov (United States)

    Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.

    2011-01-01

    Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…

  6. Entanglement and Quantum Error Correction with Superconducting Qubits

    Science.gov (United States)

    Reed, Matthew

    2015-03-01

    Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid-state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
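
    The three-qubit code mentioned above can be caricatured classically as a repetition code: the pairwise parities of the three physical bits locate a single flip, which is then undone (the experiment does this coherently on entangled transmons with a Toffoli gate). The sketch below (Python) simulates only this classical bit-flip channel and majority-style correction, as a hedged illustration of why the logical error rate drops from roughly ε to roughly 3ε²; none of it reflects the actual superconducting hardware.

        import numpy as np

        rng = np.random.default_rng(1)

        def logical_error_rate(p_flip, n_trials=200_000):
            """Monte-Carlo estimate of the logical error rate of a 3-bit repetition
            code with syndrome-based (majority-vote) correction.

            Each physical bit flips independently with probability p_flip; the decoder
            flips back the bit singled out by the two parity checks, so only weight-2
            and weight-3 errors survive as logical errors.
            """
            flips = rng.random((n_trials, 3)) < p_flip
            logical_errors = flips.sum(axis=1) >= 2   # majority decoding fails here
            return logical_errors.mean()

        for p in (0.05, 0.02, 0.01):
            print(f"physical ε = {p:.2f} -> logical rate ≈ {logical_error_rate(p):.4f}"
                  f"   (3ε² = {3 * p ** 2:.4f})")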

  7. Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness

    International Nuclear Information System (INIS)

    Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.

    2011-01-01

    Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of the internal (I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.

  8. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Directory of Open Access Journals (Sweden)

    Huiliang Cao

    2016-01-01

    Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  9. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  10. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-01

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455

  11. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections.

    Science.gov (United States)

    Greer, P B; Dahl, K; Ebert, M A; Wratten, C; White, M; Denham, J W

    2008-10-01

    The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
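
    The margin recipe referred to above is most often quoted in its simplified form, margin ≈ 2.5Σ + 0.7σ, where Σ and σ are the standard deviations of the systematic and random set-up errors. The sketch below (Python) applies that arithmetic to the systematic SDs quoted in the abstract; the single 2 mm random SD is an assumed illustrative value, since the abstract does not list the random components per technique.

        def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
            """CTV-to-PTV margin from the commonly quoted simplified van Herk recipe,
            margin ≈ 2.5·Σ + 0.7·σ (Σ = systematic SD, σ = random SD)."""
            return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

        # Systematic SDs (mm) in the ML, SI and AP directions, from the abstract above;
        # the random SD below is an assumed value for illustration only.
        systematic = {
            "skin marks":      (2.2, 3.6, 4.5),
            "off-line bone":   (1.6, 2.5, 4.4),
            "online fiducial": (0.5, 1.4, 1.4),
        }
        assumed_random_mm = 2.0
        for setup, sigmas in systematic.items():
            margins = [round(van_herk_margin(s, assumed_random_mm), 1) for s in sigmas]
            print(f"{setup:>15}: margins {margins} mm")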

  12. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    Science.gov (United States)

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  13. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and the number of uncorrectable errors were calculated for each data set before testing.

  14. Atmospheric Error Correction of the Laser Beam Ranging

    Directory of Open Access Journals (Sweden)

    J. Saydi

    2014-01-01

    Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase the laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated by using the principles of laser ranging. The atmospheric correction was calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly mean meteorological data received from the meteorological stations in these cities. The atmospheric correction was calculated for 11, 100, and 200 kilometers of laser beam propagation under 30°, 60°, and 90° rising angles for each propagation. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased as the laser emission angle increased. The atmospheric corrections obtained with the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.

  15. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    Science.gov (United States)

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  16. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  17. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  19. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  20. Experimental quantum error correction with high fidelity

    International Nuclear Information System (INIS)

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-01-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ∼ε². In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.

  1. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  2. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
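
    The extrapolation step described above, going from the failure fraction measured at each error weight to a block error rate at any small physical error probability, amounts to a binomially weighted sum. A minimal sketch of that bookkeeping is given below (Python); the block length and per-weight failure fractions are made-up placeholders, not results from the record.

        from math import comb

        def block_error_rate(p, n, fail_fraction_by_weight):
            """Extrapolate the block error rate at physical error probability p from
            per-weight failure fractions f_w measured in simulation:
                BER(p) = sum_w C(n, w) * p**w * (1 - p)**(n - w) * f_w
            Weights absent from the table are pessimistically assumed to always fail.
            """
            ber = 0.0
            for w in range(n + 1):
                f_w = fail_fraction_by_weight.get(w, 1.0)
                ber += comb(n, w) * p**w * (1 - p) ** (n - w) * f_w
            return ber

        # Hypothetical failure fractions for a length-64 block (placeholder numbers).
        fails = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.02, 4: 0.15, 5: 0.45, 6: 0.80}
        for p in (1e-2, 1e-3, 1e-4):
            print(f"p = {p:g}: extrapolated BER ≈ {block_error_rate(p, 64, fails):.3e}")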

  3. Online versus offline corrections: opposition or evolution? A comparison of two electronic portal imaging approaches for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Middleton, Mark; Medwell, Steve; Wong, Jacky; Lynton-Moll, Mary; Rolfo, Aldo; See, Andrew; Joon, Michael Lim

    2006-01-01

    Given the onset of dose escalation and increased planning target volume (PTV) conformity, the requirement for accurate field placement has also increased. This study compares and contrasts a combination offline/online electronic portal imaging (EPI) device correction with a complete online correction protocol and assesses their relative effectiveness in managing set-up error. Field placement data was collected on patients receiving radical radiotherapy to the prostate. Ten patients were on an initial combination offline/online correction protocol, followed by another 10 patients on a complete online correction protocol. Analysis of 1480 portal images from 20 patients was carried out, illustrating that a combination offline/online approach can be very effective in dealing with the systematic component of set-up error, but it is only when a complete online correction protocol is employed that both systematic and random set-up errors can be managed. Now, EPI protocols have evolved considerably and online corrections are a highly effective tool in the quest for more accurate field placement. This study discusses the clinical workload impact issues that need to be addressed in order for an online correction protocol to be employed, and addresses many of the practical issues that need to be resolved. Management of set-up error is paramount when seeking to dose escalate and only an online correction protocol can manage both components of set-up error. Both systematic and random errors are important and can be effectively and efficiently managed.

  4. Set-up improvement in head and neck radiotherapy using a 3D off-line EPID-based correction protocol and a customised head and neck support

    International Nuclear Information System (INIS)

    Lin, Emile N.J. Th. van; Vight, Lisette van der; Huizenga, Henk; Kaanders, Johannes H.A.M.; Visser, Andries G.

    2003-01-01

    Purpose: First, to investigate the set-up improvement resulting from the introduction of a customised head and neck (HN) support system in combination with a technologist-driven off-line correction protocol in HN radiotherapy. Second, to define margins for planning target volume definition, accounting for systematic and random set-up uncertainties. Methods and materials: In 63 patients 498 treatment fractions were evaluated to develop and implement a 3D shrinking action level correction protocol. In the comparative study two different HN-supports were compared: a flexible 'standard HN-support' and a 'customised HN-support'. For all three directions (x, y and z) random and systematic set-up deviations (1 S.D.) were measured. Results: The customised HN-support improves the patient positioning compared to the standard HN-support. The 1D systematic errors in the x, y and z directions were reduced from 2.2-2.3 mm to 1.2-2.0 mm (1 S.D.). The 1D random errors for the y and z directions were reduced from 1.6 and 1.6 mm to 1.1 and 1.0 mm (1 S.D.). The correction protocol reduced the 1D systematic errors further to 0.8-1.1 mm (1 S.D.) and all deviations in any direction were within 5 mm. Treatment time per measured fraction was increased from 10 to 13 min. The total time required per patient, for the complete correction procedure, was approximately 40 min. Conclusions: Portal imaging is a powerful tool in the evaluation of the department specific patient positioning procedures. The introduction of a comfortable customised HN-support, in combination with an electronic portal imaging device-based correction protocol, executed by technologists, led to an improvement of overall patient set-up. As a result, application of proposed recipes for CTV-PTV margins indicates that these can be reduced to 3-4 mm

  5. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  6. FMLRC: Hybrid long read error correction using an FM-index.

    Science.gov (United States)

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency than existing methods will help better economically utilize emerging long read sequencing technologies.

  7. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of E-learning and using online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correcting their mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data is collected using a pre-questionnaire and a post-interview aiming to find out the participants’ change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students have shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors and become more confident. The study also suggests that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.

  8. Bias correction of bounded location errors in presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.

    2017-01-01

    Location error occurs when the true location is different from the reported location. Because habitat characteristics at the true location may be different from those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.

  9. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
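
    The check behind that claim, namely whether a binary-mapped stretch of sequence is a codeword of a Hamming code, reduces to verifying that its syndrome (the parity-check matrix applied to the word, mod 2) is zero. The sketch below (Python) does this for the standard (7,4) Hamming code with one arbitrary two-bits-per-nucleotide labelling; both the labelling and the example sequence are assumptions for illustration and do not reproduce the authors' construction.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary).
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]])

        # Hypothetical 2-bit labelling of nucleotides (one of many possible choices).
        NT_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

        def syndrome(word_bits):
            """Return the Hamming (7,4) syndrome of a length-7 binary word."""
            return tuple(H @ np.array(word_bits) % 2)

        def codeword_windows(dna):
            """Slide a 7-bit window over the binary image of a DNA string and report
            the start positions whose window is a (7,4) codeword (zero syndrome)."""
            bits = [b for nt in dna for b in NT_BITS[nt]]
            return [i for i in range(len(bits) - 6)
                    if syndrome(bits[i:i + 7]) == (0, 0, 0)]

        print(codeword_windows("ACGTTGACA"))   # positions of zero-syndrome windows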

  10. Biometrics encryption combining palmprint with two-layer error correction codes

    Science.gov (United States)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint with two-layer error correction codes is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation, and the result is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
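
    The binding step described above is close in spirit to a fuzzy-commitment scheme: the ECC-encoded key is XORed with the biometric feature bits, and at release time the stored value is XORed with a fresh, slightly different sample and decoded, so the code's error-correcting capacity absorbs the biometric noise. The sketch below (Python) uses a plain repetition code as a stand-in for the paper's convolutional-plus-cyclic two-layer code; all names, sizes and error rates are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        REP = 5  # repetition factor (stand-in for the two-layer ECC of the paper)

        def encode(key_bits):
            """Repetition-encode each key bit REP times."""
            return np.repeat(key_bits, REP)

        def decode(code_bits):
            """Majority-vote decode; corrects fewer than REP/2 flips per key bit."""
            return (code_bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(int)

        # Enrolment: bind a random key to the (hypothetical) palmprint feature bits.
        key = rng.integers(0, 2, size=32)
        palmprint_enrol = rng.integers(0, 2, size=32 * REP)
        stored = encode(key) ^ palmprint_enrol          # value kept on the smart card

        # Release: a fresh palmprint sample differs from enrolment in a few bits.
        noise = (rng.random(32 * REP) < 0.03).astype(int)
        palmprint_query = palmprint_enrol ^ noise
        recovered = decode(stored ^ palmprint_query)

        print("key recovered exactly:", np.array_equal(recovered, key))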

  11. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four- versus six-landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that were corrected for operator error in landmark location by a numerical optimization algorithm using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
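
    One standard way to correct operator error in landmark placement, broadly in the spirit of the optimization described above though not the authors' exact multiobjective formulation, is to find the rigid rotation and translation that best map the measured landmarks onto their counterparts in the reference image (a least-squares, Kabsch-type fit) and then inspect the residuals. A minimal numpy sketch with made-up coordinates follows.

        import numpy as np

        def rigid_fit(P, Q):
            """Least-squares rigid transform (Kabsch): returns R, t with R @ p + t ≈ q."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = Q.mean(axis=0) - R @ P.mean(axis=0)
            return R, t

        # Hypothetical landmark coordinates (mm) in two scans of the same patient.
        ref = np.array([[0.0, 0.0, 0.0], [30.0, 2.0, 1.0], [15.0, 40.0, 3.0],
                        [10.0, 20.0, 35.0], [25.0, 30.0, 20.0], [5.0, 35.0, 25.0]])
        theta = np.deg2rad(6.0)
        Rtrue = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                          [np.sin(theta),  np.cos(theta), 0.0],
                          [0.0, 0.0, 1.0]])
        noise = np.random.default_rng(3).normal(scale=0.8, size=ref.shape)  # operator error
        moved = ref @ Rtrue.T + np.array([2.0, -1.0, 0.5]) + noise

        R, t = rigid_fit(moved, ref)
        residuals = np.linalg.norm((moved @ R.T + t) - ref, axis=1)
        print("per-landmark residual error (mm):", np.round(residuals, 2))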

  12. Neural Network Based Real-time Correction of Transducer Dynamic Errors

    Science.gov (United States)

    Roj, J.

    2013-12-01

    In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, with emphasis on its fundamental advantages and disadvantages.
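
    When the transducer's dynamic parameters are known, solving its differential equation for the input, which is what the state-variable formulation above amounts to, can also be done directly; the neural approach in the record is attractive precisely because it avoids that requirement. For contrast, the sketch below (Python) reconstructs the input of a known second-order transducer from its sampled output using finite-difference derivative estimates; all parameter values are illustrative.

        import numpy as np

        # Illustrative second-order transducer: (1/w0**2)*y'' + (2*zeta/w0)*y' + y = u(t)
        w0, zeta, dt, n = 40.0, 0.4, 1e-3, 2000

        # Simulate the transducer response y(t) to a unit step input u(t) = 1.
        u = np.ones(n)
        y = np.zeros(n)
        v = 0.0                                   # running estimate of y'
        for k in range(n - 1):
            a = w0**2 * (u[k] - y[k]) - 2 * zeta * w0 * v   # y'' from the ODE
            v += a * dt
            y[k + 1] = y[k] + v * dt

        # Dynamic error correction: recover u from the sampled output y by inverting
        # the ODE with finite-difference derivatives (the known-parameters counterpart
        # of the neural correction described in the record).
        dy = np.gradient(y, dt)
        d2y = np.gradient(dy, dt)
        u_reconstructed = y + (2 * zeta / w0) * dy + d2y / w0**2

        print("max reconstruction error away from the edges:",
              float(np.max(np.abs(u_reconstructed[100:-100] - u[100:-100]))))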

  13. Tension-Induced Error Correction and Not Kinetochore Attachment Status Activates the SAC in an Aurora-B/C-Dependent Manner in Oocytes.

    Science.gov (United States)

    Vallot, Antoine; Leontiou, Ioanna; Cladière, Damien; El Yakoubi, Warif; Bolte, Susanne; Buffin, Eulalie; Wassmann, Katja

    2018-01-08

    Cell division with partitioning of the genetic material should take place only when paired chromosomes named bivalents (meiosis I) or sister chromatids (mitosis and meiosis II) are correctly attached to the bipolar spindle in a tension-generating manner. For this to happen, the spindle assembly checkpoint (SAC) checks whether unattached kinetochores are present, in which case anaphase onset is delayed to permit further establishment of attachments. Additionally, microtubules are stabilized when they are attached and under tension. In mitosis, attachments not under tension activate the so-called error correction pathway depending on Aurora B kinase substrate phosphorylation. This leads to microtubule detachments, which in turn activates the SAC [1-3]. Meiotic divisions in mammalian oocytes are highly error prone, with severe consequences for fertility and health of the offspring [4, 5]. Correct attachment of chromosomes in meiosis I leads to the generation of stretched bivalents, but, unlike mitosis, not to tension between sister kinetochores, which co-orient. Here, we set out to address whether reduction of tension applied by the spindle on bioriented bivalents activates error correction and, as a consequence, the SAC. Treatment of oocytes in late prometaphase I with Eg5 kinesin inhibitor affects spindle tension, but not attachments, as we show here using an optimized protocol for confocal imaging. After Eg5 inhibition, bivalents are correctly aligned but less stretched, and as a result, Aurora-B/C-dependent error correction with microtubule detachment takes place. This loss of attachments leads to SAC activation. Crucially, SAC activation itself does not require Aurora B/C kinase activity in oocytes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Quantum Information Processing and Quantum Error Correction An Engineering Approach

    CERN Document Server

    Djordjevic, Ivan

    2012-01-01

    Quantum Information Processing and Quantum Error Correction is a self-contained, tutorial-based introduction to quantum information, quantum computation, and quantum error-correction. Assuming no knowledge of quantum mechanics and written at an intuitive level suitable for the engineer, the book gives all the essential principles needed to design and implement quantum electronic and photonic circuits. Numerous examples from a wide area of application are given to show how the principles can be implemented in practice. This book is ideal for the electronics, photonics and computer engineer

  15. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  16. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  17. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High-performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution, and may therefore intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitude but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that more than 95% of the soft errors can be corrected at a moderate increase in computational cost.
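    A rough illustration of the kind of low-overhead detection-and-correction the abstract describes: an iterative solver can validate cheap invariants of its working matrices each iteration and roll back to the last known-good copy when a check fails. The check thresholds, the toy update step and all names below are illustrative assumptions, not the paper's mechanisms.

    ```python
    import numpy as np

    def check_matrix(m, bound=1e3, tol=1e-8):
        """Cheap soft-error checks: finite entries, plausible magnitude, symmetry."""
        if not np.all(np.isfinite(m)):
            return False
        if np.max(np.abs(m)) > bound:        # a flipped exponent bit usually shows up here
            return False
        if np.max(np.abs(m - m.T)) > tol:    # Fock-like matrices should stay symmetric
            return False
        return True

    def iterate_with_detection(step, m0, n_iter=50):
        """Run an iterative update, keeping a known-good copy to roll back to."""
        good = m0.copy()
        m = m0.copy()
        for _ in range(n_iter):
            m = step(m)
            if check_matrix(m):
                good = m.copy()    # accept and checkpoint
            else:
                m = good.copy()    # correct: discard corrupted iterate, redo from checkpoint
        return m

    # Toy usage: a contraction map with an occasionally injected "bit flip".
    rng = np.random.default_rng(0)
    def step(m):
        out = 0.5 * (m + m.T) * 0.9
        if rng.random() < 0.1:               # simulate a silent data corruption
            i, j = rng.integers(0, m.shape[0], 2)
            out[i, j] += 1e6
        return out

    print(iterate_with_detection(step, np.eye(4)).round(6))
    ```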

  18. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  19. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections

    International Nuclear Information System (INIS)

    Greer, P. B.; Dahl, K.; Ebert, M. A.; Wratten, C.; White, M.; Denham, K. W.

    2008-01-01

    Full text: The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts, and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial-based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected, or systematic errors can increase, for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
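    The "margin recipe" mentioned above is commonly the van Herk formula, margin ≈ 2.5Σ + 0.7σ, with Σ the systematic and σ the random set-up error (1 SD). A small illustrative calculation follows; the random-error values are placeholders, since the abstract quotes only systematic errors, and are not the paper's data.

    ```python
    def van_herk_margin(systematic_sd, random_sd):
        """CTV-to-PTV margin (mm) from the van Herk recipe: 2.5*Sigma + 0.7*sigma."""
        return 2.5 * systematic_sd + 0.7 * random_sd

    # Systematic errors (ML, SI, AP, in mm) quoted in the abstract; random errors are
    # hypothetical placeholders for illustration only.
    setups = {
        "skin marks":      ([2.2, 3.6, 4.5], [2.0, 2.0, 2.0]),
        "off-line bone":   ([1.6, 2.5, 4.4], [2.0, 2.0, 2.0]),
        "online fiducial": ([0.5, 1.4, 1.4], [1.0, 1.0, 1.0]),
    }
    for name, (sys_sd, rnd_sd) in setups.items():
        margins = [round(van_herk_margin(s, r), 1) for s, r in zip(sys_sd, rnd_sd)]
        print(f"{name:>16}: margins {margins} mm")
    ```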

  20. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  1. [Incidence of refractive errors with corrective aids subsequent selection].

    Science.gov (United States)

    Benes, P; Synek, S; Petrová, S; Sokolová, Sidlová J; Forýtková, L; Holoubková, Z

    2012-02-01

    This study follows the occurrence of refractive errors in the population and the possible selection of the appropriate type of corrective aids. Objective measurement and subsequent determination of the subjective refraction of the eye is an essential act in optometric practice. The sample of 615 patients (1230 eyes) is divided according to the refractive error into myopia and hyperopia, with emmetropic clients listed as a control group. The results of objective and subjective values of refraction are compared and statistically processed. The study included 615 respondents. To determine the objective refraction, an autorefractokeratometer with Placido disc was used and the values of the spherical and astigmatic correction components, including the axis, were recorded. These measurements were subsequently verified and tested subjectively using trial lenses and a projection optotype at the normal investigative distance of 5 meters. The appropriate corrective aids were then recommended. Group I consists of 123 men and 195 women with myopia (n = 635), with an average age of 39 +/- 18.9 years. Objective refraction - sphere: -2.57 +/- 2.46 D, cylinder: -1.1 +/- 1.01 D, axis: 100 +/- 53.16 degrees. Subjective results are as follows - sphere: -2.28 +/- 2.33 D, cylinder: -0.63 +/- 0.80 D, axis: 99.8 +/- 56.64 degrees. Group II is represented by hyperopic clients and consists of 67 men and 107 women (n = 348). The average age is 58.84 +/- 16.73 years. Objective refraction - sphere: +2.81 +/- 2.21 D, cylinder: -1.0 +/- 0.94 D, axis: 95 +/- 45.4 degrees. Subsequent determination of subjective refraction has the following results - sphere: +2.28 +/- 2.06 D, cylinder: -0.49 +/- 0.85 D, axis: 95.9 +/- 46.4 degrees. Group III consists of emmetropes whose final minimum visual acuity was Vmin = 1.0 (5/5) or better. Overall, this control group is represented by 52 males and 71 females (n = 247). The average

  2. NxRepair: error correction in de novo sequence assembly using Nextera mate pairs

    Directory of Open Access Journals (Sweden)

    Rebecca R. Murphy

    2015-06-01

    Full Text Available Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.

  3. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    Science.gov (United States)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  4. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    Science.gov (United States)

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  5. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  6. Energy Efficient Error-Correcting Coding for Wireless Systems

    NARCIS (Netherlands)

    Shao, X.

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal suffers not only multi-path fading but also noise and interference from other users of the wireless channel, which makes communication unreliable. To achieve high-quality communication, error-correcting coding is required

  7. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    Science.gov (United States)

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase
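    The exponential-normal convolution referred to here is, in the microarray background-correction literature, usually written as follows (a standard form given for orientation; the symbols α, μ and σ are my own notation, not the paper's): the observed value is x = s + b, with signal s ~ Exp(mean α) and background b ~ N(μ, σ²), and the corrected value is the posterior mean of the signal,

    ```latex
    \mu_{sx} = x - \mu - \frac{\sigma^{2}}{\alpha}, \qquad
    \mathbb{E}\!\left[s \mid x\right] = \mu_{sx} + \sigma \,
    \frac{\phi\!\left(\mu_{sx}/\sigma\right)}{\Phi\!\left(\mu_{sx}/\sigma\right)},
    ```

    where φ and Φ are the standard normal density and distribution functions; the authors adapt this idea from intensities to minor allele frequencies derived from high-throughput sequencing.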

  8. Tensor Networks and Quantum Error Correction

    Science.gov (United States)

    Ferris, Andrew J.; Poulin, David

    2014-07-01

    We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.

  9. Remote one-qubit information concentration and decoding of operator quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsu Liyi

    2007-01-01

    We propose a general scheme for remote one-qubit information concentration. To achieve the task, Bell-correlated mixed states are exploited. In addition, nonremote one-qubit information concentration is equivalent to the decoding of a quantum error-correction code. Here we propose how to decode the stabilizer codes. In particular, the proposed scheme can be used for operator quantum error-correction codes. The encoded state can be recreated on the errorless qubit, regardless of how many bit-flip errors and phase-flip errors have occurred

  10. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  11. CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD

    Directory of Open Access Journals (Sweden)

    BUSUIOCEANU STELIANA

    2013-08-01

    Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.

  12. Short-term wind power combined forecasting based on error forecast correction

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng

    2016-01-01

    Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
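    A minimal sketch of the error-forecast-correction idea described above, on synthetic data: a persistence forecast is corrected by a support vector regressor trained on its own past errors. The data, lag length and model settings are illustrative assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    t = np.arange(2000)
    power = 0.5 + 0.3 * np.sin(2 * np.pi * t / 144) + 0.05 * rng.standard_normal(t.size)

    # Base one-step-ahead forecast: persistence (forecast = last observed value).
    base_forecast = np.roll(power, 1)
    error = power - base_forecast                  # forecast error to be modelled

    # Error-forecast model: predict the next error from the last few errors.
    lags = 6
    X = np.column_stack([error[i:len(error) - lags + i] for i in range(lags)])
    y = error[lags:]
    split = 1500
    model = SVR(C=1.0, epsilon=0.01).fit(X[:split], y[:split])

    # Correct the base forecast on the held-out part by adding the predicted error.
    corrected = base_forecast[lags:][split:] + model.predict(X[split:])
    raw_rmse = np.sqrt(np.mean((power[lags:][split:] - base_forecast[lags:][split:]) ** 2))
    cor_rmse = np.sqrt(np.mean((power[lags:][split:] - corrected) ** 2))
    print(f"persistence RMSE {raw_rmse:.4f} -> corrected RMSE {cor_rmse:.4f}")
    ```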

  13. A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction

    Science.gov (United States)

    Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole

    2015-01-01

    Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…

  14. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting]

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256 Kbit DRAMs are organized as 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient, low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.

  15. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example when caused by a small base length, such an image orientation does not lead to the accuracy of height models that would otherwise be possible. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length enlarges small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can be seen as well. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
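    A minimal sketch of the leveling step described above: fit a plane (tilt plus offset) to the differences between the DHM and a reference surface such as the SRTM DSM, then subtract it. The function name and the synthetic data are illustrative only.

    ```python
    import numpy as np

    def level_dhm(dhm, reference):
        """Remove tilt/offset from a DHM by least-squares fitting a plane to (dhm - reference)."""
        rows, cols = np.indices(dhm.shape)
        diff = (dhm - reference).ravel()
        A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(diff.size)])
        coeff, *_ = np.linalg.lstsq(A, diff, rcond=None)   # [tilt_row, tilt_col, offset]
        trend = (A @ coeff).reshape(dhm.shape)
        return dhm - trend, coeff

    # Synthetic example: reference surface plus a small tilt and a 1.5 m offset.
    rng = np.random.default_rng(0)
    ref = rng.normal(500, 20, (200, 200))
    r, c = np.indices(ref.shape)
    dhm = ref + 0.002 * r - 0.001 * c + 1.5 + rng.normal(0, 0.3, ref.shape)

    leveled, coeff = level_dhm(dhm, ref)
    print("estimated tilt/offset:", np.round(coeff, 4))
    print("RMS before:", round(float(np.sqrt(np.mean((dhm - ref) ** 2))), 3),
          "after:", round(float(np.sqrt(np.mean((leveled - ref) ** 2))), 3))
    ```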

  16. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  17. Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.

    Science.gov (United States)

    Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A

    2017-05-01

    Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.

  18. How EFL students can use Google to correct their “untreatable” written errors

    Directory of Open Access Journals (Sweden)

    Luc Geiller

    2014-09-01

    Full Text Available This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several “untreatable” written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback leads to more grammatical accuracy. In her response to Truscott (1996), Ferris (1999) explains that it would be unreasonable to abolish correction given the present state of knowledge, and that further research needed to focus on which types of errors were more amenable to which types of error correction. In her attempt to respond more effectively to her students’ errors, she made the distinction between “treatable” and “untreatable” ones: the former occur in “a patterned, rule-governed way” and include problems with verb tense or form, subject-verb agreement, run-ons, noun endings, articles, pronouns, while the latter include a variety of lexical errors, problems with word order and sentence structure, including missing and unnecessary words. Substantial research on the use of search engines as a tool for L2 learners has been carried out, suggesting that the web plays an important role in fostering language awareness and learner autonomy (e.g. Shei 2008a, 2008b; Conroy 2010). According to Bathia and Richie (2009: 547), “the application of Google for language learning has just begun to be tapped.” Within the framework of this study it was assumed that the students, conversant with digital technologies and using Google and the web on a regular basis, could use various search options and the search results to self-correct their errors instead of relying on their teacher to provide direct feedback. After receiving some in-class training on how to formulate Google queries, the students were asked to use a customized Google search engine limiting searches to 28 information websites to correct up to

  19. Linear transceiver design for nonorthogonal amplify-and-forward protocol using a bit error rate criterion

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2014-04-01

    The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among the various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER) and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of all the other linear detectors, such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol. © 2002-2012 IEEE.

  20. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available Abstract. In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths, found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors

  1. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo... The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis...

  2. Neural network decoder for quantum error correcting codes

    Science.gov (United States)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
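    For orientation, the task such a decoder learns is a mapping from measured syndromes to corrections. The classical toy below decodes a 3-bit repetition code with an explicit lookup table; a neural decoder learns an analogous, much larger mapping for the surface code from repeated stabilizer measurements. The code is a generic illustration, not the authors' implementation.

    ```python
    import numpy as np

    # 3-bit repetition code: the "stabilizers" are the parities (b0 xor b1) and (b1 xor b2).
    SYNDROME_TO_CORRECTION = {
        (0, 0): np.array([0, 0, 0]),   # no error detected
        (1, 0): np.array([1, 0, 0]),   # flip on bit 0
        (1, 1): np.array([0, 1, 0]),   # flip on bit 1
        (0, 1): np.array([0, 0, 1]),   # flip on bit 2
    }

    def syndrome(bits):
        return (bits[0] ^ bits[1], bits[1] ^ bits[2])

    rng = np.random.default_rng(0)
    failures = 0
    for _ in range(10000):
        codeword = np.array([0, 0, 0])
        error = (rng.random(3) < 0.05).astype(int)   # independent bit flips, p = 0.05
        received = codeword ^ error
        corrected = received ^ SYNDROME_TO_CORRECTION[syndrome(received)]
        failures += int(np.any(corrected != codeword))
    print("logical failure rate:", failures / 10000)  # roughly 3*p^2, well below p
    ```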

  3. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  4. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  5. Upper bounds on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2004-01-01

    We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...
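    For reference, the block-code Hamming bound that these convolutional-code bounds parallel states that a binary code of length n with 2^k codewords can correct all patterns of at most t errors only if the correctable error patterns do not outnumber the distinct syndromes:

    ```latex
    \sum_{i=0}^{t} \binom{n}{i} \;\le\; 2^{\,n-k}.
    ```

    The convolutional analogue, as in record 3 above, replaces error patterns and syndromes by sequences of the relevant length.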

  6. Information-theoretic security proof for quantum-key-distribution protocols

    International Nuclear Information System (INIS)

    Renner, Renato; Gisin, Nicolas; Kraus, Barbara

    2005-01-01

    We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel

  7. Information-theoretic security proof for quantum-key-distribution protocols

    Science.gov (United States)

    Renner, Renato; Gisin, Nicolas; Kraus, Barbara

    2005-07-01

    We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.

  8. Evaluation of image-guidance protocols in the treatment of head and neck cancers

    International Nuclear Information System (INIS)

    Zeidan, Omar A.; Langen, Katja M.; Meeks, Sanford L.; Manon, Rafael R.; Wagner, Thomas H.; Willoughby, Twyla R.; Jenkins, D. Wayne; Kupelian, Patrick A.

    2007-01-01

    Purpose: The aim of this study was to assess the residual setup error of different image-guidance (IG) protocols in the alignment of patients with head and neck cancer. The protocols differ in the percentage of treatment fractions that are associated with image guidance. Using data from patients who were treated with daily IG, the residual setup errors for several different protocols are retrospectively calculated. Methods and Materials: Alignment data from 24 patients (802 fractions) treated with daily IG on a helical tomotherapy unit were analyzed. The difference between the daily setup correction and the setup correction that would have been made according to a specific protocol was used to calculate the residual setup errors for each protocol. Results: The different protocols are generally effective in reducing systematic setup errors. Random setup errors are generally not reduced for fractions that are not image guided. As a consequence, if every other treatment is image guided, still about 11% of all treatments (IG and not IG) are subject to three-dimensional setup errors of at least 5 mm. This frequency increases to about 29% if setup errors >3 mm are scored. For various protocols that require 15% to 31% of the treatments to be image guided, from 50% to 60% and from 26% to 31% of all fractions are subject to setup errors >3 mm and >5 mm, respectively. Conclusion: Residual setup errors reduce with increasing frequency of IG during the course of external-beam radiotherapy for head-and-neck cancer patients. The inability to reduce random setup errors for fractions that are not image guided results in notable residual setup errors

  9. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted by the computer automatically. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.

  10. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under...... the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new(uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...... asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests motivating the introduction of bootstrap...
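    For orientation, a standard linear vector error correction model, of which the models studied here are nonlinear generalizations, can be written in common textbook notation (not quoted from the paper) as

    ```latex
    \Delta y_t \;=\; \alpha \beta' y_{t-1} \;+\; \sum_{i=1}^{k-1} \Gamma_i \, \Delta y_{t-i} \;+\; \varepsilon_t ,
    ```

    where the columns of β span the cointegration relations and α contains the adjustment coefficients. The nonlinear class replaces the term αβ′y_{t−1} by α g(β′y_{t−1}) for some adjustment function g, and testing for linearity amounts to testing whether g reduces to the identity, which is where the nonstandard (sup-test) asymptotics arise.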

  11. Links between N-modular redundancy and the theory of error-correcting codes

    Science.gov (United States)

    Bobin, V.; Whitaker, S.; Maki, G.

    1992-01-01

    N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECC's.
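    The analogy can be made concrete: triple modular redundancy is the hardware counterpart of majority-vote decoding of a length-3 repetition code, and the disagreeing replicas play the role of the located errors. A generic sketch, not code from the paper:

    ```python
    from collections import Counter

    def nmr_vote(outputs):
        """Majority vote over replicated module outputs (TMR when len(outputs) == 3)."""
        value, _ = Counter(outputs).most_common(1)[0]
        # With N replicas we can identify up to floor((N-1)/2) failed modules,
        # just as a length-N repetition code corrects that many symbol errors.
        failed = [i for i, v in enumerate(outputs) if v != value]
        return value, failed

    print(nmr_vote([0b1011, 0b1011, 0b0011]))   # -> (11, [2]): module 2 flagged as failed
    ```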

  12. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias and is robust to how the original sampling was carried out and to whether or not the measurement error variance is constant, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.

  13. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but are subject to measurement error due to the low sequencing depth per individual. Due to technical reasons ... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  14. Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?

    Directory of Open Access Journals (Sweden)

    Magdenko A. Y.

    2014-07-01

    Full Text Available The article is devoted to the institute of cadastral error. Some questions and problems of cadastral error corrections are considered. The material is based on current legislation and judicial practice.

  15. High-speed parallel forward error correction for optical transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology....
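    For context, the standard OTN FEC referred to here is the Reed-Solomon RS(255,239) code over GF(2^8); its correction capability and overhead follow directly from the code parameters:

    ```latex
    t \;=\; \left\lfloor \frac{n-k}{2} \right\rfloor \;=\; \frac{255-239}{2} \;=\; 8 \ \text{byte errors per codeword},
    \qquad \text{overhead} \;=\; \frac{255-239}{239} \;\approx\; 6.7\%.
    ```

    Sustaining this at the roughly 100 Gb/s line rates targeted here is what motivates the heavily parallelized decoder architecture.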

  16. A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy

    International Nuclear Information System (INIS)

    Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock

    2005-01-01

    An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle

  17. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  18. A median filter approach for correcting errors in a vector field

    Science.gov (United States)

    Schultz, H.

    1985-01-01

    Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
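    A minimal sketch of the approach described above, assuming the field is given as u/v component grids: vectors that deviate strongly from their local median are flagged and replaced by it. Array names, window size and threshold are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def correct_vector_field(u, v, size=3, threshold=2.0):
        """Replace vectors deviating from the local median by more than `threshold`."""
        u_med = median_filter(u, size=size)
        v_med = median_filter(v, size=size)
        bad = np.hypot(u - u_med, v - v_med) > threshold
        u_corr, v_corr = u.copy(), v.copy()
        u_corr[bad], v_corr[bad] = u_med[bad], v_med[bad]
        return u_corr, v_corr, bad

    # Synthetic wind field with a few spurious vectors injected.
    rng = np.random.default_rng(2)
    u = 5.0 + 0.5 * rng.standard_normal((50, 50))
    v = -3.0 + 0.5 * rng.standard_normal((50, 50))
    idx = (rng.integers(0, 50, 20), rng.integers(0, 50, 20))
    u[idx] += 15.0                                   # inject outliers
    u_corr, v_corr, bad = correct_vector_field(u, v)
    print("flagged", int(bad.sum()), "suspect vectors")
    ```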

  19. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  20. Error analysis of motion correction method for laser scanning of moving objects

    Science.gov (United States)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need for scanning moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available, showing development of very few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of the motion correction methods are found to be lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of some tracking devices. It then uses this information along with laser scanner data to apply correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behavior and variation of the error due to different sensor components alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.

  1. Setup error in radiotherapy: on-line correction using electronic kilovoltage and megavoltage radiographs

    International Nuclear Information System (INIS)

    Pisani, Laura; Lockman, David; Jaffray, David; Yan Di; Martinez, Alvaro; Wong, John

    2000-01-01

    Purpose: We hypothesize that the difference in image quality between the traditional kilovoltage (kV) prescription radiographs and megavoltage (MV) treatment radiographs is a major factor hindering our ability to accurately measure, thus correct, setup error in radiation therapy. The objective of this work is to study the accuracy of on-line correction of setup errors achievable using either kV- or MV-localization (i.e., open-field) radiographs. Methods and Materials: Using a gantry mounted kV and MV dual-beam imaging system, the accuracy of on-line measurement and correction of setup error using electronic kV- and MV-localization images was examined based on anthropomorphic phantom and patient imaging studies. For the phantom study, the user's ability to accurately detect known translational shifts was analyzed. The clinical study included 14 patients with disease in the head and neck, thoracic, and pelvic regions. For each patient, 4 orthogonal kV radiographs acquired during treatment simulation from the right lateral, anterior-to-posterior, left lateral, and posterior-to-anterior directions were employed as reference prescription images. Two-dimensional (2D) anatomic templates were defined on each of the 4 reference images. On each treatment day, after positioning the patient for treatment, 4 orthogonal electronic localization images were acquired with both kV and 6-MV photon beams. On alternate weeks, setup errors were determined from either the kV- or MV-localization images but not both. Setup error was determined by aligning each 2D template with the anatomic information on the corresponding localization image, ignoring rotational and nonrigid variations. For each set of 4 orthogonal images, the results from template alignments were averaged. Based on the results from the phantom study and a parallel study of the inter- and intraobserver template alignment variability, a threshold for minimum correction was set at 2 mm in any direction. Setup correction was

  2. Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints

    Directory of Open Access Journals (Sweden)

    Fanqi Meng

    2017-12-01

    Full Text Available In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach with three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and map it onto the source code. The second step is to identify non-orthogonal nested loops in the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints and subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of the corrected WCETs are no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
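
    The overestimation described above arises when an inner loop's bound depends on the outer loop's index but the analysis treats the two bounds as independent (orthogonal). The following sketch illustrates the size of that gap for a hypothetical triangular loop; the loop shape and counts are illustrative and are not taken from the article.

    # Illustrative only: iteration count of a non-orthogonal (triangular) nested loop
    # versus the rectangular bound an orthogonality-assuming WCET analysis would use.
    def actual_iterations(n: int) -> int:
        """Inner bound depends on the outer index, so the loop nest is non-orthogonal."""
        count = 0
        for i in range(n):
            for _ in range(i + 1):
                count += 1
        return count  # equals n * (n + 1) / 2

    def orthogonal_bound(n: int) -> int:
        """Bound obtained if the inner loop is assumed independent of the outer loop."""
        return n * n  # worst inner bound (n) charged for every outer iteration

    n = 100
    actual, bound = actual_iterations(n), orthogonal_bound(n)
    print(f"actual={actual}, bound={bound}, overestimation={100 * (bound - actual) / bound:.1f}%")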

  3. A comparison of the use of bony anatomy and internal markers for offline verification and an evaluation of the potential benefit of online and offline verification protocols for prostate radiotherapy.

    Science.gov (United States)

    McNair, Helen A; Hansen, Vibeke N; Parker, Christopher C; Evans, Phil M; Norman, Andrew; Miles, Elizabeth; Harris, Emma J; Del-Acroix, Louise; Smith, Elizabeth; Keane, Richard; Khoo, Vincent S; Thompson, Alan C; Dearnaley, David P

    2008-05-01

    To evaluate the utility of intraprostatic markers in the treatment verification of prostate cancer radiotherapy. Specific aims were: to compare the effectiveness of offline correction protocols using either gold markers or bony anatomy; to estimate the potential benefit of online correction protocols using gold markers; to determine the presence and effect of intrafraction motion. Thirty patients with three gold markers inserted had pretreatment and posttreatment images acquired and were treated using an offline correction protocol and gold markers. Retrospectively, an offline protocol was applied using bony anatomy and an online protocol using gold markers. The systematic errors were reduced from 1.3, 1.9, and 2.5 mm to 1.1, 1.1, and 1.5 mm in the right-left (RL), superoinferior (SI), and anteroposterior (AP) directions, respectively, using the offline correction protocol and gold markers instead of bony anatomy. The subsequent decrease in margins was 1.7, 3.3, and 4 mm in the RL, SI, and AP directions, respectively. An offline correction protocol combined with an online correction protocol in the first four fractions reduced random errors further to 0.9, 1.1, and 1.0 mm in the RL, SI, and AP directions, respectively. A daily online protocol reduced all errors to <1 mm. An offline protocol using gold markers is effective in reducing the systematic error. The value of online protocols is reduced by intrafraction motion.

  4. Reduction of determinate errors in mass bias-corrected isotope ratios measured using a multi-collector plasma mass spectrometer

    International Nuclear Information System (INIS)

    Doherty, W.

    2015-01-01

    A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer

  5. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Full Text Available Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools under the assumption that the differences they produce (residuals or errors) are white noise. In practice, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of the CCM to a PBM shows a decrease in local biases, a better distribution of errors, and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology appears to be an interesting way to update a PBM while avoiding the work and cost of interfering with its internal structure.
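
    The abstract does not specify the internal form of the CCM, so the sketch below only illustrates the general idea of complementary correction: learn the structure of the PBM residuals from past observations and add the learned correction to the raw simulation. The lag-1 linear model, the synthetic data and all variable names are assumptions made for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def fit_complementary_model(observed, simulated):
        """Fit a lag-1 model to the PBM residuals (observed minus simulated heads)."""
        r = observed - simulated
        return LinearRegression().fit(r[:-1].reshape(-1, 1), r[1:])

    def corrected_simulation(ccm, observed, simulated):
        """One-step-ahead correction: predict today's residual from yesterday's."""
        r = observed - simulated
        return simulated[1:] + ccm.predict(r[:-1].reshape(-1, 1))

    # Synthetic example: a PBM whose output carries a slowly varying systematic bias.
    t = np.arange(200)
    observed = 10 + 0.5 * np.sin(t / 20) + 0.1 * np.random.randn(200)
    simulated = observed - (0.8 + 0.3 * np.sin(t / 20))          # biased PBM output

    ccm = fit_complementary_model(observed[:150], simulated[:150])
    corrected = corrected_simulation(ccm, observed[150:], simulated[150:])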

  6. Students' Preferences and Attitude toward Oral Error Correction Techniques at Yanbu University College, Saudi Arabia

    Science.gov (United States)

    Alamri, Bushra; Fawzi, Hala Hassan

    2016-01-01

    Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…

  7. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
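
    One simple way to obtain a corrected standard error of the kind compared above is to bootstrap the whole two-stage procedure, so that first-stage uncertainty is propagated into the second stage. The sketch below shows this for the linear TSRI case on simulated Mendelian-randomization-style data; it illustrates the bootstrap option only, not the Newey or Terza corrections, and all data and effect sizes are synthetic assumptions.

    import numpy as np

    def tsri_linear(y, x, z):
        """Linear TSRI: stage 1 regresses exposure on instrument; stage 2 adds the residual."""
        Z = np.column_stack([np.ones_like(z), z])
        v = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]      # first-stage residuals
        X2 = np.column_stack([np.ones_like(x), x, v])
        return np.linalg.lstsq(X2, y, rcond=None)[0][1]       # coefficient on exposure

    def bootstrap_se(y, x, z, n_boot=500, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)
        draws = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                       # resample individuals
            draws.append(tsri_linear(y[idx], x[idx], z[idx]))
        return np.std(draws, ddof=1)

    rng = np.random.default_rng(1)
    n = 2000
    z = rng.integers(0, 3, n).astype(float)                   # genotype (allele count)
    u = rng.standard_normal(n)                                # unmeasured confounder
    x = 0.4 * z + u + rng.standard_normal(n)                  # exposure
    y = 0.25 * x + u + rng.standard_normal(n)                 # outcome, true effect 0.25
    print("estimate:", tsri_linear(y, x, z), " bootstrap SE:", bootstrap_se(y, x, z))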

  8. Errors in imaging patients in the emergency setting.

    Science.gov (United States)

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.

  9. Image enhancement by spectral-error correction for dual-energy computed tomography.

    Science.gov (United States)

    Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin

    2011-01-01

    Dual-energy CT (DECT) was reintroduced recently to use the additional spectral information of X-ray attenuation and aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low and high energy images or measurements, so it is difficult to acquire accurate spectral information due to the amplification of high pixel noise in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher density material decreases more rapidly as X-ray energy increases. We define as spectral error the case when a pixel pair of low and high energy images deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method, which consists of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. The main idea of this work is to make the spectral errors distributed like random noise over the true attenuation so that they can be suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed noise of liver lesions and improved contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.

  10. Decrease in medical command errors with use of a "standing orders" protocol system.

    Science.gov (United States)

    Holliman, C J; Wuerz, R C; Meador, S A

    1994-05-01

    The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine whether the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi 2 analysis). The on-scene time interval did not increase with the "standing orders" system.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Neural network error correction for solving coupled ordinary differential equations

    Science.gov (United States)

    Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.

    1992-01-01

    A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
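
    As a rough illustration of the approach, the sketch below trains a small feed-forward network to predict the one-step error of a coarse integrator on a toy problem, then adds that prediction as a correction during integration. The harmonic oscillator, the explicit Euler integrator and the scikit-learn network stand in for the paper's MD model, Runge-Kutta scheme and NASA-developed network code, and are assumptions made only for this example.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def euler_step(state, dt):
        x, v = state
        return np.array([x + dt * v, v - dt * x])          # dx/dt = v, dv/dt = -x

    def exact_step(state, dt):
        x, v = state
        c, s = np.cos(dt), np.sin(dt)
        return np.array([c * x + s * v, -s * x + c * v])   # analytic propagator

    dt = 0.1
    rng = np.random.default_rng(0)
    states = rng.uniform(-1, 1, size=(2000, 2))
    errors = np.array([exact_step(s, dt) - euler_step(s, dt) for s in states])

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
    net.fit(states, errors)                                 # learn the per-step error

    s_plain, s_corr = np.array([1.0, 0.0]), np.array([1.0, 0.0])
    for _ in range(100):
        s_plain = euler_step(s_plain, dt)
        s_corr = euler_step(s_corr, dt) + net.predict(s_corr.reshape(1, -1))[0]

    s_ref = np.array([np.cos(10.0), -np.sin(10.0)])         # exact state at t = 100 * dt
    print("plain error    :", np.linalg.norm(s_plain - s_ref))
    print("corrected error:", np.linalg.norm(s_corr - s_ref))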

  12. Practical Use of the Extended No Action Level (eNAL) Correction Protocol for Breast Cancer Patients With Implanted Surgical Clips

    International Nuclear Information System (INIS)

    Penninkhof, Joan; Quint, Sandra; Baaijens, Margreet; Heijmen, Ben; Dirkx, Maarten

    2012-01-01

    Purpose: To describe the practical use of the extended No Action Level (eNAL) setup correction protocol for breast cancer patients with surgical clips and evaluate its impact on the setup accuracy of both tumor bed and whole breast during simultaneously integrated boost treatments. Methods and Materials: For 80 patients, two orthogonal planar kilovoltage images and one megavoltage image (for the mediolateral beam) were acquired per fraction throughout the radiotherapy course. For setup correction, the eNAL protocol was applied, based on registration of surgical clips in the lumpectomy cavity. Differences with respect to application of a No Action Level (NAL) protocol or no protocol were quantified for tumor bed and whole breast. The correlation between clip migration during the fractionated treatment and either the method of surgery or the time elapsed from last surgery was investigated. Results: The distance of the clips to their center of mass (COM), averaged over all clips and patients, was reduced by 0.9 ± 1.2 mm (mean ± 1 SD). Clip migration was similar between the group of patients starting treatment within 100 days after surgery (median, 53 days) and the group starting afterward (median, 163 days) (p = 0.20). Clip migration after conventional breast surgery (closing the breast superficially) or after lumpectomy with partial breast reconstructive techniques (sutured cavity) was not significantly different either (p = 0.22). Application of eNAL on clips resulted in residual systematic errors for the clips’ COM of less than 1 mm in each direction, whereas the setup of the breast was within about 2 mm of accuracy. Conclusions: Surgical clips can be safely used for high-accuracy position verification and correction. Given compensation for time trends in the clips’ COM throughout the treatment course, eNAL resulted in better setup accuracies for both tumor bed and whole breast than NAL.

  13. A Comparison of the Use of Bony Anatomy and Internal Markers for Offline Verification and an Evaluation of the Potential Benefit of Online and Offline Verification Protocols for Prostate Radiotherapy

    International Nuclear Information System (INIS)

    McNair, Helen A.; Hansen, Vibeke N.; Parker, Christopher; Evans, Phil M.; Norman, Andrew; Miles, Elizabeth; Harris, Emma J.; Del-Acroix, Louise; Smith, Elizabeth; Keane, Richard; Khoo, Vincent S.; Thompson, Alan C.; Dearnaley, David P.

    2008-01-01

    Purpose: To evaluate the utility of intraprostatic markers in the treatment verification of prostate cancer radiotherapy. Specific aims were: to compare the effectiveness of offline correction protocols, either using gold markers or bony anatomy; to estimate the potential benefit of online correction protocol's using gold markers; to determine the presence and effect of intrafraction motion. Methods and Materials: Thirty patients with three gold markers inserted had pretreatment and posttreatment images acquired and were treated using an offline correction protocol and gold markers. Retrospectively, an offline protocol was applied using bony anatomy and an online protocol using gold markers. Results: The systematic errors were reduced from 1.3, 1.9, and 2.5 mm to 1.1, 1.1, and 1.5 mm in the right-left (RL), superoinferior (SI), and anteroposterior (AP) directions, respectively, using the offline correction protocol and gold markers instead of bony anatomy. The subsequent decrease in margins was 1.7, 3.3, and 4 mm in the RL, SI, and AP directions, respectively. An offline correction protocol combined with an online correction protocol in the first four fractions reduced random errors further to 0.9, 1.1, and 1.0 mm in the RL, SI, and AP directions, respectively. A daily online protocol reduced all errors to <1 mm. Intrafraction motion had greater impact on the effectiveness of the online protocol than the offline protocols. Conclusions: An offline protocol using gold markers is effective in reducing the systematic error. The value of online protocols is reduced by intrafraction motion

  14. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  15. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the notable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the throughput of a WSN dramatically because of the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with a high trust value. The two methods, L1 optimization and the use of the social characteristic, complement each other and can correct propagated errors even when the corrupted fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
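
    The L1-optimization ingredient can be sketched on its own: with a sparse adversarial error e added to the coded packets, y = Bx + e, the source symbols are recovered by minimizing the L1 norm of the residual, which is a linear program. The coding matrix, sizes and error pattern below are illustrative assumptions; the secret channel, the error-trapping matrix and the reputation-based trust mechanism of the scheme are not modeled here.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 20, 60                               # source symbols, coded packets received
    B = rng.standard_normal((m, n))             # random linear (network-coding) matrix
    x_true = rng.standard_normal(n)
    e = np.zeros(m)
    e[rng.choice(m, 8, replace=False)] = 5.0 * rng.standard_normal(8)   # corrupted links
    y = B @ x_true + e

    # minimize sum(t) subject to -t <= y - Bx <= t, over the stacked variables [x, t].
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[ B, -np.eye(m)],
                     [-B, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

    x_hat = res.x[:n]
    print("recovery error:", np.linalg.norm(x_hat - x_true))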

  16. Fringe order error in multifrequency fringe projection phase unwrapping: reason and correction.

    Science.gov (United States)

    Zhang, Chunwei; Zhao, Hong; Zhang, Lu

    2015-11-10

    A multifrequency fringe projection phase unwrapping algorithm (MFPPUA) is important to fringe projection profilometry, especially when a discontinuous object is measured. However, a fringe order error (FOE) may occur when MFPPUA is adopted. An FOE will result in error to the unwrapped phase. Although this kind of phase error does not spread, it brings error to the eventual 3D measurement results. Therefore, an FOE or its adverse influence should be obviated. In this paper, reasons for the occurrence of an FOE are theoretically analyzed and experimentally explored. Methods to correct the phase error caused by an FOE are proposed. Experimental results demonstrate that the proposed methods are valid in eliminating the adverse influence of an FOE.

  17. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
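
    The estimation step described above is simple enough to sketch directly: time-average the analysis increments over a training period, divide by the 6-hour assimilation window to obtain a tendency, and add that tendency as a forcing term at every model step. The array shapes, the toy tendency function and the size of the synthetic bias below are illustrative assumptions only.

    import numpy as np

    def estimate_bias_forcing(analysis_increments, window_hours=6.0):
        """analysis_increments: (n_cycles, nlat, nlon) array of analysis minus 6-h forecast."""
        return analysis_increments.mean(axis=0) / window_hours   # field units per hour

    def step_with_online_correction(state, tendency, bias_forcing, dt_hours):
        """One forecast step with the estimated bias added to the tendency equation."""
        return state + dt_hours * (tendency(state) + bias_forcing)

    rng = np.random.default_rng(1)
    increments = 0.2 + 0.05 * rng.standard_normal((120, 10, 20))  # systematic +0.2 bias
    forcing = estimate_bias_forcing(increments)

    state = rng.standard_normal((10, 20))
    state = step_with_online_correction(state, lambda s: -0.1 * s, forcing, dt_hours=1.0)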

  18. Justifications of policy-error correction: a case study of error correction in the Three Mile Island Nuclear Power Plant Accident

    International Nuclear Information System (INIS)

    Kim, Y.P.

    1982-01-01

    The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, as well as the NRC, have studied the lessons learned and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island Plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms - determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions

  19. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  20. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2013-01-01

    In this paper we propose a novel scheme for on-the-fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaptation based on the link state...

  1. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
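
    The learning step the abstract describes amounts to regressing a per-pixel correction value on a feature vector and then applying the predicted correction during use. The sketch below reproduces only that generic step with scikit-learn on synthetic data; the actual features derived from the gray-level-gradient checkerboard, and the error model, are not given in the abstract, so everything here is an illustrative assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic calibration data: per-pixel features and the known distance error
    # (reference distance minus measured distance) observed on a calibration target.
    depth = rng.uniform(0.5, 5.0, n)
    amplitude = rng.uniform(0.1, 1.0, n)
    px, py = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
    features = np.column_stack([depth, amplitude, px, py])
    correction = 0.02 * np.sin(4 * depth) + 0.01 / amplitude        # toy systematic error
    targets = correction + 0.002 * rng.standard_normal(n)

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, targets)

    # Application: estimate a correction for one pixel and apply it to the measurement.
    pixel_features = np.array([[2.3, 0.4, 0.1, -0.2]])
    corrected_depth = 2.3 + forest.predict(pixel_features)[0]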

  2. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing a cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR filter structure. The correction method of the interpolation compensation filter coefficients is deduced. A 4GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective to attenuate the spurious spurs and improve the dynamic performance of the system.
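
    At its core, the compensation re-interpolates the samples of the skewed channel back onto the nominal sample grid; the paper packages this cubic-spline interpolation into fixed FIR filter coefficients, whereas the sketch below simply uses the scipy spline object to show the effect. The sample rate, test tone and skew value are made-up numbers for illustration.

    import numpy as np
    from scipy.interpolate import CubicSpline

    fs = 4e9                          # aggregate rate of the two-channel interleaved ADC
    skew = 3e-12                      # assumed sample-time error of one channel (s)
    t_nominal = np.arange(2048) / fs
    signal = lambda t: np.sin(2 * np.pi * 200e6 * t)

    # The skewed channel actually samples at t + skew, but its data are treated as
    # if they were taken on the nominal grid.
    samples = signal(t_nominal + skew)

    # Correction: interpolate the skewed samples back onto the nominal time grid.
    corrected = CubicSpline(t_nominal + skew, samples)(t_nominal)

    rms = lambda e: np.sqrt(np.mean(e ** 2))
    print("rms error before:", rms(samples - signal(t_nominal)))
    print("rms error after :", rms(corrected - signal(t_nominal)))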

  3. Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.

    Science.gov (United States)

    Song, Li; Florea, Liliana

    2015-01-01

    Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.

  4. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient decent algorithm

    Science.gov (United States)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent algorithm (SPGD) to correct the co-phase error. Compared with current methods, the SPGD method can avoid explicitly detecting the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
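
    The abstract only summarizes the algorithm, so the sketch below spells out a generic two-sided SPGD iteration: all correction channels are perturbed at once with a random bipolar disturbance, the image-quality metric is evaluated for the positive and negative perturbations, and the controls are updated along the estimated gradient. The quadratic metric is a stand-in for a real sharpness metric of the dual-aperture image, and the gain, amplitude and channel layout are illustrative values.

    import numpy as np

    def spgd(metric, u0, gain=1.0, amplitude=0.1, iterations=1000, seed=0):
        rng = np.random.default_rng(seed)
        u = np.array(u0, dtype=float)
        for _ in range(iterations):
            delta = amplitude * rng.choice([-1.0, 1.0], size=u.shape)  # bipolar disturbance
            j_plus, j_minus = metric(u + delta), metric(u - delta)
            u += gain * (j_plus - j_minus) * delta                     # gradient-ascent step
        return u

    # Toy problem: the metric is maximal when the piston and tilt errors vanish.
    true_error = np.array([0.8, -0.3, 0.5])                 # piston, tilt-x, tilt-y (rad)
    metric = lambda u: -np.sum((u - true_error) ** 2)       # proxy for image sharpness

    u_final = spgd(metric, u0=np.zeros(3))
    print("residual co-phase error:", np.abs(u_final - true_error))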

  5. Evaluation criteria for communications-related corrective action plans

    International Nuclear Information System (INIS)

    1997-02-01

    This document provides guidance and criteria for US Nuclear Regulatory Commission (NRC) personnel to use in evaluating corrective action plans for nuclear power plant communications. The document begins by describing the purpose, scope, and applicability of the evaluation criteria. Next, it presents background information concerning the communication process, root causes of communication errors, and development and implementation of corrective actions. The document then defines specific criteria for evaluating the effectiveness of the corrective action plan, interview protocols, and an observation protocol related to communication processes. This document is intended only as guidance. It is not intended to have the effect of a regulation, and it does not establish any binding requirements or interpretations of NRC regulations

  6. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
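
    A toy version of syndrome-only decoding can be written down for the single-error case. The sketch below uses a non-systematic Reed-Solomon-style code over the prime field GF(31) purely for readability; the paper's codes are byte-oriented, extended, and also cover double errors, so the field, the encoding and the code length here are simplifications and not the paper's techniques.

    P, ALPHA = 31, 3                       # prime field GF(31); 3 is a primitive element

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (coeffs[i] is the coefficient of x**i) over GF(P)."""
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    def encode(message):
        """Non-systematic encoding c(x) = m(x)(x - a)(x - a^2), so c(a) = c(a^2) = 0."""
        g = [pow(ALPHA, 3, P), (-(ALPHA + ALPHA ** 2)) % P, 1]
        code = [0] * (len(message) + 2)
        for i, mi in enumerate(message):
            for j, gj in enumerate(g):
                code[i + j] = (code[i + j] + mi * gj) % P
        return code

    def correct_single_error(received):
        """Locate and fix one symbol error directly from the two syndromes."""
        s1, s2 = poly_eval(received, ALPHA), poly_eval(received, pow(ALPHA, 2, P))
        if s1 == 0 and s2 == 0:
            return received                                # no error detected
        ratio = s2 * pow(s1, P - 2, P) % P                 # equals ALPHA**j for error position j
        j = next(i for i in range(len(received)) if pow(ALPHA, i, P) == ratio)
        value = s1 * s1 * pow(s2, P - 2, P) % P            # error value e = S1^2 / S2
        fixed = list(received)
        fixed[j] = (fixed[j] - value) % P
        return fixed

    codeword = encode([7, 20, 5, 1])
    corrupted = list(codeword)
    corrupted[3] = (corrupted[3] + 13) % P                 # inject a single symbol error
    assert correct_single_error(corrupted) == codeword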

  7. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec dchen@illinois.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Time Error Analysis of SOE System Using Network Time Protocol

    International Nuclear Information System (INIS)

    Keum, Jong Yong; Park, Geun Ok; Park, Heui Youn

    2005-01-01

    To find the accuracy of time in the fully digitalized SOE (Sequence of Events) system, we used a formal specification of the Network Time Protocol (NTP) Version 3, which is used to synchronize timekeeping among a set of distributed computers. By constructing a simple experimental environment and experimenting with internet time synchronization, we analyzed the time errors of the local clocks of the SOE system synchronized with a time server via computer networks.
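
    The core arithmetic of NTP that such an analysis builds on is the offset/delay estimate computed from the four timestamps of one request/response exchange. The sketch below shows that standard calculation; the timestamp values are made up to mimic a client whose clock runs 12 ms fast over an 8 ms round trip.

    def ntp_offset_and_delay(t1, t2, t3, t4):
        """t1: client send, t2: server receive, t3: server send, t4: client receive (s)."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated error of the client clock
        delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
        return offset, delay

    offset, delay = ntp_offset_and_delay(t1=100.0120, t2=100.0040,
                                         t3=100.0045, t4=100.0205)
    print(f"offset = {offset * 1e3:.2f} ms, delay = {delay * 1e3:.2f} ms")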

  9. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    Science.gov (United States)

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.

  10. Evaluation of positioning errors of the patient using cone beam CT megavoltage; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy allows the positioning of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient positioning. This work assesses systematic and random errors from the corrections made to a series of patients with different diseases using an offline megavoltage cone beam CT (CBCT) protocol. (Author)

  11. Metrological Array of Cyber-Physical Systems. Part 7. Additive Error Correction for Measuring Instrument

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-06-01

    Full Text Available Since the uncertainty approach cannot be used at the design stage, because measurement results are not yet available, the error approach can be applied instead, taking the nominal value of the instrument's transformation function as true. The limiting possibilities of additive error correction of measuring instruments for cyber-physical systems are studied on the basis of general and special methods of measurement. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. For a variety of correction methods, it is theoretically justified that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for self-calibration and in-place verification of the measuring instruments are also studied.

  12. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...

  13. Role of humidity and other correction factors in the AAPM TG-21 dosimetry protocol

    International Nuclear Information System (INIS)

    Rogers, D.W.; Ross, C.K.

    1988-01-01

    A detailed derivation is presented of the formulas required to determine Ngas and Dmed in the AAPM TG-21 dosimetry protocol. This protocol specifies how to determine the absorbed dose in an electron or photon beam when using exposure or absorbed dose calibrated ion chambers. It is shown that the expression given in TG-21's recent letter of clarification is incorrect. Accounting for humidity correctly increases, by 0.4%, all absorbed dose determinations using an exposure calibrated ion chamber. Taking into account other correction factors in the equation for exposure could also have varying, but significant effects (possibly over 1%). These are the stem scatter correction, the axial nonuniformity correction and the electrode correction for electrodes made of different materials from the wall. Attention is drawn to differences in the definitions of the exposure and absorbed dose calibration factors, Nx and ND, respectively, as supplied by the NBS and the NRCC

  14. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  15. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…

  16. Correction of clock errors in seismic data using noise cross-correlations

    Science.gov (United States)

    Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline

    2017-04-01

    Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
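
    The measurement step described above reduces, per station pair and component, to finding the lag that best aligns a daily cross-correlation function with the reference CCF. A bare-bones sketch of that step on synthetic data is given below; sub-sample refinement, the averaging over station pairs and components, and the real RHUM-RUM geometry are omitted, and the sampling rate and injected shift are made up.

    import numpy as np

    def measure_clock_error(daily_ccf, reference_ccf, dt):
        """Time shift (s) that best aligns the daily CCF with the reference CCF."""
        xcorr = np.correlate(daily_ccf, reference_ccf, mode="full")
        lag = np.argmax(xcorr) - (len(reference_ccf) - 1)   # lag in samples
        return lag * dt

    dt = 0.05                                               # 20 Hz sampling
    t = np.arange(-60, 60, dt)
    # Toy reference CCF: two surface-wave-like packets at +/- 20 s lag.
    reference = np.exp(-((np.abs(t) - 20.0) ** 2) / 4.0) * np.cos(2 * np.pi * 0.2 * t)
    daily = np.interp(t - 0.15, t, reference)               # daily CCF shifted by +0.15 s

    print("estimated clock error:", measure_clock_error(daily, reference, dt), "s")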

  17. Correction for dynamic bias error in transmission measurements of void fraction

    International Nuclear Information System (INIS)

    Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.

    2012-01-01

    Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved to the expense of marginal decreases in precision.
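
    The first-order correction derived in the paper is not reproduced in the abstract; the sketch below shows only a generic Taylor-type version of such a correction for a fluctuating attenuation coefficient, in which a variance estimate of the dynamics plays exactly the role described above. The path length, attenuation statistics and Gaussian fluctuation model are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    x = 0.10                                             # transmission path length (m)
    mu = 60.0 + 15.0 * rng.standard_normal(100_000)      # fluctuating attenuation (1/m)

    # A slow, time-averaging detector sees the mean intensity, not the mean attenuation.
    mean_intensity = np.mean(np.exp(-mu * x))
    mu_naive = -np.log(mean_intensity) / x               # biased (dynamic-bias) estimate
    mu_true = np.mean(mu)

    # First-order correction using a variance estimate of the fluctuations,
    # e.g. from high-speed time-resolved acquisition or a priori knowledge.
    var_mu = np.var(mu)
    mu_corrected = mu_naive + 0.5 * x * var_mu

    print(f"true {mu_true:.2f}  naive {mu_naive:.2f}  corrected {mu_corrected:.2f}  (1/m)")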

  18. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Herzog, Hans

    2013-01-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled

  19. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  20. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Science.gov (United States)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  1. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    Science.gov (United States)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    , and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94 %, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.

  2. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Full Text Available Abstract Background The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Methods Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatments delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10

  3. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    International Nuclear Information System (INIS)

    Cai, Gang; Hu, Wei-Gang; Chen, Jia-Yi; Yu, Xiao-Li; Pan, Zi-Qiang; Yang, Zhao-Zhi; Guo, Xiao-Mao; Shao, Zhi-Min; Jiang, Guo-Liang

    2010-01-01

    The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatments delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction
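
    For readers who want to reproduce the arithmetic, a commonly used way to turn systematic (Σ) and random (σ) errors into a CTV-to-PTV margin is the van Herk recipe M = 2.5Σ + 0.7σ. The abstract does not state which recipe the authors applied, so the formula below is an assumption, and the paper's reported margins additionally fold in the RAIF errors at each action level; the numbers fed in here are only the skin-mark setup errors quoted above.

    def ctv_to_ptv_margin(sigma_systematic_mm, sigma_random_mm):
        # van Herk margin recipe (assumed, not stated in the abstract)
        return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

    # skin-mark setup errors (systematic, random) in mm, per axis, from the abstract
    for axis, (big_sigma, small_sigma) in {"LR": (2.1, 1.8), "SI": (3.1, 2.3), "AP": (2.3, 2.0)}.items():
        print(axis, round(ctv_to_ptv_margin(big_sigma, small_sigma), 1), "mm")   # 6.5, 9.4, 7.2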

  4. Design of nanophotonic circuits for autonomous subsystem quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2011-05-15

    We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.

  5. Forecasting the price of gold: An error correction approach

    Directory of Open Access Journals (Sweden)

    Kausik Gangopadhyay

    2016-03-01

    Gold prices in the Indian market may be influenced by a multitude of factors such as the value of gold in investment decisions, as an inflation hedge, and in consumption motives. We develop a model to explain and forecast gold prices in India, using a vector error correction model. We identify investment decision and inflation hedge as prime movers of the data. We also present out-of-sample forecasts of our model and the related properties.
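
    As a hedged illustration of the modelling step, the sketch below fits a vector error-correction model with statsmodels; the file name, the companion series (a stock index and CPI standing in for the investment and inflation-hedge motives) and the lag/rank choices are placeholders rather than the authors' exact specification.

    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    df = pd.read_csv("gold_india.csv", parse_dates=["date"], index_col="date")
    endog = df[["gold_price", "stock_index", "cpi"]]            # placeholder variables

    rank = select_coint_rank(endog, det_order=0, k_ar_diff=2)   # Johansen trace test
    res = VECM(endog, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci").fit()

    print(res.alpha)               # loading (error-correction) coefficients
    print(res.beta)                # cointegrating vectors
    print(res.predict(steps=12))   # out-of-sample forecasts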

  6. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably…

  7. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably…

  8. Novel Ontologies-based Optical Character Recognition-error Correction Cooperating with Graph Component Extraction

    Directory of Open Access Journals (Sweden)

    Sarunya Kanjanawattana

    2017-01-01

    Extracting graph information clearly contributes to readers who are interested in graph interpretation, because significant information presented in the graph can be obtained. A typical tool used to transform image-based characters into computer-editable characters is optical character recognition (OCR). Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This becomes a serious problem because misrecognition conveys misleading information to readers and causes miscommunication. In this study, we present a novel method for OCR-error correction based on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used the graph component extraction method proposed in our previous study to omit irrelevant parts from graph components. It was applied to clean and prepare input data for this OCR-error correction. The main objectives of this paper are to extract significant information from the graph using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance with the highest accuracies and F-measures. Moreover, we found that our input data contained less noise because of the efficiency of our graph component extraction. Based on the evidence, we conclude that our solution to the OCR problem achieves the objectives.

  9. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

    Compensation for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditional compensation was done according to the measurement result from a profilometer, which required a long measurement time and led to low processing efficiency. A new compensation method is put forward in this article, in which the correction of the error of the diamond tool's cutting edge is done according to the measurement result from a digital interferometer. First, the detailed theoretical calculation related to the compensation method was deduced. Then, the effect after compensation was simulated by computer. Finally, a φ50 mm workpiece was diamond turned and correction turned on a Nanotech 250 machine. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  10. Cause of depth error of borehole logging and its correction

    International Nuclear Information System (INIS)

    Iida, Yoshimasa; Ikeda, Koki; Tsuruta, Tadahiko; Ito, Hiroaki; Goto, Junichi.

    1996-01-01

    Data from borehole logging can be used for detailed analysis of geological structures. Depths measured by portable borehole loggers commonly shift by a few meters at depths of 400 to 500 meters. Therefore, the cause of the depth error has to be recognized to make proper corrections for detailed structural analysis. Correlation between drill-core depths and in-rod radiometric logging depths has been performed in detail for exploration drill holes in the Athabasca basin, Canada. As a result, a common tendency of logging depth shift has been recognized, and an empirical formula (quadratic equation) for this has been obtained. The physical meaning of the formula and the cause of the depth error have been considered. (author)
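
    A minimal sketch of how such an empirical quadratic depth correction can be derived and applied is given below; the (logging depth, core depth) pairs are invented for illustration and are not the Athabasca-basin data.

    import numpy as np

    logging_depth = np.array([50.0, 150.0, 250.0, 350.0, 450.0])   # m, in-rod logging
    core_depth    = np.array([50.3, 150.9, 251.8, 353.0, 454.3])   # m, drill core

    # fit shift = a*d**2 + b*d + c, the quadratic form of the empirical correction
    a, b, c = np.polyfit(logging_depth, core_depth - logging_depth, deg=2)

    def correct_depth(d):
        return d + (a * d ** 2 + b * d + c)

    print(correct_depth(500.0))   # corrected depth for a 500 m logging reading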

  11. Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage

    Directory of Open Access Journals (Sweden)

    Juha Partala

    2017-01-01

    A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.

  12. AN ERROR CORRECTION MODEL APPROACH AS A DETERMINANT OF STOCK PRICES

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    The aim of this research was to find the effect of profitability, rate of interest, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1 month) was used to represent the interest rate. This research found that all variables simultaneously affected the stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.

  13. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  14. Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

    Science.gov (United States)

    Laforest, Martin

    Quantum information processing has been the subject of countless discoveries since the early 1990s. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gate for…

  15. HyDEn: A Hybrid Steganocryptographic Approach for Data Encryption Using Randomized Error-Correcting DNA Codes

    Directory of Open Access Journals (Sweden)

    Dan Tulpan

    2013-01-01

    This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
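
    The toy sketch below conveys the flavour of character-level DNA encoding with Hamming protection: each byte is encoded with a binary Hamming(7,4) code and the bits are then written as bases. It deliberately omits what makes HyDEn itself work, namely the custom quaternary code words, the key-driven randomized codeword assignment and the cyclic permutations, so it is an illustration rather than the authors' scheme.

    # map each bit pair to a DNA base
    BASES = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

    def hamming74(nibble):
        """Encode 4 data bits (int 0..15) into a 7-bit Hamming(7,4) codeword."""
        d1, d2, d3, d4 = (nibble >> 3) & 1, (nibble >> 2) & 1, (nibble >> 1) & 1, nibble & 1
        p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def char_to_dna(ch):
        """Encode one extended-ASCII character as a 7-base DNA word (14 protected bits)."""
        bits = []
        for nibble in ((ord(ch) >> 4) & 0xF, ord(ch) & 0xF):
            bits.extend(hamming74(nibble))
        return "".join(BASES[(bits[i] << 1) | bits[i + 1]] for i in range(0, len(bits), 2))

    print(char_to_dna("Q"))   # e.g. "CAGTGGC"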

  16. Daily online bony correction is required for prostate patients without fiducial markers or soft-tissue imaging.

    Science.gov (United States)

    Johnston, M L; Vial, P; Wiltshire, K L; Bell, L J; Blome, S; Kerestes, Z; Morgan, G W; O'Driscoll, D; Shakespeare, T P; Eade, T N

    2011-09-01

    To compare online position verification strategies with offline correction protocols for patients undergoing definitive prostate radiotherapy. We analysed 50 patients with implanted fiducial markers undergoing curative prostate radiation treatment, all of whom underwent daily kilovoltage imaging using an on-board imager. For each treatment, patients were set up initially with skin tattoos and in-room lasers. Orthogonal on-board imager images were acquired, and the couch shifts required to match both bony anatomy and the fiducial markers were recorded. The set-up error using skin tattoos and offline bone correction was compared with online bone correction. The fiducial markers were used as the reference. Data from 1923 fractions were analysed. The systematic error was ≤1 mm for all protocols. The average random error was 2-3 mm for online bony correction and 3-5 mm for skin tattoos or offline-bone. Online-bone showed a significant improvement over offline-bone in the number of patients with >5 mm set-up errors in more than 10% and more than 20% of their fractions. Daily online bony correction is required for prostate patients without fiducial markers or daily soft-tissue imaging. Copyright © 2011 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  17. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  18. Retesting the Limits of Data-Driven Learning: Feedback and Error Correction

    Science.gov (United States)

    Crosthwaite, Peter

    2017-01-01

    An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…

  19. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  20. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  1. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller increasing the flexibility and the optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller will be highlighted, such as its very useful trigger logic interface, which allows in principle exploring various error field experiment scenarios

  2. Evaluation of positioning errors of the patient using cone beam CT megavoltage

    International Nuclear Information System (INIS)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-01-01

    Image-guided radiation therapy makes it possible to assess and correct the positioning of the patient in the treatment unit, thus reducing the uncertainties due to patient positioning. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases using an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  3. Error field and its correction strategy in tokamaks

    International Nuclear Information System (INIS)

    In, Yongkyoon

    2014-01-01

    While error field correction (EFC) aims to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application aims to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response to non-axisymmetric fields increases with increasing beta, feedback-controlled EFC is a more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of the EFC strategy is briefly discussed. (author)

  4. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

  5. Error correcting code with chip kill capability and power saving enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY

    2011-08-30

    A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
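
    The zero/non-zero syndrome logic described above can be seen in miniature with a single-bit-correcting Hamming(7,4) code; the patent's chip-kill code works on multi-bit symbols spread across chips and is far stronger, so the sketch below is only an analogy, not the disclosed scheme.

    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix; column i is the
                  [0, 1, 1, 0, 0, 1, 1],     # binary representation of i+1
                  [0, 0, 0, 1, 1, 1, 1]])

    def check_and_correct(codeword):
        syndrome = H.dot(codeword) % 2
        if not syndrome.any():
            return codeword, None             # all syndromes zero: data is clean
        pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2]) - 1
        corrected = codeword.copy()
        corrected[pos] ^= 1                   # flip the single erroneous bit
        return corrected, pos

    clean = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword (H @ clean % 2 == 0)
    noisy = clean.copy()
    noisy[4] ^= 1                             # inject a single-bit error
    print(check_and_correct(noisy))           # recovers `clean`, reports position 4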

  6. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As
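
    A minimal sketch of the per-pixel (2D matrix) correction idea: build the matrix as the pixel-wise ratio of measured to predicted portal dose over calibration fields, then multiply every predicted image by it. Averaging over several calibration fields is an assumption; the paper only states that the matrix comes from quantitative comparison of predicted and measured images spanning the detector.

    import numpy as np

    def build_correction_matrix(measured_stack, predicted_stack):
        """measured_stack, predicted_stack: arrays of shape (n_fields, ny, nx)."""
        return (measured_stack / predicted_stack).mean(axis=0)

    def apply_correction(predicted_image, correction_matrix):
        return predicted_image * correction_matrix

    # illustrative use with random stand-in images
    rng = np.random.default_rng(0)
    measured = rng.uniform(0.9, 1.1, size=(5, 384, 512))
    predicted = np.ones((5, 384, 512))
    C = build_correction_matrix(measured, predicted)
    corrected = apply_correction(predicted[0], C)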

  7. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  8. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines with non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least-squares inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of different components in InSAR time-series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time-series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with applications to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same…
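
    The weighted least-squares inversion mentioned above reduces, per pixel, to solving an over-determined linear system in which each interferogram observes the sum of the displacement increments it spans; the toy network, phases and weights below are illustrative and are not PySAR internals.

    import numpy as np

    dates = 5                                    # number of SAR acquisitions
    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
    phase = np.array([0.5, 0.4, 0.7, 0.3, 0.9, 1.1, 1.0])    # unwrapped phase per pair
    weight = np.array([1.0, 0.9, 0.8, 1.0, 0.6, 0.7, 0.9])   # e.g. mean coherence

    A = np.zeros((len(pairs), dates - 1))
    for k, (i, j) in enumerate(pairs):
        A[k, i:j] = 1.0                          # interferogram i-j spans increments i..j-1

    W = np.diag(weight)
    increments, *_ = np.linalg.lstsq(W @ A, W @ phase, rcond=None)
    timeseries = np.concatenate([[0.0], np.cumsum(increments)])
    print(timeseries)                            # cumulative displacement per acquisition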

  9. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    International Nuclear Information System (INIS)

    Kim, Isaac H.

    2011-01-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  10. Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice

    Science.gov (United States)

    Kim, Isaac H.

    2011-05-01

    We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.

  11. IMPACT OF TRADE OPENNESS ON OUTPUT GROWTH: CO INTEGRATION AND ERROR CORRECTION MODEL APPROACH

    Directory of Open Access Journals (Sweden)

    Asma Arif

    2012-01-01

    This study analyzed the long run relationship between trade openness and output growth for Pakistan using annual time series data for 1972-2010. The study follows the Engle and Granger cointegration analysis and error correction approach to analyze the long run relationship between the two variables. The Error Correction Term (ECT) for output growth and trade openness is significant at the 5% level of significance and indicates a positive long run relation between the variables. This study has also analyzed the causality between trade openness and output growth by using the Granger causality test. The results of the Granger causality test show that there is a bi-directional significant relationship between trade openness and economic growth.
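
    A minimal sketch of the Engle-Granger two-step error-correction approach named above, using statsmodels; the file name and variable names are placeholders, and the paper's exact specification (lags, deterministic terms) is not reproduced.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import coint

    df = pd.read_csv("pakistan_1972_2010.csv")        # hypothetical annual data
    y, x = df["output_growth"], df["trade_openness"]

    print(coint(y, x))                                # Engle-Granger cointegration test

    # Step 1: long-run (levels) regression; residuals measure deviation from equilibrium
    long_run = sm.OLS(y, sm.add_constant(x)).fit()
    ect = long_run.resid.shift(1)

    # Step 2: short-run dynamics with the lagged error-correction term (ECT)
    dy, dx = y.diff(), x.diff()
    ecm_data = pd.concat([dy, dx, ect], axis=1, keys=["dy", "dx", "ect"]).dropna()
    ecm = sm.OLS(ecm_data["dy"], sm.add_constant(ecm_data[["dx", "ect"]])).fit()
    print(ecm.params["ect"])                          # speed-of-adjustment coefficient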

  12. Likelihood-based inference for cointegration with nonlinear error-correction

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders Christian

    2010-01-01

    We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties… and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study…

  13. Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients

    DEFF Research Database (Denmark)

    Müller, Monika; Biurrun Manresa, José; Limacher, Andreas

    2017-01-01

    BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times...

  14. An Analysis of Error Reconciliation Protocols for use in Quantum Key Distribution

    Science.gov (United States)

    2012-02-01

    …of the messages passed, and that the time to prepare or separate the message information is negligible. Finally, for this experiment all errors…of interactions becomes negligible. In fact, of the three protocols, experiments performed here have shown that Winnow produces the highest average…
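
    The protocols compared in such analyses (e.g. Cascade and Winnow) are built on a simple parity/binary-search primitive: when a block's parity disagrees between the two parties, repeatedly halving it and comparing parities locates one error. The sketch below shows only that primitive; real reconciliation protocols add multiple shuffled passes, block-size tuning and information-leakage accounting, none of which appear here.

    def parity(bits, lo, hi):
        return sum(bits[lo:hi]) % 2

    def binary_correct(alice, bob, lo, hi):
        """Fix one error in bob[lo:hi], given that its parity differs from alice's."""
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if parity(alice, lo, mid) != parity(bob, lo, mid):
                hi = mid                      # error is in the left half
            else:
                lo = mid                      # error is in the right half
        bob[lo] ^= 1                          # flip the located bit

    alice = [1, 0, 1, 1, 0, 0, 1, 0]
    bob   = [1, 0, 1, 0, 0, 0, 1, 0]          # one flipped bit at index 3
    if parity(alice, 0, len(alice)) != parity(bob, 0, len(bob)):
        binary_correct(alice, bob, 0, len(alice))
    print(bob == alice)                       # True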

  15. Piggyback intraocular lens implantation to correct pseudophakic refractive error after segmental multifocal intraocular lens implantation.

    Science.gov (United States)

    Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina

    2014-04-01

    To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.

  16. Random access to mobile networks with advanced error correction

    Science.gov (United States)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is made implicitly by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decisions, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots being equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  17. Bias correction for selecting the minimal-error classifier from many machine learning models.

    Science.gov (United States)

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
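
    A minimal sketch of the learning-curve idea behind the IPL correction: fit an inverse power law err(n) = a·n^(-b) + c to error rates estimated at several training-set sizes and extrapolate to larger n. The numbers are made up, and the paper's full bias-correction procedure involves more than this single fit.

    import numpy as np
    from scipy.optimize import curve_fit

    def ipl(n, a, b, c):
        return a * n ** (-b) + c

    sizes = np.array([10, 20, 30, 40, 50])                # training-set sizes
    errors = np.array([0.42, 0.33, 0.29, 0.27, 0.26])     # cross-validation error estimates

    params, _ = curve_fit(ipl, sizes, errors, p0=(1.0, 0.5, 0.2))
    print(ipl(100, *params))                              # extrapolated error at n = 100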

  18. In-memory interconnect protocol configuration registers

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  19. In-memory interconnect protocol configuration registers

    Science.gov (United States)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  20. Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel

    Directory of Open Access Journals (Sweden)

    Yuriy YATSUK

    2015-09-01

    The major error factors of multi-channel measuring instruments, with both the classical structure and the isolated one, are identified on the basis of an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested and their metrological properties during automatic error adjustment are analysed. It was experimentally found that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This has confirmed the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for proper verification.
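
    A minimal sketch of removing additive (offset) and multiplicative (gain) errors of a measuring channel from two code-controlled reference measures, in the spirit of the remote-adjustment method above; the reference values and raw readings are illustrative.

    def calibrate_two_point(raw_zero, raw_ref, ref_value):
        """Return gain and offset such that value = gain * raw + offset."""
        gain = ref_value / (raw_ref - raw_zero)     # removes the multiplicative error
        offset = -gain * raw_zero                   # removes the additive error
        return gain, offset

    gain, offset = calibrate_two_point(raw_zero=0.012, raw_ref=1.018, ref_value=1.000)
    corrected = gain * 0.507 + offset               # correct a subsequent raw reading
    print(round(corrected, 4))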

  1. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

    An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  2. Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software

    Science.gov (United States)

    Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg

    2017-09-01

    100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to the utmost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires the rethinking of the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.

  3. Cointegration, error-correction, and the relationship between GDP and energy. The case of South Korea and Singapore

    International Nuclear Information System (INIS)

    Glasure, Yong U.; Lee, Aie-Rie

    1998-01-01

    This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and unidirectional causal relationship from energy consumption to GDP for Singapore

  4. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    Science.gov (United States)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

    A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, applicable ECC strength of SCM and NAND flash is evaluated independently by fixing ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with small correctable bit is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.

  5. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  6. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    Science.gov (United States)

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

    Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  7. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
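
    A minimal sketch of decomposing two overlapping peaks by least-squares curve fitting and iterating until the residual stops improving; this is one plausible reading of the residual-feedback step, not the authors' exact algorithm, and the synthetic Cu/Fe-like doublet is invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_gaussians(x, a1, c1, w1, a2, c2, w2):
        return (a1 * np.exp(-((x - c1) / w1) ** 2)
                + a2 * np.exp(-((x - c2) / w2) ** 2))

    x = np.linspace(321, 327, 400)
    y = two_gaussians(x, 1.0, 324.7, 0.3, 0.6, 325.1, 0.25) + 0.01 * np.random.randn(x.size)

    p = np.array([0.8, 324.5, 0.4, 0.5, 325.3, 0.4])     # initial guess
    prev_res = np.inf
    for _ in range(5):                                    # refit, feeding the parameters back
        p, _ = curve_fit(two_gaussians, x, y, p0=p)
        res = np.sum((y - two_gaussians(x, *p)) ** 2)
        if prev_res - res < 1e-8:                         # stop when the residual no longer drops
            break
        prev_res = res
    print(p)                                              # fitted amplitudes, centres, widths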

  8. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)

  9. High speed and adaptable error correction for megabit/s rate quantum key distribution.

    Science.gov (United States)

    Dixon, A R; Sato, H

    2014-12-02

    Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.

  10. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    Science.gov (United States)

    Chao, Luo

    2015-11-01

    In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss during communication, the proposed scheme is able to detect and correct errors in real time. In order to guarantee security, the fractional-order complex chaotic system with the shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  11. Evaluation of Setup Error Correction for Patients Using On Board Imager in Image Guided Radiation Therapy

    International Nuclear Information System (INIS)

    Kang, Soo Man

    2008-01-01

    To reduce side effects in image guided radiation therapy (IGRT), to improve the quality of life of patients, and to achieve accurate setup conditions, various setup correction conditions were compared and evaluated using the on-board imager (OBI) during setup. Thirty cases each of the head, neck, chest, abdomen, and pelvis, among 150 IGRT patients, were corrected after confirmation using the OBI every 2-3 days. Also, the difference between the setup using the skin marker and the anatomic setup using the OBI was evaluated. General setup errors (transverse, coronal, sagittal) through the OBI at the original setup position were head and neck: 1.3 mm, brain: 2 mm, chest: 3 mm, abdomen: 3.7 mm, pelvis: 4 mm. For patients with errors of more than 3 mm, the immobilization devices and patient motion were checked in the treatment room. Moreover, in the case of female patients treated for head and neck or brain tumors, the errors were attributable to the position of the hair. Therefore, in each case with an error of over 3 mm, the treatment was carried out after repeating the setup. The mean error values of each part estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results showed that correcting the setup for each treatment through the OBI is extremely demanding, given the importance of setup in radiation treatment. However, by establishing an average standard for patients based on these results, better patient satisfaction and treatment results could be obtained.

  12. Evaluation of Setup Error Correction for Patients Using On Board Imager in Image Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Soo Man [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Busan (Korea, Republic of)

    2008-09-15

    To reduce side effects in image guided radiation therapy (IGRT), to improve the quality of life of patients, and to achieve accurate setup conditions, various setup correction conditions were compared and evaluated using the on-board imager (OBI) during setup. Thirty cases each of the head, neck, chest, abdomen, and pelvis, among 150 IGRT patients, were corrected after confirmation using the OBI every 2-3 days. Also, the difference between the setup using the skin marker and the anatomic setup using the OBI was evaluated. General setup errors (transverse, coronal, sagittal) through the OBI at the original setup position were head and neck: 1.3 mm, brain: 2 mm, chest: 3 mm, abdomen: 3.7 mm, pelvis: 4 mm. For patients with errors of more than 3 mm, the immobilization devices and patient motion were checked in the treatment room. Moreover, in the case of female patients treated for head and neck or brain tumors, the errors were attributable to the position of the hair. Therefore, in each case with an error of over 3 mm, the treatment was carried out after repeating the setup. The mean error values of each part estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results showed that correcting the setup for each treatment through the OBI is extremely demanding, given the importance of setup in radiation treatment. However, by establishing an average standard for patients based on these results, better patient satisfaction and treatment results could be obtained.

  13. Application of the No Action Level (NAL) protocol to correct for prostate motion based on electronic portal imaging of implanted markers

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Os, Marjolein J.H. van; Jansen, Peter P.; Heijmen, Ben J.M.

    2005-01-01

    Purpose: To evaluate the efficacy of the No Action Level (NAL) off-line correction protocol in the reduction of systematic prostate displacements as determined from electronic portal images (EPI) using implanted markers. Methods and materials: Four platinum markers, two near the apex and two near the base of the prostate, were implanted for localization purposes in patients who received fractionated high dose rate brachytherapy. During the following course of 25 fractions of external beam radiotherapy, the position of each marker relative to the corresponding position in digitally reconstructed radiographs (DRRs) was measured in EPI in 15 patients for on average 17 fractions per patient. These marker positions yield the composite displacements due to both setup error and internal prostate motion, relative to the planning computed tomography scan. As the NAL protocol is highly effective in reducing systematic errors (recurring each fraction) due to setup inaccuracy alone, we investigated its efficacy in reducing systematic composite displacements. The analysis was performed for the center of mass (COM) of the four markers, as well as for the cranial and caudal markers separately. Furthermore, the impact of prostate rotation on the achieved positioning accuracy was determined. Results: In case of no setup corrections, the standard deviations of the systematic composite displacements of the COM were 3-4 mm in the craniocaudal and anterior-posterior directions, and 2 mm in the left-right direction. The corresponding SDs of the random displacements (interfraction fluctuations) were 2-3 mm in each direction. When applying a NAL protocol based on three initial treatment fractions, the SDs of the systematic COM displacements were reduced to 1-2 mm. Displacements at the cranial end of the prostate were slightly larger than at the caudal end, and quantitative analysis showed this originates from left-right axis rotations about the prostate apex. Further analysis revealed
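
    The NAL idea summarized above lends itself to a very small numerical sketch: average the displacements measured in the first few fractions and apply the negated mean as a fixed correction in all later fractions. The fraction count, displacement values and noise levels below are illustrative assumptions, not data from the study.

```python
import numpy as np

# Minimal sketch of the No Action Level (NAL) idea: measure the marker (or COM)
# displacement in the first n_measure fractions, take the mean, and apply its
# negative as a fixed setup correction for the remaining fractions.

def nal_correction(displacements_mm, n_measure=3):
    """Return the correction vector (negative mean of the first n_measure shifts)."""
    return -np.mean(displacements_mm[:n_measure], axis=0)

rng = np.random.default_rng(0)
systematic = np.array([2.0, -3.0, 1.0])                   # mm, recurring each fraction
per_fraction = systematic + rng.normal(0, 2.0, (25, 3))   # add random day-to-day motion

correction = nal_correction(per_fraction, n_measure=3)
residual = per_fraction[3:] + correction                  # displacements after correction

print("applied correction (mm):", np.round(correction, 2))
print("residual systematic error (mm):", np.round(residual.mean(axis=0), 2))
```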

  14. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Science.gov (United States)

    2010-01-01

    ... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...

  15. The Differential Effect of Two Types of Direct Written Corrective Feedback on Noticing and Uptake: Reformulation vs. Error Correction

    Directory of Open Access Journals (Sweden)

    Rosa M. Manchón

    2010-06-01

    Full Text Available Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward. Framed in cognitively-oriented research on corrective feedback, this study investigated the effect of two types of written correction (error correction and reformulation) on the processes of noticing and uptake. Eight secondary-school learners of English took part in an experiment consisting of three stages: composition, comparison-noticing, and revision. Noticing was operationally defined in terms of the number of corrections registered by the learners during the comparison-noticing stage, while uptake was operationalized as the type and number of revisions carried out in the final stage of the experiment. Our results confirm the findings of the

  16. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate
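
    As a rough illustration of the error-correction format mentioned above (and not the authors' hierarchical Bayes specification), a minimal single-equation error-correction model lets short-run changes in sales respond both to current price changes and to last period's deviation from the long-run sales-price relation; all data below are simulated.

```python
import numpy as np

# Minimal single-equation error-correction sketch: estimate the long-run
# sales/price relation, then regress sales changes on price changes and the
# lagged deviation (error-correction term). Data are simulated.

rng = np.random.default_rng(1)
n = 200
price = np.cumsum(rng.normal(0, 0.1, n)) + 10
sales = 50 - 2.0 * price + rng.normal(0, 0.5, n)       # long-run relation

# Long-run (cointegrating) regression: sales_t = a + b * price_t + u_t
A = np.column_stack([np.ones(n), price])
a, b = np.linalg.lstsq(A, sales, rcond=None)[0]
ect = sales - (a + b * price)                           # error-correction term

# Short-run ECM: d_sales_t = c + alpha * ect_{t-1} + gamma * d_price_t + e_t
d_sales, d_price = np.diff(sales), np.diff(price)
X = np.column_stack([np.ones(n - 1), ect[:-1], d_price])
c, alpha, gamma = np.linalg.lstsq(X, d_sales, rcond=None)[0]
print(f"adjustment speed alpha ~ {alpha:.2f}, immediate price effect gamma ~ {gamma:.2f}")
```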

  17. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies

    DEFF Research Database (Denmark)

    Tybjærg-Hansen, Anne

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...

  18. Evaluation of different set-up error corrections on dose-volume metrics in prostate IMRT using CBCT images

    International Nuclear Information System (INIS)

    Hirose, Yoshinori; Tomita, Tsuneyuki; Kitsuda, Kenji; Notogawa, Takuya; Miki, Katsuhito; Nakamura, Mitsuhiro; Nakamura, Kiyonao; Ishigaki, Takashi

    2014-01-01

    We investigated the effect of different set-up error corrections on dose-volume metrics in intensity-modulated radiotherapy (IMRT) for prostate cancer under different planning target volume (PTV) margin settings using cone-beam computed tomography (CBCT) images. A total of 30 consecutive patients who underwent IMRT for prostate cancer were retrospectively analysed, and 7-14 CBCT datasets were acquired per patient. Interfractional variations in dose-volume metrics were evaluated under six different set-up error corrections, including tattoo, bony anatomy, and four different target matching groups. Set-up errors were incorporated into planning the isocenter position, and dose distributions were recalculated on CBCT images. These processes were repeated under two different PTV margin settings. In the on-line bony anatomy matching groups, systematic error (Σ) was 0.3 mm, 1.4 mm, and 0.3 mm in the left-right, anterior-posterior (AP), and superior-inferior directions, respectively. Σ in three successive off-line target matchings was finally comparable with that in the on-line bony anatomy matching in the AP direction. Although doses to the rectum and bladder wall were reduced for a small PTV margin, averaged reductions in the volume receiving 100% of the prescription dose from planning were within 2.5% under all PTV margin settings for all correction groups, with the exception of the tattoo set-up error correction only (≥ 5.0%). Analysis of variance showed no significant difference between on-line bony anatomy matching and target matching. While variations between the planned and delivered doses were smallest when target matching was applied, the use of bony anatomy matching still ensured the planned doses. (author)

  19. Psychometric properties of the national eye institute refractive error correction quality-of-life questionnaire among Iranian patients

    Directory of Open Access Journals (Sweden)

    Amir H Pakpour

    2013-01-01

    Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.

  20. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
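
    A small illustration of the practice-corrected reliable change index discussed above, under assumed control-group numbers: the practice effect is subtracted from the observed retest change and the result is divided by an error term derived from the within-subject standard deviation; the -1.645 cutoff is one conventional choice.

```python
import numpy as np

# Illustrative practice-corrected reliable change index of the general form
# (retest - baseline - practice_effect) / error_term. Control data, patient
# scores and the cutoff are illustrative assumptions.

def rci_practice_corrected(baseline, retest, practice_effect, error_sd):
    return (retest - baseline - practice_effect) / error_sd

# Practice effect and within-subject SD (WSD) estimated from a healthy control group.
controls_t1 = np.array([48, 52, 50, 47, 55], dtype=float)
controls_t2 = np.array([50, 55, 51, 49, 58], dtype=float)
practice = np.mean(controls_t2 - controls_t1)
wsd = np.std(controls_t2 - controls_t1, ddof=1) / np.sqrt(2)   # one common WSD estimate

patient_rci = rci_practice_corrected(baseline=51, retest=44,
                                      practice_effect=practice, error_sd=wsd)
print("RCI =", round(patient_rci, 2), "-> possible POCD" if patient_rci < -1.645 else "")
```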

  1. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate impulse noise in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.

  2. Research and application of a novel hybrid decomposition-ensemble learning paradigm with error correction for daily PM10 forecasting

    Science.gov (United States)

    Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang

    2018-03-01

    In this paper, a hybrid decomposition-ensemble learning paradigm combined with error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. An extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is used to forecast the components generated by FEEMD and VMD. To demonstrate the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are used in an empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.
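
    A generic sketch of the error-correction step described above, with simple autoregressive models standing in for the FEEMD/VMD decompositions and the CS-optimized ELM used in the paper: forecast the series, model the resulting one-step errors as a second series, and add the error forecast back to the main forecast. The data are synthetic.

```python
import numpy as np

# Generic forecast-plus-error-correction sketch with least-squares AR models.
# This is a stand-in for the paper's FEEMD/VMD + ELM machinery, not the method itself.

def fit_ar(y, p):
    """Least-squares AR(p): coefficients for [1, y_{t-1}, ..., y_{t-p}]."""
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - k - 1:len(y) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def ar_predict(y, coef):
    p = len(coef) - 1
    return coef[0] + coef[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(2)
t = np.arange(400)
pm10 = 60 + 20 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 5, t.size)   # synthetic series

train, test = pm10[:350], pm10[350:]
coef = fit_ar(train, p=5)
errors = np.array([train[i] - ar_predict(train[:i], coef) for i in range(50, 350)])
err_coef = fit_ar(errors, p=5)                       # second model: forecast the errors

base = ar_predict(train, coef)
corrected = base + ar_predict(errors, err_coef)       # add the error forecast back
print(f"next-day forecast: base {base:.1f}, with error correction {corrected:.1f}, actual {test[0]:.1f}")
```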

  3. Editing disulphide bonds: error correction using redox currencies.

    Science.gov (United States)

    Ito, Koreaki

    2010-01-01

    The disulphide bond-introducing enzyme of bacteria, DsbA, sometimes oxidizes non-native cysteine pairs. DsbC should rearrange the resulting incorrect disulphide bonds into those with correct connectivity. DsbA and DsbC receive oxidizing and reducing equivalents, respectively, from respective redox components (quinones and NADPH) of the cell. Two mechanisms of disulphide bond rearrangement have been proposed. In the redox-neutral 'shuffling' mechanism, the nucleophilic cysteine in the DsbC active site forms a mixed disulphide with a substrate and induces disulphide shuffling within the substrate part of the enzyme-substrate complex, followed by resolution into a reduced enzyme and a disulphide-rearranged substrate. In the 'reduction-oxidation' mechanism, DsbC reduces those substrates with wrong disulphides so that DsbA can oxidize them again. In this issue of Molecular Microbiology, Berkmen and his collaborators show that a disulphide reductase, TrxP, from an anaerobic bacterium can substitute for DsbC in Escherichia coli. They propose that the reduction-oxidation mechanism of disulphide rearrangement can indeed operate in vivo. An implication of this work is that correcting errors in disulphide bonds can be coupled to cellular metabolism and is conceptually similar to the proofreading processes observed with numerous synthesis and maturation reactions of biological macromolecules.

  4. A note on a fatal error of optimized LFC private information retrieval scheme and its corrected results

    DEFF Research Database (Denmark)

    Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane

    2010-01-01

    A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log² n) PIR (LFCPIR) to O(log n). However, in this paper we point out a fatal overflow error contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as that of the previous LFCPIR.

  5. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering railway track equipment and maintenance costs, a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that, for sufficiently large ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE estimator brings a considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
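
    A compact sketch of LMMSE estimation of spatially correlated differential corrections under the Gauss-Markov (exponential) correlation model mentioned above. The station geometry, variances and correlation distance are invented for illustration.

```python
import numpy as np

# LMMSE estimate of a zero-mean correlated DC field from noisy measurements:
# x_hat = C_x (C_x + C_n)^{-1} y, with an exponential (Gauss-Markov) prior covariance.
# All numbers below are illustrative assumptions.

rng = np.random.default_rng(3)
positions_km = np.array([0.0, 15.0, 40.0, 70.0, 120.0])    # reference-station abscissae
d_corr_km = 50.0                                            # assumed correlation distance
sigma_dc, sigma_noise = 1.0, 0.4                            # metres

dist = np.abs(positions_km[:, None] - positions_km[None, :])
C_x = sigma_dc ** 2 * np.exp(-dist / d_corr_km)             # prior covariance of true DCs
C_n = sigma_noise ** 2 * np.eye(len(positions_km))          # measurement-noise covariance

true_dc = rng.multivariate_normal(np.zeros(len(positions_km)), C_x)
y = true_dc + rng.normal(0, sigma_noise, true_dc.shape)     # noisy DC measurements

x_hat = C_x @ np.linalg.solve(C_x + C_n, y)                 # LMMSE estimate
print("rms error, raw measurements:", round(np.sqrt(np.mean((y - true_dc) ** 2)), 3))
print("rms error, LMMSE estimate:  ", round(np.sqrt(np.mean((x_hat - true_dc) ** 2)), 3))
```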

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    successful consumer products of all time - the Compact Disc (CD) digital audio .... We can make ... only 2t additional parity check symbols are required to be able to correct t .... display information (containing music related data and a table.

  7. Understanding the dynamics of correct and error responses in free recall: evidence from externalized free recall.

    Science.gov (United States)

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2010-06-01

    The dynamics of correct and error responses in a variant of delayed free recall were examined in the present study. In the externalized free recall paradigm, participants were presented with lists of words and were instructed to subsequently recall not only the words that they could remember from the most recently presented list, but also any other words that came to mind during the recall period. Externalized free recall is useful for elucidating both sampling and postretrieval editing processes, thereby yielding more accurate estimates of the total number of error responses, which are typically sampled and subsequently edited during free recall. The results indicated that the participants generally sampled correct items early in the recall period and then transitioned to sampling more erroneous responses. Furthermore, the participants generally terminated their search after sampling too many errors. An examination of editing processes suggested that the participants were quite good at identifying errors, but this varied systematically on the basis of a number of factors. The results from the present study are framed in terms of generate-edit models of free recall.

  8. Correction of electrode modelling errors in multi-frequency EIT imaging.

    Science.gov (United States)

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
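
    A conceptual sketch of the augmented-Jacobian idea described above: the conductivity Jacobian and the electrode-movement Jacobian are stacked side by side and a single regularized Gauss-Newton step recovers updates for both unknowns. The matrices below are random placeholders standing in for a real EIT forward model.

```python
import numpy as np

# Augmented-Jacobian update: solve one regularized least-squares step for the
# conductivity change and the electrode-position change jointly.
# Sizes, regularization and data are illustrative placeholders.

rng = np.random.default_rng(4)
n_meas, n_cond, n_elec = 208, 500, 64            # measurements, conductivity dofs, electrode dofs

J_sigma = rng.normal(size=(n_meas, n_cond))      # d(voltage)/d(conductivity)
J_elec = rng.normal(size=(n_meas, n_elec))       # d(voltage)/d(electrode position)
residual = rng.normal(size=n_meas)               # measured minus modelled voltages

J = np.hstack([J_sigma, J_elec])                 # augmented Jacobian
lam = 1e-2                                       # Tikhonov regularization parameter
update = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ residual)

d_sigma, d_elec = update[:n_cond], update[n_cond:]
print("conductivity update norm:      ", round(np.linalg.norm(d_sigma), 3))
print("electrode-position update norm:", round(np.linalg.norm(d_elec), 3))
```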

  9. Halogen Bonding from Dispersion-Corrected Density-Functional Theory: The Role of Delocalization Error.

    Science.gov (United States)

    Otero-de-la-Roza, A; Johnson, Erin R; DiLabio, Gino A

    2014-12-09

    Halogen bonds are formed when a Lewis base interacts with a halogen atom in a different molecule, which acts as an electron acceptor. Due to its charge transfer component, halogen bonding is difficult to model using many common density-functional approximations because they spuriously overstabilize halogen-bonded dimers. It has been suggested that dispersion-corrected density functionals are inadequate to describe halogen bonding. In this work, we show that the exchange-hole dipole moment (XDM) dispersion correction coupled with functionals that minimize delocalization error (for instance, BH&HLYP, but also other half-and-half functionals) accurately model halogen-bonded interactions, with average errors similar to other noncovalent dimers with less charge-transfer effects. The performance of XDM is evaluated for three previously proposed benchmarks (XB18 and XB51 by Kozuch and Martin, and the set proposed by Bauzá et al.) spanning a range of binding energies up to ∼50 kcal/mol. The good performance of BH&HLYP-XDM is comparable to M06-2X, and extends to the "extreme" cases in the Bauzá set. This set contains anionic electron donors where charge transfer occurs even at infinite separation, as well as other charge transfer dimers belonging to the pnictogen and chalcogen bonding classes. We also show that functional delocalization error results in an overly delocalized electron density and exact-exchange hole. We propose intermolecular Bader delocalization indices as an indicator of both the donor-acceptor character of an intermolecular interaction and the delocalization error coming from the underlying functional.

  10. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  11. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    Science.gov (United States)

    Basalamah, Anas; Sato, Takuro

    For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
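
    A toy illustration of the FEC principle the protocol above builds on: adding one XOR parity packet per block lets a multicast receiver rebuild a single lost packet without any retransmission. Real systems use stronger codes and, as in the proposed protocol, adapt the amount of redundancy to the observed loss rate; the packet contents below are arbitrary.

```python
from functools import reduce

# Single-parity block FEC: the parity packet is the XOR of all data packets in a
# block, so any one missing packet equals the XOR of everything that did arrive.

def xor_packets(packets):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

block = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]   # equal-length data packets
parity = xor_packets(block)

received = {0: block[0], 1: block[1], 3: block[3], "parity": parity}   # packet 2 was lost
recovered = xor_packets([received[0], received[1], received[3], received["parity"]])
assert recovered == block[2]
print("recovered lost packet:", recovered)
```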

  12. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin; Kalnis, Panos; Solovyev, Victor

    2015-01-01

    accurate than previous methods, both in terms of correcting individual-bases errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality

  13. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  14. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    Science.gov (United States)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  15. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlies many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural networks that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  16. Residual translational and rotational errors after kV X-ray image-guided correction of prostate location using implanted fiducials

    International Nuclear Information System (INIS)

    Wust, Peter; Graf, Reinhold; Boehmer, Dirk; Budach, Volker

    2010-01-01

    Purpose: To evaluate the residual errors and required safety margins after stereoscopic kilovoltage (kV) X-ray target localization of the prostate in image-guided radiotherapy (IGRT) using internal fiducials. Patients and Methods: Radiopaque fiducial markers (FMs) had been inserted into the prostate in a cohort of 33 patients. The ExacTrac/Novalis Body™ X-ray 6D image acquisition system (BrainLAB AG, Feldkirchen, Germany) was used. Corrections were performed in the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions. Rotational errors around LR (x-axis), AP (y-axis), and SI (z-axis) were recorded for the first series of nine patients; since 2007, for the subsequent 24 patients, they were in addition corrected at each fraction using the Robotic Tilt Module™ and Varian Exact Couch™. After positioning, a second set of X-ray images was acquired for verification purposes. Residual errors were registered and again corrected. Results: Standard deviations (SD) of residual translational random errors in the LR, AP, and SI coordinates were 1.3, 1.7, and 2.2 mm. Residual random rotation errors around the lateral (x, tilt), vertical (y, table), and longitudinal (z, roll) axes were 3.2°, 1.8°, and 1.5°, respectively. Planning target volume (PTV)-clinical target volume (CTV) margins were calculated in the LR, AP, and SI directions as 2.3, 3.0, and 3.7 mm. After a second repositioning, the margins could be reduced to 1.8, 2.1, and 1.8 mm. Conclusion: On the basis of the residual setup error measurements, the margin required after one to two online X-ray corrections for the patients enrolled in this study would be at minimum 2 mm. The contribution of intrafractional motion to residual random errors has to be evaluated. (orig.)

  17. Residual translational and rotational errors after kV X-ray image-guided correction of prostate location using implanted fiducials

    Energy Technology Data Exchange (ETDEWEB)

    Wust, Peter [Dept. of Radiation Oncology, Charite - Univ. Medicine Berlin, Campus Virchow-Klinikum, Berlin (Germany); Graf, Reinhold; Boehmer, Dirk; Budach, Volker

    2010-10-15

    Purpose: To evaluate the residual errors and required safety margins after stereoscopic kilovoltage (kV) X-ray target localization of the prostate in image-guided radiotherapy (IGRT) using internal fiducials. Patients and Methods: Radiopaque fiducial markers (FMs) had been inserted into the prostate in a cohort of 33 patients. The ExacTrac/Novalis Body™ X-ray 6D image acquisition system (BrainLAB AG, Feldkirchen, Germany) was used. Corrections were performed in the left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions. Rotational errors around LR (x-axis), AP (y-axis), and SI (z-axis) were recorded for the first series of nine patients; since 2007, for the subsequent 24 patients, they were in addition corrected at each fraction using the Robotic Tilt Module™ and Varian Exact Couch™. After positioning, a second set of X-ray images was acquired for verification purposes. Residual errors were registered and again corrected. Results: Standard deviations (SD) of residual translational random errors in the LR, AP, and SI coordinates were 1.3, 1.7, and 2.2 mm. Residual random rotation errors around the lateral (x, tilt), vertical (y, table), and longitudinal (z, roll) axes were 3.2°, 1.8°, and 1.5°, respectively. Planning target volume (PTV)-clinical target volume (CTV) margins were calculated in the LR, AP, and SI directions as 2.3, 3.0, and 3.7 mm. After a second repositioning, the margins could be reduced to 1.8, 2.1, and 1.8 mm. Conclusion: On the basis of the residual setup error measurements, the margin required after one to two online X-ray corrections for the patients enrolled in this study would be at minimum 2 mm. The contribution of intrafractional motion to residual random errors has to be evaluated. (orig.)

  18. Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.

    Science.gov (United States)

    Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias

    2014-08-01

    Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.

  19. Correction of phase-shifting error in wavelength scanning digital holographic microscopy

    Science.gov (United States)

    Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-05-01

    Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.

  20. Some errors in respirometry of aquatic breathers: How to avoid and correct for them

    DEFF Research Database (Denmark)

    STEFFENSEN, JF

    1989-01-01

    Respirometry in closed and flow-through systems is described, with the objective of pointing out the problems and sources of error involved and how to correct for them. Both closed respirometry applied to resting and active animals and intermittent-flow respirometry are described. In addition, flow

  1. Correcting for multivariate measurement error by regression calibration in meta-analyses of epidemiological studies.

    NARCIS (Netherlands)

    Kromhout, D.

    2009-01-01

    Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the

  2. Multiple Δt strategy for particle image velocimetry (PIV) error correction, applied to a hot propulsive jet

    Science.gov (United States)

    Nogueira, J.; Lecuona, A.; Nauri, S.; Legrand, M.; Rodríguez, P. A.

    2009-07-01

    PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ~0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes.

  3. Finding error handling bugs in OpenSSL using Coccinelle

    DEFF Research Database (Denmark)

    Lawall, Julia; Laurie, Ben; Hansen, René Rydhof

    2010-01-01

    OpenSSL is a library providing various functionalities relating to secure network communication.  Detecting and fixing bugs in OpenSSL code is thus essential, particularly when such bugs can lead to malicious attacks.  In previous work, we have proposed a methodology for finding API usage protocols in Linux kernel code using the program matching and transformation engine Coccinelle.  In this work, we report on our experience in applying this methodology to OpenSSL, focusing on API usage protocols related to error handling.  We have detected over 30 bugs in a recent OpenSSL snapshot, and in many cases it was possible to correct the bugs automatically.  Our patches correcting these bugs have been accepted by the OpenSSL developers.  This work furthermore confirms the applicability of our methodology to user-level code.

  4. Self-interaction error in density functional theory: a mean-field correction for molecules and large systems

    International Nuclear Information System (INIS)

    Ciofini, Ilaria; Adamo, Carlo; Chermette, Henry

    2005-01-01

    Corrections to the self-interaction error, which is rooted in all standard exchange-correlation functionals in density functional theory (DFT), have become the object of increasing interest. After an introduction recalling the origin of the self-interaction error in the DFT formalism, and a brief review of the self-interaction-free approximations, we present a simple, yet effective, self-consistent method to correct this error. The model is based on an average density self-interaction correction (ADSIC), where both the exchange and Coulomb contributions are screened by a fraction of the electron density. The ansatz on which the method is built makes it particularly appealing, due to its simplicity and its favorable scaling with the size of the system. We have tested the ADSIC approach on one of the classic pathological problems for density functional theory: the direct estimation of the ionization potential from orbital eigenvalues. A large set of different chemical systems, ranging from simple atoms to large fullerenes, has been considered as test cases. Our results show that the ADSIC approach provides good numerical values for all the molecular systems, the agreement with the experimental values increasing, due to its average ansatz, with the size (conjugation) of the systems

  5. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than measurement by single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown, and estimating it through replication is expensive, both in monetary cost and in the additional sample material required, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
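
    A simplified stand-in for the role of the validation data described above: the subsample with re-measured biomarkers is used to calibrate the error-prone measurements before running the LASSO. The authors' actual correction modifies the LASSO estimator itself; this sketch, with synthetic data and arbitrary tuning values, only illustrates where the validation information enters.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Calibrate each error-prone biomarker W_j against its re-measured value on the
# validation subset, predict the calibrated value for everyone, then run the LASSO.
# This is an illustrative regression-calibration-style stand-in, not the paper's estimator.

rng = np.random.default_rng(5)
n, p = 500, 20
X_true = rng.normal(size=(n, p))
W = X_true + rng.normal(0, 0.8, size=(n, p))             # error-prone multiplex measurements
y = 2.0 * X_true[:, 0] - 1.5 * X_true[:, 1] + rng.normal(0, 1, n)

val = rng.choice(n, size=100, replace=False)              # validation subset, re-measured
X_val = X_true[val] + rng.normal(0, 0.1, size=(100, p))   # "gold standard" re-measurement

X_cal = np.column_stack([
    LinearRegression().fit(W[val][:, [j]], X_val[:, j]).predict(W[:, [j]])
    for j in range(p)
])

naive = Lasso(alpha=0.05).fit(W, y)
corrected = Lasso(alpha=0.05).fit(X_cal, y)
print("naive coef[0:2]:    ", np.round(naive.coef_[:2], 2))
print("corrected coef[0:2]:", np.round(corrected.coef_[:2], 2))
```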

  6. Validation of corrections for errors in collimation during measurement of gastric emptying of nuclide-labeled meals

    Energy Technology Data Exchange (ETDEWEB)

    Van Deventer, G.; Thomson, J.; Graham, L.S.; Thomasson, D.; Meyer, J.H.

    1983-03-01

    The study was undertaken to validate phantom-derived corrections for errors in collimation due to septal penetration or scatter, which vary with the size of the gastric region of interest (ROI). Six volunteers received 495 ml of 20% glucose labeled with both In-113m DTPA and Tc-99m DTPA. Gastric emptying of each nuclide was monitored by gamma camera as well as by periodic removal and reinstillation of the meal through a gastric tube. Serial aspirates from the gastric tube confirmed parallel emptying of In-113m and Tc-99m, but analyses of gamma-camera data yielded parallel emptying only when adequate corrections were made for errors in collimation. Analyses of ratios of gastric counts from anterior to posterior, as well as analyses of peak-to-scatter ratios, revealed only small, insignificant anteroposterior movement of the tracers within the stomach during emptying. Accordingly, there was no significant improvement in the camera data when corrections were made for attenuation with intragastric depth.

  7. Iterative Phase Optimization of Elementary Quantum Error Correcting Codes (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-08-24

    to the seven-qubit Steane code [29] and also represents the smallest instance of a 2D topological color code [30]. Since the realized quantum error...

  8. Strictly local one-dimensional topological quantum error correction with symmetry-constrained cellular automata

    Directory of Open Access Journals (Sweden)

    Nicolai Lang, Hans Peter Büchler

    2018-01-01

    Full Text Available Active quantum error correction on topological codes is one of the most promising routes to long-term qubit storage. In view of future applications, the scalability of the used decoding algorithms in physical implementations is crucial. In this work, we focus on the one-dimensional Majorana chain and construct a strictly local decoder based on a self-dual cellular automaton. We study numerically and analytically its performance and exploit these results to contrive a scalable decoder with exponentially growing decoherence times in the presence of noise. Our results pave the way for scalable and modular designs of actively corrected one-dimensional topological quantum memories.

  9. Retrospective analysis of prostate cancer patients with implanted gold markers using off-line and adaptive therapy protocols

    International Nuclear Information System (INIS)

    Litzenberg, Dale W.; Balter, James M.; Lam, Kwok L.; Sandler, Howard M.; Ten Haken, Randall K.

    2005-01-01

    Purpose: To determine the efficacy of applying adaptive and off-line setup correction models to bony anatomy and gold fiducial markers implanted in the prostate, relative to daily alignment to skin tattoos and daily on-line corrections of the implanted gold markers. Methods and Materials: Ten prostate cancer patients with implanted gold fiducial markers were treated using a daily on-line setup correction protocol. The patients' positions were aligned to skin tattoos and two orthogonal diagnostic digital radiographs were obtained before treatment each day. These radiographs were compared with digitally reconstructed radiographs to obtain the translational setup errors of the bony anatomy and gold markers. The adaptive, no-action-level and shrinking-action-level off-line protocols were retrospectively applied to the bony anatomy to determine the change in the setup errors of the gold markers. The protocols were also applied to the gold markers directly to determine the residual setup errors. Results: The percentage of remaining fractions that the gold markers fell within the adaptive margins constructed with 1.5σ' (estimated random variation) after 5, 10, and 15 measurement fractions was 74%, 88%, and 93% for the prone patients and 55%, 77%, and 93% for the supine patients, respectively. Using 2σ', the percentage after 5, 10, and 15 measurements was 85%, 95%, and 97% for the prone patients and 68%, 87%, and 99% for the supine patients, respectively. The average initial three-dimensional (3D) setup error of the gold markers was 0.92 cm for the prone patients and 0.70 cm for the supine patients. Application of the no-action-level protocol to bony anatomy with N m = 3 days resulted in significant benefit to 4 of 10 patients, but 3 were significantly worse. The residual average 3D setup error of the gold markers was 1.14 cm and 0.51 cm for the prone and supine patients, respectively. When applied directly to the gold markers with N m = 3 days, 5 patients benefited and

  10. Comparison of error-based and errorless learning for people with severe traumatic brain injury: study protocol for a randomized control trial.

    Science.gov (United States)

    Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Shum, David H K; Griffin, Janelle; Schmidt, Julia; Lane-Brown, Amanda; Kendall, Melissa; Chevignard, Mathilde

    2013-11-05

    Poor skills generalization poses a major barrier to successful outcomes of rehabilitation after traumatic brain injury (TBI). Error-based learning (EBL) is a relatively new intervention approach that aims to promote skills generalization by teaching people internal self-regulation skills, or how to anticipate, monitor and correct their own errors. This paper describes the protocol of a study that aims to compare the efficacy of EBL and errorless learning (ELL) for improving error self-regulation, behavioral competency, awareness of deficits and long-term outcomes after TBI. This randomized, controlled trial (RCT) has two arms (EBL and ELL); each arm entails 8 × 2 h training sessions conducted within the participants' homes. The first four sessions involve a meal preparation activity, and the final four sessions incorporate a multitasking errand activity. Based on a sample size estimate, 135 participants with severe TBI will be randomized into either the EBL or ELL condition. The primary outcome measure assesses error self-regulation skills on a task related to but distinct from training. Secondary outcomes include measures of self-monitoring and self-regulation, behavioral competency, awareness of deficits, role participation and supportive care needs. Assessments will be conducted at pre-intervention, post-intervention, and at 6-months post-intervention. This study seeks to determine the efficacy and long-term impact of EBL for training internal self-regulation strategies following severe TBI. In doing so, the study will advance theoretical understanding of the role of errors in task learning and skills generalization. EBL has the potential to reduce the length and costs of rehabilitation and lifestyle support because the techniques could enhance generalization success and lifelong application of strategies after TBI. ACTRN12613000585729.

  11. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    Science.gov (United States)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.

  12. Multiple Δt strategy for particle image velocimetry (PIV) error correction, applied to a hot propulsive jet

    International Nuclear Information System (INIS)

    Nogueira, J; Lecuona, A; Nauri, S; Legrand, M; Rodríguez, P A

    2009-01-01

    PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ∼0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes

  13. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension

  14. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)

  15. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    Science.gov (United States)

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; the regression calibration method; and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation did, however, improve substantially with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.

  16. eNAL: An Extension of the NAL Setup Correction Protocol for Effective Use of Weekly Follow-up Measurements

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Heijmen, Ben J.M.

    2007-01-01

    Purpose: The no action level (NAL) protocol reduces systematic displacements relative to the planning CT scan by using the mean displacement of the first few treatment fractions as a setup correction in all subsequent fractions. This approach may become nonoptimal in case of time trends or transitions in the systematic displacement of a patient. Here, the extended NAL (eNAL) protocol is introduced to cope with this problem. Methods and Materials: The initial setup correction of eNAL is the same as in NAL. However, in eNAL, additional weekly follow-up measurements are performed. The setup correction is updated after each follow-up measurement based on linear regression of the available measured displacements to track and correct systematic time-dependent changes. We investigated the performance of eNAL with Monte Carlo simulations for populations without systematic displacement changes over time, with large gradual changes (time trends), and with large sudden changes (transitions). Weekly follow-up measurements were simulated for 35 treatment fractions. We compared the outcome of eNAL with NAL and an optimized shrinking action level (SAL) protocol with weekly measurements. Results: Without time-dependent changes, eNAL, SAL, and NAL performed comparably, but SAL required the largest imaging workload. For time trends and transitions, eNAL performed better than the other protocols and reduced systematic displacements to the same magnitude as in the case of no time-dependent changes (SD ∼1 mm). Conclusion: Extended NAL can reduce systematic displacements to a minor level irrespective of the precise nature of the systematic time-dependent changes that may occur in a population.
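
    The regression step at the heart of eNAL can be sketched in a few lines; this is an illustrative reading of the protocol rather than the authors' code, and the function and variable names are hypothetical:

      import numpy as np

      def enal_correction(measured_fractions, displacements_mm, current_fraction):
          # NAL-like start: with few measurements, use their mean as the correction.
          t = np.asarray(measured_fractions, dtype=float)
          d = np.asarray(displacements_mm, dtype=float)   # one axis, e.g. longitudinal
          if t.size < 2:
              return d.mean()
          # eNAL: refit a straight line after each weekly follow-up measurement
          # and apply its value at the upcoming fraction as the setup correction,
          # so both gradual time trends and sudden transitions are tracked.
          slope, intercept = np.polyfit(t, d, deg=1)
          return intercept + slope * current_fraction

      # Three daily measurements plus a weekly follow-up revealing a trend.
      print(enal_correction([1, 2, 3, 8], [1.0, 1.4, 1.9, 3.8], current_fraction=10))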

  17. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
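
    A toy illustration of the idea (not the paper's construction): treat a 7-bit source block as an error pattern of the [7,4] Hamming code and keep only its 3-bit syndrome; decompression returns the minimum-weight block in the coset, which recovers the source block exactly when that block is the coset leader, as is likely for a sparse binary source:

      import numpy as np
      from itertools import product

      # Parity-check matrix of the [7,4] Hamming code.
      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]], dtype=int)

      def compress(block):                      # 7 source bits -> 3 syndrome bits
          return H.dot(block) % 2

      def decompress(syndrome):                 # minimum-weight block with that syndrome
          best = None
          for bits in product([0, 1], repeat=7):
              b = np.array(bits)
              if np.array_equal(H.dot(b) % 2, syndrome):
                  if best is None or b.sum() < best.sum():
                      best = b
          return best

      s = np.array([0, 0, 0, 0, 1, 0, 0])       # sparse source block
      print(compress(s), decompress(compress(s)))   # syndrome, recovered block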

  18. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  19. SU-F-I-03: Correction of Intra-Fractional Set-Up Errors and Target Coverage Based On Cone-Beam Computed Tomography for Cervical Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, JY [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China); Hong, DL [The First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong (China)

    2016-06-15

    Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using lasers with the skin markers. Three-dimensional CBCT projections were acquired by the Varian TrueBeam treatment system. Set-up errors were evaluated by radiation oncologists after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTV was analyzed. Results: A total of 152 CBCT scans were acquired from the twenty cervical cancer patients; the mean set-up errors in the longitudinal, vertical, and lateral directions were 3.57, 2.74 and 2.5 mm, respectively, without CBCT corrections. After corrections, these decreased to 1.83, 1.44 and 0.97 mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verification to measure patient setup errors could be applied to improve treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.

  20. The dynamics of entry, exit and profitability: an error correction approach for the retail industry

    NARCIS (Netherlands)

    M.A. Carree (Martin); A.R. Thurik (Roy)

    1994-01-01

    We develop a two-equation error correction model to investigate determinants of and dynamic interaction between changes in profits and the number of firms in retailing. An explicit distinction is made between the effects of actual competition among incumbents, competition from new firms, and …

  1. An improved machine learning protocol for the identification of correct Sequest search results

    Directory of Open Access Journals (Sweden)

    Lu Hui

    2010-12-01

    Full Text Available Background: Mass spectrometry has become a standard method by which the proteomic profile of cell or tissue samples is characterized. To fully take advantage of tandem mass spectrometry (MS/MS) techniques in large scale protein characterization studies, robust and consistent data analysis procedures are crucial. In this work we present a machine learning based protocol for the identification of correct peptide-spectrum matches from Sequest database search results, improving on previously published protocols. Results: The developed model improves on published machine learning classification procedures by 6% as measured by the area under the ROC curve. Further, we show how the developed model can be presented as an interpretable tree of additive rules, thereby effectively removing the 'black-box' notion often associated with machine learning classifiers, allowing for comparison with expert rules-of-thumb. Finally, a method for extending the developed peptide identification protocol to give probabilistic estimates of the presence of a given protein is proposed and tested. Conclusions: We demonstrate the construction of a high accuracy classification model for Sequest search results from MS/MS spectra obtained using MALDI ionization. The developed model performs well in identifying correct peptide-spectrum matches and is easily extendable to the protein identification problem. The relative ease with which additional experimental parameters can be incorporated into the classification framework, to give additional discriminatory power, allows for future tailoring of the model to take advantage of information from specific instrument set-ups.

  2. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. To address this measurement inaccuracy, which is a significant problem in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes as quickly as possible. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
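
    For independent Gaussian distance estimates, the maximum-likelihood fusion step mentioned above reduces to an inverse-variance weighted mean; the sketch below illustrates that standard result rather than the paper's full NEC algorithm, and all names and numbers are illustrative:

      import numpy as np

      def ml_fuse(distances_m, variances_m2):
          # Maximum-likelihood estimate for independent Gaussian measurements:
          # weight each neighboring sensor's reading by the inverse of its variance.
          d = np.asarray(distances_m, dtype=float)
          w = 1.0 / np.asarray(variances_m2, dtype=float)
          fused = np.sum(w * d) / np.sum(w)
          return fused, 1.0 / np.sum(w)          # fused distance and its variance

      print(ml_fuse([1.02, 0.99, 1.05], [2e-3, 1e-3, 4e-3]))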

  3. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    Czech Academy of Sciences Publication Activity Database

    Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.

    2013-01-01

    Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords: bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188

  5. Error and corrections with scintigraphic measurement of gastric emptying of solid foods

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, J.H.; Van Deventer, G.; Graham, L.S.; Thomson, J.; Thomasson, D.

    1983-03-01

    Previous methods for correction of depth used geometric means of simultaneously obtained anterior and posterior counts. The present study compares this method with a new one that uses computations of depth based on peak-to-scatter (P:S) ratios. Six normal volunteers were fed a meal of beef stew, water, and chicken liver that had been labeled in vivo with both In-113m and Tc-99m. Gastric emptying was followed at short intervals with anterior counts of peak and scattered radiation for each nuclide, as well as posteriorly collected peak counts from the gastric ROI. Depth of the nuclides was estimated by the P:S method as well as the older method. Both gave similar results. Errors from septal penetration or scatter proved to be a significantly larger problem than errors from changes in depth.

  6. Atlas-based analysis of cardiac shape and function: correction of regional shape bias due to imaging protocol for population studies.

    Science.gov (United States)

    Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A

    2013-09-13

    Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.

  7. Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.

    Science.gov (United States)

    Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold

    2002-02-01

    The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction, a mechanism that implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor system shows nonlinear behavior. Copyright 2001 Elsevier Science (USA).

  8. The Export Supply Model of Bangladesh: An Application of Cointegration and Vector Error Correction Approaches

    Directory of Open Access Journals (Sweden)

    Mahmudul Mannan Toy

    2011-01-01

    Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and vector error correction are applied to estimate the export supply model. The econometric analysis is done using time series data on the variables of interest, collected from various secondary sources. The study empirically tests the hypotheses of a long-run relationship and causality between the variables of the model. The cointegration analysis shows that all the variables of the study are co-integrated at their first differences, meaning that there exists a long-run relationship among the variables. The VECM estimation shows the dynamics of variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will be turned into equilibrium in the long run.
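
    A minimal sketch of the same workflow (Johansen cointegration test followed by VECM estimation) using statsmodels on simulated annual data; the series names and numbers are illustrative and do not reproduce the Bangladeshi export data used in the study:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

      rng = np.random.default_rng(0)
      n = 36                                         # e.g. 36 annual observations
      trend = np.cumsum(rng.normal(size=n))          # shared stochastic trend
      data = pd.DataFrame({
          "exports": trend + rng.normal(scale=0.3, size=n),
          "price":   0.5 * trend + rng.normal(scale=0.3, size=n),
          "income":  0.8 * trend + rng.normal(scale=0.3, size=n),
      })

      # Johansen test: trace statistics for the number of cointegrating relations.
      print("trace statistics:", coint_johansen(data, det_order=0, k_ar_diff=1).lr1)

      # VECM with one cointegrating relation; res.alpha holds the error-correction
      # (adjustment) coefficients, expected to be negative for adjusting variables.
      res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
      print("adjustment coefficients:", res.alpha.ravel())
      print("cointegrating vector:", res.beta.ravel())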

  9. FPGA-based Bit-Error-Rate Tester for SEU-hardened Optical Links

    CERN Document Server

    Detraz, S; Moreira, P; Papadopoulos, S; Papakonstantinou, I; Seif El Nasr, S; Sigaud, C; Soos, C; Stejskal, P; Troska, J; Versmissen, H

    2009-01-01

    The next generation of optical links for future High-Energy Physics experiments will require components qualified for use in radiation-hard environments. To cope with radiation induced single-event upsets, the physical layer protocol will include Forward Error Correction (FEC). Bit-Error-Rate (BER) testing is a widely used method to characterize digital transmission systems. In order to measure the BER with and without the proposed FEC, simultaneously on several devices, a multi-channel BER tester has been developed. This paper describes the architecture of the tester, its implementation in a Xilinx Virtex-5 FPGA device and discusses the experimental results.

  10. Two-step single slope/SAR ADC with error correction for CMOS image sensor.

    Science.gov (United States)

    Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin

    2014-01-01

    Conventional two-step ADCs for CMOS image sensors require full resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply and the chip area efficiency is 84 k μm²·cycles/sample.

  11. Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor

    Directory of Open Access Journals (Sweden)

    Fang Tang

    2014-01-01

    Full Text Available Conventional two-step ADCs for CMOS image sensors require full resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply and the chip area efficiency is 84 k μm²·cycles/sample.

  12. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal

    2013-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d = 2, then w = Θ(n (lg n / lg lg n)^2); 2) if d = 3, then w…

  13. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal

    2012-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)^2). (2) If d…

  14. Goldmann tonometry tear film error and partial correction with a shaped applanation surface.

    Science.gov (United States)

    McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M

    2018-01-01

    The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg). Tear film adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produced more tear film error than artificial tears (+0.51±0.04 mmHg). In the cadaver eyes, the CATS prism tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Measured GAT tear film adhesion error is larger than previously predicted. The CATS prism significantly reduced tear film adhesion error, by about 41%. Fluorescein solution increases tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.

  15. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  16. An Experimental Study of Medical Error Explanations: Do Apology, Empathy, Corrective Action, and Compensation Alter Intentions and Attitudes?

    Science.gov (United States)

    Nazione, Samantha; Pace, Kristin

    2015-01-01

    Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.

  17. Ab initio thermochemistry using optimal-balance models with isodesmic corrections: The ATOMIC protocol

    Science.gov (United States)

    Bakowies, Dirk

    2009-04-01

    corrections to the simple task of adding up bond increments. Preliminary validation with experimental enthalpies of formation using the subset of neutral closed-shell (HCNOF) species contained in the G3/99 test set indicates that the ATOMIC protocol performs slightly better than the popular G3 approach. The newly introduced protocol does not require empirical calibration, however, and it is still efficient enough to be applied routinely to molecules with 10 or 20 nonhydrogen atoms.

  18. Fast high resolution ADC based on the flash type with a special error correcting technique

    Energy Technology Data Exchange (ETDEWEB)

    Xiao-Zhong, Liang; Jing-Xi, Cao [Beijing Univ. (China). Inst. of Atomic Energy

    1984-03-01

    A fast 12-bit ADC based on the flash type, with a simple special error-correcting technique which can effectively compensate for the level drift of the discriminators and the droop of the stretcher voltage, is described. The DNL is comparable with that of a Wilkinson ADC and the long-term drift is far better.

  19. 5 CFR 839.622 - Can I cancel my FERS election if my qualifying retirement coverage error was previously corrected...

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Can I cancel my FERS election if my qualifying retirement coverage error was previously corrected and I now have an election opportunity under... ERRONEOUS RETIREMENT COVERAGE CORRECTIONS ACT Making an Election Fers Elections § 839.622 Can I cancel my...

  20. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  1. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal to noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve SNR. Second, the Cramer-Rao lower bound on range accuracy is derived to justify that longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it is shown that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to the previously published theories, and the Gaussian range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been proven that depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to the depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  2. Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases

    NARCIS (Netherlands)

    Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.

    2009-01-01

    In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.

  3. Development and characterisation of FPGA modems using forward error correction for FSOC

    Science.gov (United States)

    Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried

    2016-05-01

    In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in an FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8-rate low-density parity-check (LDPC) forward error correction code is implemented along with custom bit and frame synchronisation and a variable-length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acousto-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied, allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5 km.

  4. Identifying and Correcting Timing Errors at Seismic Stations in and around Iran

    International Nuclear Information System (INIS)

    Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee

    2017-01-01

    A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.

  5. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    Science.gov (United States)

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…

  6. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Science.gov (United States)

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
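
    Error-correcting output codes are available off the shelf; the sketch below uses scikit-learn's OutputCodeClassifier on toy text snippets to illustrate the technique only (it is not the authors' feature pipeline, and the snippets and labels are invented):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.pipeline import make_pipeline

      docs = ["patient quit smoking ten years ago", "smokes one pack per day",
              "never smoked", "smoking status not discussed",
              "former smoker, quit in 2001", "denies tobacco use",
              "current smoker, counseled to quit", "no mention of tobacco"]
      labels = ["past", "current", "never", "unknown",
                "past", "never", "current", "unknown"]

      # Each class is assigned a binary codeword; one binary classifier is trained
      # per codeword bit, and prediction picks the class with the closest codeword,
      # so a mistake by a single classifier can be "corrected" by the remaining bits.
      model = make_pipeline(
          TfidfVectorizer(),
          OutputCodeClassifier(LogisticRegression(max_iter=1000),
                               code_size=4, random_state=0),
      )
      model.fit(docs, labels)
      print(model.predict(["ex-smoker, stopped smoking in 1999"]))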

  7. What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?

    Science.gov (United States)

    Liebovitch, Larry

    1998-03-01

    evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.

  8. Error Correcting Codes …

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training…

  9. Evaluating the Performance Diagnostic Checklist-Human Services to Assess Incorrect Error-Correction Procedures by Preschool Paraprofessionals

    Science.gov (United States)

    Bowe, Melissa; Sellers, Tyra P.

    2018-01-01

    The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…

  10. Error Correction of Meteorological Data Obtained with Mini-AWSs Based on Machine Learning

    Directory of Open Access Journals (Sweden)

    Ji-Hun Ha

    2018-01-01

    Full Text Available Severe weather events occur more frequently due to climate change; therefore, accurate weather forecasts are necessary, in addition to the development of numerical weather prediction (NWP) over the past several decades. A method to improve the accuracy of weather forecasts based on NWP is the collection of more meteorological data by reducing the observation interval. However, in many areas, it is economically and locally difficult to collect observation data by installing automatic weather stations (AWSs). We developed a Mini-AWS, much smaller than an AWS, to complement the shortcomings of AWSs. The installation and maintenance costs of Mini-AWSs are lower than those of AWSs, and Mini-AWSs have fewer spatial constraints with respect to installation than AWSs. However, it is necessary to correct the data collected with Mini-AWSs because they might be affected by the external environment depending on the installation area. In this paper, we propose a novel error correction of atmospheric pressure data observed with a Mini-AWS based on machine learning. Using the proposed method, we obtained corrected atmospheric pressure data that reached the standard of the World Meteorological Organization (WMO; ±0.1 hPa) and confirmed the potential of corrected atmospheric pressure data as an auxiliary resource for AWSs.
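
    A schematic version of such a correction, with a synthetic sensor relationship standing in for the real Mini-AWS/AWS pairing; the features, error model and target threshold are assumptions for illustration only:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 2000
      temp_c = rng.uniform(-5, 35, n)                       # ambient temperature
      raw_hpa = 1013.0 + rng.normal(0, 5, n)                # Mini-AWS raw pressure
      ref_hpa = raw_hpa - 0.05 * (temp_c - 20) + rng.normal(0, 0.03, n)  # nearby AWS

      X = np.column_stack([raw_hpa, temp_c])
      X_tr, X_te, y_tr, y_te = train_test_split(X, ref_hpa, random_state=0)

      # Learn the mapping from the cheap sensor's reading (plus covariates)
      # to the reference value, then apply it as the correction.
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      corrected = model.predict(X_te)
      print("MAE after correction (hPa):", mean_absolute_error(y_te, corrected))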

  11. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
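
    The proposed extrapolation is a one-line regression; the sketch below shows it with made-up readings (the true CPD is the intercept at 1/V_ac = 0, i.e. the limit of large driving amplitude):

      import numpy as np

      v_ac = np.array([0.5, 1.0, 2.0, 4.0])      # ac driving amplitudes (V), illustrative
      cpd  = np.array([0.42, 0.35, 0.31, 0.29])  # measured CPD (V) at each amplitude

      # Regress measured CPD against 1/V_ac; the intercept estimates the true
      # contact potential difference, free of the tracking-error artifact.
      slope, intercept = np.polyfit(1.0 / v_ac, cpd, deg=1)
      print("estimated true CPD (V):", intercept)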

  12. Prevalence and risk factors of undercorrected refractive errors among Singaporean Malay adults: the Singapore Malay Eye Study.

    Science.gov (United States)

    Rosman, Mohamad; Wong, Tien Y; Tay, Wan-Ting; Tong, Louis; Saw, Seang-Mei

    2009-08-01

    To describe the prevalence and the risk factors of undercorrected refractive error in an adult urban Malay population. This population-based, cross-sectional study was conducted in Singapore in 3280 Malay adults, aged 40 to 80 years. All individuals were examined at a centralized clinic and underwent standardized interviews and assessment of refractive errors and presenting and best corrected visual acuities. Distance presenting visual acuity was monocularly measured by using a logarithm of the minimum angle of resolution (logMAR) number chart at a distance of 4 m, with the participants wearing their "walk-in" optical corrections (spectacles or contact lenses), if any. Refraction was determined by subjective refraction by trained, certified study optometrists. Best corrected visual acuity was monocularly assessed and recorded in logMAR scores using the same test protocol as was used for presenting visual acuity. Undercorrected refractive error was defined as an improvement of at least 0.2 logMAR (2 lines equivalent) in the best corrected visual acuity compared with the presenting visual acuity in the better eye. The mean age of the subjects included in our study was 58 ± 11 years, and 52% of the subjects were women. The prevalence rate of undercorrected refractive error among Singaporean Malay adults in our study (n = 3115) was 20.4% (age-standardized prevalence rate, 18.3%). More of the women had undercorrected refractive error than the men (21.8% vs. 18.8%, P = 0.04). Undercorrected refractive error was also more common in subjects older than 50 years than in subjects aged 40 to 49 years (22.6% vs. 14.3%). The prevalence of undercorrected refractive error among Singaporean Malay adults with refractive errors was higher than that among Singaporean Chinese adults with refractive errors. Undercorrected refractive error is a significant cause of correctable visual impairment among Singaporean Malay adults, affecting one in five persons.

  13. Review of a fluid resuscitation protocol: "fluid creep" is not due to nursing error.

    Science.gov (United States)

    Faraklas, Iris; Cochran, Amalia; Saffle, Jeffrey

    2012-01-01

    Recent reviews of burn resuscitation have included the suggestion that "fluid creep" may be influenced by practitioner error. Our center uses a nursing-driven resuscitation protocol that permits titration of fluid based on hourly urine output, including the addition of colloid when patients fail to respond appropriately. The purpose of this study was to examine protocol compliance. We reviewed 140 patients (26 children) with burns of ≥20% TBSA who received protocol-directed resuscitation from 2005 to 2010. We compared each patient's actual hourly fluid infusion with that predicted by the protocol. Sixty-seven patients (48%) completed resuscitation using crystalloid alone, whereas 73 patients required colloid supplementation. Groups did not differ in age, gender, weight, or time from injury to admission. Patients requiring colloid had larger median total burns (33.0 vs 23.5% TBSA) and full-thickness burns (15.5 vs 4.5% TBSA) and more inhalation injuries (60.3 vs 28.4%). Patients had median predicted requirements of 5.4 ml/kg/%TBSA. Crystalloid-only patients required fluid volumes close to Parkland predictions (4.7 ml/kg/%TBSA), whereas patients who received colloid required more fluid than the predicted volume (7.5 ml/kg/%TBSA). However, the hourly difference between the predicted and received fluids was a median of only 1.0% (interquartile range: -6.1 to 11.1%) and did not differ between groups. Pediatric patients had greater calculated differences than adults. Crystalloid patients exhibited higher urine outputs than colloid patients until colloid was started, suggesting that early over-resuscitation did not contribute to fluid creep. Adherence to our protocol for burn shock resuscitation was excellent overall. Fluid creep exhibited by more seriously injured patients was not due to nurses' failure to follow the protocol. This review has illuminated some opportunities for practice improvement, possibly using a computerized decision support system.

  14. Correcting Measurement Error in Satellite Aerosol Optical Depth with Machine Learning for Modeling PM2.5 in the Northeastern USA

    Directory of Open Access Journals (Sweden)

    Allan C. Just

    2018-05-01

    Full Text Available Satellite-derived estimates of aerosol optical depth (AOD) are key predictors in particulate air pollution models. The multi-step retrieval algorithms that estimate AOD also produce quality control variables, but these have not been systematically used to address the measurement error in AOD. We compare three machine-learning methods: random forests, gradient boosting, and extreme gradient boosting (XGBoost) to characterize and correct measurement error in the Multi-Angle Implementation of Atmospheric Correction (MAIAC) 1 × 1 km AOD product for the Aqua and Terra satellites across the Northeastern/Mid-Atlantic USA versus collocated measures from 79 ground-based AERONET stations over 14 years. Models included 52 quality control, land use, meteorology, and spatially-derived features. Variable importance measures suggest relative azimuth, AOD uncertainty, and the AOD difference in 30–210 km moving windows are among the most important features for predicting measurement error. XGBoost outperformed the other machine-learning approaches, decreasing the root mean squared error in withheld testing data by 43% and 44% for Aqua and Terra. After correction using XGBoost, the correlation of collocated AOD and daily PM2.5 monitors across the region increased by 10 and 9 percentage points for Aqua and Terra. We demonstrate how machine learning with quality control and spatial features substantially improves satellite-derived AOD products for air pollution modeling.
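
    A compact sketch of the correction strategy (predict the AOD error from quality-control features, then subtract it) using xgboost on simulated collocations; the features, error structure and data are invented for illustration and do not reproduce the MAIAC/AERONET analysis:

      import numpy as np
      from xgboost import XGBRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n = 5000
      sat_aod  = rng.gamma(2.0, 0.1, n)                # satellite AOD retrieval
      rel_azim = rng.uniform(0, 180, n)                # quality-control features
      aod_unc  = rng.uniform(0.0, 0.1, n)
      error = 0.02 * (rel_azim - 90) / 90 + 0.5 * aod_unc + rng.normal(0, 0.01, n)
      ground_aod = sat_aod - error                     # AERONET-like "truth"

      X = np.column_stack([sat_aod, rel_azim, aod_unc])
      y = ground_aod - sat_aod                         # measurement error to learn
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
      model.fit(X_tr, y_tr)
      pred_err = model.predict(X_te)                   # subtracting this corrects the AOD
      print("RMSE before correction:", np.sqrt(np.mean(y_te ** 2)))
      print("RMSE after correction: ", np.sqrt(np.mean((y_te - pred_err) ** 2)))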

  15. Bias Correction and Random Error Characterization for the Assimilation of HRDI Line-of-Sight Wind Measurements

    Science.gov (United States)

    Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (gamma and B), and the retrieved products are the tangent point line-of-sight wind component (level 2 retrieval) and UV winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.

  16. Underlying Information Technology Tailored Quantum Error Correction

    Science.gov (United States)

    2006-07-28

    … typically constructed by using an optical beam splitter. • We used a decoherence-free-subspace encoding to reduce the sensitivity of an optical Deutsch… process tomography on one- and two-photon polarisation states, from full and partial data. • Accomplished complete two-photon QPT. • Discovered surprising… protocol giving a quadratic speedup over all previously known such protocols. • Developed the first completely positive non-Markovian master equation.

  17. ANALYSIS OF THE EFFECT OF INTEREST RATES, NATIONAL INCOME AND INFLATION ON THE NOMINAL EXCHANGE RATE: A COINTEGRATION AND ERROR CORRECTION MODEL (ECM) APPROACH

    Directory of Open Access Journals (Sweden)

    Roosaleh Laksono T.Y.

    2016-04-01

    Full Text Available Abstract. This study aims to analyze the effect of the interest rate, inflation, and national income on the rupiah exchange rate against the dollar, covering both the long-run equilibrium relationship and the short-run balance, using empirical secondary data from 1980-2015 (36 years). The research method used is OLS multiple linear regression, applied within a cointegration and error correction model (ECM) framework after passing several other stages of statistical testing. The results of the cointegration (Johansen) test indicate that all the independent variables (inflation, national income, and interest rate) and the dependent variable (exchange rate) have a long-term equilibrium relationship, as evidenced by a trace statistic of 102.1727, much greater than the 5% critical value of 47.85613. In addition, the maximum eigenvalue statistic of 36.7908 is greater than the 5% critical value of 27.584434. The error correction model (ECM) test shows that only the inflation, interest rate and residual variables are significant, while national income is not. This means that the inflation and interest rate variables have a short-run relationship with the exchange rate, as seen from the probability value of each variable being below 0.05 (5%); moreover, the residual coefficient in the ECM result is -0.732447, which shows that the error correction term is 73.24% and significant. Keywords: Interest rate; National income; Inflation; Exchange rate; Cointegration; Error Correction Model.

  18. Effect of ancilla's structure on quantum error correction using the seven-qubit Calderbank-Shor-Steane code

    International Nuclear Information System (INIS)

    Salas, P.J.; Sanz, A.L.

    2004-01-01

    In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10^-4 ≤ ε ≤ 10^-2 for memory errors and 3×10^-5 ≤ γ/7 ≤ 10^-2 for gate errors. After the correction we calculate the fidelity as a quality criterion for the qubit recovered. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.

  19. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    Energy Technology Data Exchange (ETDEWEB)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M. [University Medical Center Hamburg-Eppendorf (Germany). Dept. of Nuclear Medicine; Santer, R. [University Medical Center Hamburg-Eppendorf (Germany). Center for Gynaecology, Obstetrics and Paediatrics; Silverman, D.H.S. [David Geffen School of Medicine at UCLA, Los Angeles, CA (United States). Dept. of Molecular and Medical Pharmacology

    2009-07-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is {<=} 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect step back to (i). Compute insulin dose with the updated blood glucose level. (iii) Wait further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. A plausibility check of the prediction of the computer simulations on the magnitude of the effect that might be achieved by correction of hyperglycaemia was performed by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)

  20. Computer simulations suggest that acute correction of hyperglycaemia with an insulin bolus protocol might be useful in brain FDG PET

    International Nuclear Information System (INIS)

    Buchert, R.; Brenner, W.; Apostolova, I.; Mester, J.; Clausen, M.; Santer, R.; Silverman, D.H.S.

    2009-01-01

    FDG PET in hyperglycaemic subjects often suffers from limited statistical image quality, which may hamper visual and quantitative evaluation. In our study the following insulin bolus protocol is proposed for acute correction of hyperglycaemia (> 7.0 mmol/l) in brain FDG PET. (i) Intravenous bolus injection of short-acting insulin, one I.E. for each 0.6 mmol/l blood glucose above 7.0. (ii) If 20 min after insulin administration plasma glucose is ≤ 7.0 mmol/l, proceed to (iii). If insulin has not taken sufficient effect step back to (i). Compute insulin dose with the updated blood glucose level. (iii) Wait further 20 min before injection of FDG. (iv) Continuous supervision of the patient during the whole scanning procedure. The potential of this protocol for improvement of image quality in brain FDG PET in hyperglycaemic subjects was evaluated by computer simulations within the Sokoloff model. A plausibility check of the prediction of the computer simulations on the magnitude of the effect that might be achieved by correction of hyperglycaemia was performed by retrospective evaluation of the relation between blood glucose level and brain FDG uptake in 89 subjects in whom FDG PET had been performed for diagnosis of Alzheimer's disease. The computer simulations suggested that acute correction of hyperglycaemia according to the proposed bolus insulin protocol might increase the FDG uptake of the brain by up to 80%. The magnitude of this effect was confirmed by the patient data. The proposed management protocol for acute correction of hyperglycaemia with insulin has the potential to significantly improve the statistical quality of brain FDG PET images. This should be confirmed in a prospective study in patients. (orig.)
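
    The dose arithmetic in step (i) of the protocol above is simple enough to state as code; this is only an illustration of the stated rule (one I.E. per 0.6 mmol/l above 7.0 mmol/l), with rounding to whole units added as an assumption, and it is not clinical guidance:

      def insulin_bolus_units(glucose_mmol_per_l, threshold=7.0, step=0.6):
          # One I.E. of short-acting insulin per 0.6 mmol/l of blood glucose
          # above the 7.0 mmol/l threshold; no bolus if already normoglycaemic.
          excess = glucose_mmol_per_l - threshold
          if excess <= 0:
              return 0
          return round(excess / step)        # assumption: round to whole units

      print(insulin_bolus_units(10.0))       # 3.0 mmol/l excess -> 5 I.E.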

  1. POSSIBILITIES TO CORRECT ACCOUNTING ERRORS IN THE CONTEXT OF COMPLYING WITH THE OPENING BALANCE SHEET INTANGIBILITY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    PALIU – POPA LUCIA

    2017-12-01

    Full Text Available There are still different views at the global level on the intangibility of the opening balance sheet in the process of convergence and accounting harmonization, there being a clear difference between the Anglo-Saxon accounting system and that of Western European continental influence, in the sense that the former is less rigid with regard to the application of the principle of intangibility, whereas the continental system applies the provisions of this principle in their entirety. Looking from this perspective and taking into account the major importance of the financial statements, which are intended to provide information for all categories of users, i.e. both for managers and for users external to the entity whose position does not allow them to request specific reports, we considered it useful to conduct a study aimed at correcting errors in the context of compliance with the opening balance sheet intangibility principle versus the need to adjust the comparative information on the financial position, financial performance and change in the financial position generated by the correction of errors from previous years. In this regard, we will perform a comparative analysis of the application of the intangibility principle both in the two major accounting systems and at the international level, and we will approach issues related to the correction of errors in terms of the main differences between the provisions of the continental accounting regulations (represented by the European and national ones in our approach), the Anglo-Saxon regulations and those of the international referential on opening balance sheet intangibility.

  2. A Technique for Real-Time Ionospheric Ranging Error Correction Based On Radar Dual-Frequency Detection

    Science.gov (United States)

    Lyu, Jiang-Tao; Zhou, Chen

    2017-12-01

    Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, like ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction-accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
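
    The abstract gives no formulas, but dual-frequency ionospheric correction is conventionally built on the first-order group-delay relation dR = 40.3 * TEC / f^2. A minimal sketch under that assumption is shown below; the frequencies and ranges are purely illustrative, and the paper's actual processing may differ.

```python
# First-order dual-frequency ionospheric range correction. The relation
# dR = 40.3 * TEC / f**2 (TEC in electrons/m^2, f in Hz, dR in metres) is a
# standard textbook approximation assumed here; the paper's actual processing
# may differ. Frequencies and ranges below are purely illustrative.
K = 40.3  # first-order ionospheric constant, m^3 s^-2 per electron

def tec_from_dual_frequency(r1, r2, f1, f2):
    """Slant TEC along the path from ranges r1, r2 measured at frequencies f1 < f2."""
    return (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))

def corrected_range(r1, r2, f1, f2):
    """Ionosphere-free range estimate from the two measurements."""
    tec = tec_from_dual_frequency(r1, r2, f1, f2)
    return r1 - K * tec / f1**2

# Hypothetical example: a P-band pair at 430 and 450 MHz.
print(corrected_range(r1=1_000_150.0, r2=1_000_137.0, f1=430e6, f2=450e6))
```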

  3. Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes

    International Nuclear Information System (INIS)

    Huber, Felix Michael

    2017-01-01

    At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain legitimacy conditions of a set of states to be the reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.

  4. Portal imaging to assess set-up errors, tumor motion and tumor shrinkage during conformal radiotherapy of non-small cell lung cancer

    International Nuclear Information System (INIS)

    Erridge, Sara C.; Seppenwoolde, Yvette; Muller, Sara H.; Herk, Marcel van; Jaeger, Katrien de; Belderbos, Jose S.A.; Boersma, Liesbeth J.; Lebesque, Joos V.

    2003-01-01

    Purpose: To investigate patient set-up, tumor movement and shrinkage during 3D conformal radiotherapy for non-small cell lung cancer. Materials and methods: In 97 patients, electronic portal images (EPIs) were acquired and corrected for set-up using an off-line correction protocol based on a shrinking action level. For 25 selected patients, the orthogonal EPIs (taken at random points in the breathing cycle) throughout the 6-7 week course of treatment were assessed to establish the tumor position in each image using both an overlay and a delineation technique. The range of movement in each direction was calculated. The position of the tumor in the digitally reconstructed radiograph (DRR) was compared to the average position of the lesion in the EPIs. In addition, tumor shrinkage was assessed. Results: The mean overall set-up errors after correction were 0, 0.6 and 0.2 mm in the x (left-right), y (cranial-caudal) and z (anterior-posterior) directions, respectively. After correction, the standard deviations (SDs) of systematic errors were 1.4, 1.5 and 1.3 mm and the SDs of random errors were 2.9, 3.1 and 2.0 mm in the x-, y- and z-directions, respectively. Without correction, 41% of patients had a set-up error of more than 5 mm vector length, but with the set-up correction protocol this percentage was reduced to 1%. The mean amplitude of tumor motion was 7.3 (SD 2.7), 12.5 (SD 7.3) and 9.4 mm (SD 5.2) in the x-, y- and z-directions, respectively. Tumor motion was greatest in the y-direction and in particular for lower lobe tumors. In 40% of the patients, the projected area of the tumor regressed by more than 20% during treatment in at least one projection. In 16 patients it was possible to define the position of the center of the tumor in the DRR. There was a mean difference of 6 mm vector length between the tumor position in the DRR and the average position in the portal images. Conclusions: The application of the correction protocol resulted in a significant
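
    For readers unfamiliar with the systematic/random decomposition quoted in the results, the sketch below shows the conventional way the SDs of systematic (Sigma) and random (sigma) set-up errors are estimated from per-fraction displacements. The 2.5*Sigma + 0.7*sigma margin at the end is the widely used van Herk recipe and is an assumption of this sketch, not something stated in the abstract; the data are hypothetical.

```python
# Conventional decomposition of per-fraction set-up displacements (one list per
# patient, one direction) into a group mean, a systematic SD (Sigma) and a
# random SD (sigma). The 2.5*Sigma + 0.7*sigma margin is the widely used
# van Herk recipe, assumed here for illustration; it is not stated in the
# abstract. All displacement values below are hypothetical.
import numpy as np

def setup_error_statistics(displacements_per_patient):
    patient_means = np.array([np.mean(d) for d in displacements_per_patient])
    group_mean = patient_means.mean()                    # overall systematic error
    Sigma = patient_means.std(ddof=1)                    # SD of systematic errors
    within = np.concatenate([np.asarray(d) - np.mean(d)
                             for d in displacements_per_patient])
    sigma = within.std(ddof=1)                           # SD of random errors
    return group_mean, Sigma, sigma

def ctv_to_ptv_margin(Sigma, sigma):
    return 2.5 * Sigma + 0.7 * sigma                     # van Herk recipe (assumed)

data = [[1.0, 2.0, 0.5, 1.5], [-2.0, -1.0, -2.5], [0.0, 0.5, -0.5, 1.0]]  # mm
mean, Sigma, sigma = setup_error_statistics(data)
print(mean, Sigma, sigma, ctv_to_ptv_margin(Sigma, sigma))
```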

  5. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.

  6. Toward Synthesis, Analysis, and Certification of Security Protocols

    Science.gov (United States)

    Schumann, Johann

    2004-01-01

    Implemented security protocols are basically pieces of software which are used to (a) authenticate the other communication partners, (b) establish a secure communication channel between them (using insecure communication media), and (c) transfer data between the communication partners in such a way that these data are available only to the desired receiver, but not to anyone else. Such an implementation usually consists of the following components: the protocol engine, which controls the sequence in which the messages of the protocol are sent over the network and which controls the assembly/disassembly and processing (e.g., decryption) of the data; the cryptographic routines to actually encrypt or decrypt the data (using given keys); and the interface to the operating system and to the application. For a security protocol to work correctly, all of these components must work flawlessly. Many formal-methods-based techniques for the analysis of security protocols have been developed. They range from using specific logics (e.g., BAN logic [4] or higher-order logics [12]) to model-checking [2] approaches. In each approach, the analysis tries to prove that no one (or at least no modeled intruder) can get access to secret data. Otherwise, a scenario illustrating the attack may be produced. Despite the seeming simplicity of security protocols ("only" a few messages are sent between the protocol partners in order to ensure a secure communication), many flaws have been detected. Unfortunately, even a perfect protocol engine does not guarantee flawless working of a security protocol, as incidents show. Many break-ins and security vulnerabilities are caused by exploiting errors in the implementation of the protocol engine or the underlying operating system. Attacks using buffer overflows are a very common class of such attacks. Errors in the implementation of exception or error handling can open up additional vulnerabilities. For example, on a website with a log-in screen

  7. Compositional mining of multiple object API protocols through state abstraction.

    Science.gov (United States)

    Dai, Ziying; Mao, Xiaoguang; Lei, Yan; Qi, Yuhua; Wang, Rui; Gu, Bin

    2013-01-01

    API protocols specify correct sequences of method invocations. Despite their usefulness, API protocols are often unavailable in practice because writing them is cumbersome and error prone. Multiple object API protocols are more expressive than single object API protocols. However, the huge number of objects of typical object-oriented programs poses a major challenge to the automatic mining of multiple object API protocols: besides maintaining scalability, it is important to capture various object interactions. Current approaches utilize various heuristics to focus on small sets of methods. In this paper, we present a general, scalable, multiple object API protocols mining approach that can capture all object interactions. Our approach uses abstract field values to label object states during the mining process. We first mine single object typestates as finite state automata whose transitions are annotated with states of interacting objects before and after the execution of the corresponding method and then construct multiple object API protocols by composing these annotated single object typestates. We implement our approach for Java and evaluate it through a series of experiments.

  8. A fingerprint key binding algorithm based on vector quantization and error correction

    Science.gov (United States)

    Li, Liang; Wang, Qian; Lv, Ke; He, Ning

    2012-04-01

    In recent years, research on seamlessly combining cryptosystems with biometric technologies, e.g. fingerprint recognition, has been conducted by many researchers. In this paper, we propose an algorithm for binding a fingerprint template and a cryptographic key, so that the key is protected and can be accessed by fingerprint verification. In order to accommodate the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound with the key after a process of fingerprint registration and extraction of the global ridge pattern of the fingerprint. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.

  9. Estimating oil product demand in Indonesia using a cointegrating error correction model

    International Nuclear Information System (INIS)

    Dahl, C.

    2001-01-01

    Indonesia's long oil production history and large population mean that Indonesian oil reserves, per capita, are the lowest in OPEC and that, eventually, Indonesia will become a net oil importer. Policy-makers want to forestall this day, since oil revenue comprised around a quarter of both the government budget and foreign exchange revenues for the fiscal years 1997/98. To help policy-makers determine how economic growth and oil-pricing policy affect the consumption of oil products, we estimate the demand for six oil products and total petroleum consumption, using an error correction-cointegration approach, and compare it with estimates on a lagged endogenous model using data for 1970-95. (author)

  10. Tripartite entanglement in qudit stabilizer states and application in quantum error correction

    Energy Technology Data Exchange (ETDEWEB)

    Looi, Shiang Yong; Griffiths, Robert B. [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2011-11-15

    Consider a stabilizer state on n qudits, each of dimension D with D being a prime or squarefree integer, divided into three mutually disjoint sets or parts. Generalizing a result of Bravyi et al. [J. Math. Phys. 47, 062106 (2006)] for qubits (D=2), we show that up to local unitaries, the three parts of the state can be written as a tensor product of unentangled single-qudit states, maximally entangled Einstein-Podolsky-Rosen (EPR) pairs, and tripartite Greenberger-Horne-Zeilinger (GHZ) states. We employ this result to obtain a complete characterization of the properties of a class of channels associated with stabilizer error-correcting codes, along with their complementary channels.

  11. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials that absorb visible light. A conventional 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, the normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, each absorbing certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  12. The importance of matched poloidal spectra to error field correction in DIII-D

    Energy Technology Data Exchange (ETDEWEB)

    Paz-Soldan, C., E-mail: paz-soldan@fusion.gat.com; Lanctot, M. J.; Buttery, R. J.; La Haye, R. J.; Strait, E. J. [General Atomics, P.O. Box 85608, San Diego, California 92121 (United States); Logan, N. C.; Park, J.-K.; Solomon, W. M. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Shiraki, D.; Hanson, J. M. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)

    2014-07-15

    Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable “dominant” mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios, the plasma is found to be, respectively, 7× and 20× less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.

  13. Goldmann tonometry tear film error and partial correction with a shaped applanation surface

    Directory of Open Access Journals (Sweden)

    McCafferty SJ

    2018-01-01

    Full Text Available Sean J McCafferty,1–4 Eniko T Enikov,5 Jim Schwiegerling,2,3 Sean M Ashley1,3 1Intuor Technologies, 2Department of Ophthalmology, University of Arizona College of Medicine, 3University of Arizona College of Optical Science, 4Arizona Eye Consultants, 5Department of Mechanical and Aerospace, University of Arizona College of Engineering, Tucson, AZ, USA Purpose: The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. Methods: The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. Results: The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg, p<0.001). Tear film adhesion error was independent of applanation mire thickness (R2=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001). Cadaver eye validation indicated the CATS prism’s tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Conclusion: Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to

  14. Evaluation of Retro recon for SRS planning correction according to the error of recognize to coordinate

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Hyeon Seok; Jeong, Deok Yang; Do, Gyeong Min; Lee, Yeong Cheol; Kim, Sun Myung; Kim, Young Bun [Dept. of Radiation Oncology, Korea University Guro Hospital, Seoul (Korea, Republic of)

    2016-12-15

    The purpose of this study was to evaluate Retro recon in SRS planning using BrainLAB when a stereotactic location error occurs due to metal artifact. Images were acquired with a CT simulator from a head phantom (CIRS, PTW, USA). To observe stereotactic location recognition and beam hardening, the CT images were evaluated in the SRS planning system (BrainLAB, Feldkirchen, Germany). In addition, we compared the acquisition image (1.25 mm slice thickness) and the Retro recon images (using 2.5 mm and 5 mm slice thickness). To evaluate the quality of these three images, tests were performed with an AAPM phantom study. In a patient, the stereotactic location error was verified. No location recognition errors occurred in the scanned phantom images. The AAPM phantom scan images all showed the same trend. Contrast resolution and spatial resolution were under 6.4 mm and 1.0 mm, respectively. Noise and uniformity were measured at under 11 HU and 5 HU, respectively. In the patient, no stereotactic location error occurred in the reconstructed image. For BrainLAB planning, using Retro recon corrected the stereotactic error caused by beam hardening. Retro recon may be the preferred modality for radiation treatment planning with acceptable image quality.

  15. A correction for emittance-measurement errors caused by finite slit and collector widths

    International Nuclear Information System (INIS)

    Connolly, R.C.

    1992-01-01

    One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs

  16. Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression

    CERN Document Server

    Slinko, Arkadii

    2015-01-01

    This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data which must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with knowledge of basics in algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves that is sufficient to understand such real-life applications as cryptography, secret sharing, error-correcting, fingerprinting and compression of information. It is the first to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.

  17. Performance Errors in Weight Training and Their Correction.

    Science.gov (United States)

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  18. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2018-04-15

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
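
    The SIMEX idea itself (simulate additional measurement error at several multiples of the known error variance, refit, and extrapolate back to the error-free case) is easy to illustrate. The sketch below applies the generic algorithm to a plain linear slope with NumPy only; the paper's extension to Cox-model hazard ratios is not reproduced, and all data are simulated.

```python
# Generic SIMEX sketch on a linear slope with NumPy only. The paper applies
# SIMEX to Cox-model hazard ratios; that extension is not reproduced here.
# All data are simulated.
import numpy as np

rng = np.random.default_rng(0)

def naive_slope(x, y):
    return np.polyfit(x, y, 1)[0]

def simex_slope(x_obs, y, error_var, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200):
    """Add noise with variance lambda*error_var, refit, average, then
    extrapolate the averaged slope back to lambda = -1 (no measurement error)."""
    lam_grid = [0.0] + list(lambdas)
    mean_slopes = [naive_slope(x_obs, y)]
    for lam in lambdas:
        sims = [naive_slope(x_obs + rng.normal(0.0, np.sqrt(lam * error_var),
                                                x_obs.size), y)
                for _ in range(n_sim)]
        mean_slopes.append(np.mean(sims))
    coeffs = np.polyfit(lam_grid, mean_slopes, 2)   # quadratic extrapolant
    return np.polyval(coeffs, -1.0)

# Simulated example: true slope 2.0, classical measurement error with variance 1.
n = 2000
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)
x_obs = x_true + rng.normal(scale=1.0, size=n)
print("naive:", naive_slope(x_obs, y))                 # attenuated toward 0
print("SIMEX:", simex_slope(x_obs, y, error_var=1.0))  # partially de-attenuated
```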

  19. Effects of nonlinear error correction of measurements obtained by peak flowmeter using the Wright scale to assess asthma attack severity in children

    Directory of Open Access Journals (Sweden)

    Stamatović Dragana

    2007-01-01

    Full Text Available Introduction: Monitoring of peak expiratory flow (PEF) is recommended in numerous guidelines for the management of asthma. Improvements in calibration methods have demonstrated the inaccuracy of the original Wright scale of the peak flowmeter. A new standard, EN 13826, that applies to peak flowmeters was adopted on 1st September 2004 by some European countries. Correction of PEF readings obtained with old-type devices is possible by Dr M. Miller's original predictive equation. Objective: Assessment of the effect of PEF correction on the interpretation of measurement results and management decisions. Method: In children with intermittent (35) or stable persistent asthma (75), aged 6-16 years, 8393 measurements of PEF were performed with a Vitalograph normal-range peak flowmeter with the traditional Wright scale. Readings were expressed as a percentage of individual best values (PB) before and after correction. The effect of correction was analyzed based on The British Thoracic Society guidelines for asthma attack treatment. Results: In general, correction reduced the values of PEF (p<0.01). The highest mean percentage error (20.70%) in the measured values was found in the subgroup in which PB ranged between 250 and 350 l/min. Nevertheless, the interpretation of PEF after correction in this subgroup changed in only 2.41% of measurements. The lowest mean percentage error (15.72%) and, at the same time, the highest effect of correction on the interpretation of measurement results (in 22.65% of readings) were found in children with PB above 450 l/min. In 73 (66.37%) subjects, the correction changed the clinical interpretation of some values of PEF. In 13 (11.8%) patients, some corrected values indicated the absence or a milder degree of airflow obstruction. In 27 (24.54%) children, more than 10%, and in 12 (10.93%), more than 20% of the corrected readings indicated a severe degree of asthma exacerbation that needed more aggressive treatment. Conclusion

  20. Author Correction

    DEFF Research Database (Denmark)

    Grundle, D S; Löscher, C R; Krahmann, G

    2018-01-01

    A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.

  1. Rapid Measurement and Correction of Phase Errors from B0 Eddy Currents: Impact on Image Quality for Non-Cartesian Imaging

    Science.gov (United States)

    Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.

    2014-01-01

    Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short-time B0 eddy currents in manufacturer-provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532
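
    Once the phase error has been measured, the correction step itself amounts to complex demodulation of the acquired samples. The sketch below shows only that generic step; the reference measurement and per-scanner calibration described in the paper are not reproduced, and the signals are synthetic.

```python
# Generic demodulation of a known per-sample phase error from one complex
# k-space readout. The per-sample phase estimate would come from a reference
# acquisition; the measurement procedure and per-scanner calibration described
# in the paper are not reproduced here, and the signals are synthetic.
import numpy as np

def correct_phase_error(readout, phase_error_rad):
    """Remove a known phase error (radians, one value per sample) by complex demodulation."""
    return readout * np.exp(-1j * phase_error_rad)

n = 256
t = np.linspace(0.0, 1.0, n)
true_signal = np.exp(2j * np.pi * 8 * t)            # stand-in for one readout
phase_error = 0.4 * np.sin(2 * np.pi * 0.5 * t)     # slowly varying B0-induced phase
corrupted = true_signal * np.exp(1j * phase_error)
print(np.allclose(correct_phase_error(corrupted, phase_error), true_signal))  # True
```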

  2. Applying volumetric weather radar data for rainfall runoff modeling: The importance of error correction.

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.

    2009-04-01

    In the current study, half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is analyzed, sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half-year most of the rainfall has a stratiform character. Though radar and rain gauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations the differences between the two measuring devices become even larger due to the occurrence of a bright band (the layer where melting ice particles intensify the radar reflectivity measurement). Under these circumstances the radar overestimates the amount of precipitation, and because in the Ardennes bright bands occur within 1000 m of the surface, their detrimental effect on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and rain gauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km2 Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar

  3. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
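
    In the simplest univariable case, the reliability coefficient described above leads to the classical regression-dilution correction. The sketch below forms the reliability from instrument-reported uncertainty variances and de-attenuates a naive slope; it is only a toy version of the errors-in-variables models used in the paper, and all numbers are simulated.

```python
# Simplest univariable attenuation (regression-dilution) correction. The
# reliability coefficient is formed from the instrument-reported uncertainty
# variance, as the abstract suggests; the full errors-in-variables models used
# in the paper are more general. All numbers below are simulated.
import numpy as np

def reliability(x_measured, reported_uncertainty_sd):
    """lambda = 1 - mean(measurement-error variance) / var(measured exposure)."""
    error_var = np.mean(np.asarray(reported_uncertainty_sd) ** 2)
    return max(0.0, 1.0 - error_var / np.var(x_measured, ddof=1))

def corrected_slope(x_measured, y, reported_uncertainty_sd):
    naive = np.polyfit(x_measured, y, 1)[0]
    return naive / reliability(x_measured, reported_uncertainty_sd)  # de-attenuated

rng = np.random.default_rng(1)
true_lead = rng.normal(20.0, 8.0, 500)                 # hypothetical bone lead, ug/g
uncert_sd = np.full(500, 6.0)                          # instrument-reported SD
measured = true_lead + rng.normal(0.0, uncert_sd)
outcome = 0.3 * true_lead + rng.normal(0.0, 2.0, 500)  # true effect 0.3 per ug/g
print(np.polyfit(measured, outcome, 1)[0])             # attenuated (about 0.19)
print(corrected_slope(measured, outcome, uncert_sd))   # close to 0.3
```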

  4. Publisher Correction

    DEFF Research Database (Denmark)

    Turcot, Valérie; Lu, Yingchang; Highland, Heather M

    2018-01-01

    In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.

  5. Bound on quantum computation time: Quantum error correction in a critical environment

    International Nuclear Information System (INIS)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2010-01-01

    We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.

  6. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    OpenAIRE

    Cohen, Aaron M.

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...
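
    As a generic illustration of the error-correcting output codes component only (not the authors' i2b2 pipeline, which also used hot-spot identification, zero-vector filtering and post-processing rules), a minimal scikit-learn sketch with invented example texts might look like this:

```python
# Minimal error-correcting output codes (ECOC) sketch with scikit-learn.
# This is not the authors' i2b2 pipeline; the texts and labels below are
# purely illustrative stand-ins for discharge-summary snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "patient quit smoking five years ago",
    "denies any history of tobacco use",
    "currently smokes one pack per day",
    "smoking status not mentioned in summary",
    "former smoker, stopped after surgery",
]
labels = ["past", "never", "current", "unknown", "past"]

# ECOC: each class gets a binary code word; one binary classifier is trained
# per code bit, and prediction picks the class whose code word is closest to
# the vector of bit-level predictions.
model = make_pipeline(
    TfidfVectorizer(),
    OutputCodeClassifier(LogisticRegression(max_iter=1000), code_size=2.0, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["he is still smoking daily"]))
```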

  7. Nonlinear effect of the structured light profilometry in the phase-shifting method and error correction

    International Nuclear Information System (INIS)

    Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun

    2014-01-01

    Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  8. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  9. Bayesian Estimation and Selection of Nonlinear Vector Error Correction Models: The Case of the Sugar-Ethanol-Oil Nexus in Brazil

    OpenAIRE

    Kelvin Balcombe; George Rapsomanikis

    2008-01-01

    Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Monte Carlo Markov Chain algorithms and compared using Bayesian model selection methods. The results suggest ...

  10. First order error corrections in common introductory physics experiments

    Science.gov (United States)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Comparison of orthogonal kilovolt X-ray images and cone-beam CT matching results in setup error assessment and correction for EB-PBI during free breathing

    International Nuclear Information System (INIS)

    Wang Wei; Li Jianbin; Hu Hongguang; Ma Zhifang; Xu Min; Fan Tingyong; Shao Qian; Ding Yun

    2014-01-01

    Objective: To compare the differences in setup error (SE) assessment and correction between orthogonal kilovolt X-ray images and CBCT in EB-PBI patients during free breathing. Methods: Nineteen patients who received EB-PBI after breast-conserving surgery were recruited. Interfraction SE was acquired using orthogonal kilovolt X-ray setup images and CBCT; after on-line setup correction, the residual error was calculated, and the SE, residual error and setup margin (SM) quantified for orthogonal kilovolt X-ray images and CBCT were compared. The Wilcoxon signed-rank test was used to evaluate the differences. Results: The CBCT-based SE (systematic error, ∑) was smaller than the orthogonal kilovolt X-ray image-based ∑ in the AP direction (-1.2 mm vs 2.00 mm; P=0.005), and there were no statistically significant differences in the random error (σ) in the three directions (P=0.948, 0.376, 0.314). After on-line setup correction, CBCT decreased the setup residual error compared with the orthogonal kilovolt X-ray images in the AP direction (Σ: -0.20 mm vs 0.50 mm, P=0.008; σ: 0.45 mm vs 1.34 mm, P=0.002). The CBCT-based SM was also smaller than the orthogonal kilovolt X-ray image-based SM in the AP direction (Σ: -1.39 mm vs 5.57 mm, P=0.003; σ: 0.00 mm vs 3.2 mm, P=0.003). Conclusions: Compared with kilovolt X-ray images, CBCT underestimated the setup error in the AP direction, but significantly decreased the setup residual error. Image-guided radiotherapy and setup error assessment using kilovolt X-ray images is feasible for EB-PBI plans. (authors)

  13. Fast, efficient error reconciliation for quantum cryptography

    International Nuclear Information System (INIS)

    Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.; Nickel, G.H.; Donahue, C.H.; Peterson, C.G.

    2003-01-01

    We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation
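
    The core parity-plus-syndrome step of a Winnow-style exchange can be sketched in a few lines, as below. The fragment locates and flips a single discrepant bit in one block; the bit discarding needed to preserve secrecy and the multi-pass, multi-block-size schedule of the actual protocol are deliberately omitted, so this is an illustration rather than an implementation of Winnow.

```python
# Core of one Winnow-style reconciliation step: compare block parities, then for
# a mismatching block exchange a Hamming-style syndrome that locates a single
# discrepant bit. The bit discarding needed to preserve secrecy and the
# multi-pass schedule of the full protocol are omitted from this sketch.
from functools import reduce

def syndrome(block):
    """XOR of the 1-based positions of set bits; for blocks differing in exactly
    one position, syndrome(a) ^ syndrome(b) equals that position."""
    return reduce(lambda acc, pos: acc ^ pos,
                  (i + 1 for i, bit in enumerate(block) if bit), 0)

def parity(block):
    return sum(block) % 2

def reconcile_block(alice_block, bob_block):
    """Return Bob's block after one parity-plus-syndrome exchange."""
    bob_block = list(bob_block)
    if parity(alice_block) == parity(bob_block):
        return bob_block                    # even number of errors: left for later passes
    pos = syndrome(alice_block) ^ syndrome(bob_block)
    if 1 <= pos <= len(bob_block):
        bob_block[pos - 1] ^= 1             # flip the located bit
    return bob_block

alice = [1, 0, 1, 1, 0, 0, 1]
bob   = [1, 0, 1, 0, 0, 0, 1]               # single error at position 4
print(reconcile_block(alice, bob) == alice)  # True
```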

  14. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    Science.gov (United States)

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
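
    As an illustration of the control limits used in such an SPC scheme, the sketch below computes X-bar/R chart limits for subgroups of three set-up error measurements, matching the subgroup size quoted above. The Shewhart constants A2 = 1.023 and D4 = 2.574 are the standard tabulated values for subgroup size 3; the data and decision rule are hypothetical and not taken from the paper.

```python
# X-bar / R control limits for subgroups of three set-up error measurements,
# matching the subgroup size in the abstract. A2 = 1.023 and D4 = 2.574 are the
# standard tabulated Shewhart constants for n = 3; the data are hypothetical.
import numpy as np

A2, D4 = 1.023, 2.574

def xbar_r_limits(subgroups):
    subgroups = np.asarray(subgroups, dtype=float)
    xbar = subgroups.mean(axis=1)                         # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)     # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "xbar_center": xbarbar,
        "xbar_limits": (xbarbar - A2 * rbar, xbarbar + A2 * rbar),
        "range_center": rbar,
        "range_upper_limit": D4 * rbar,
    }

# Hypothetical set-up errors (mm) in one direction, three patients per subgroup.
subgroups = [[1.2, -0.5, 0.8], [0.3, 1.1, -0.2], [2.0, 0.6, -0.4], [0.1, -1.0, 0.9]]
print(xbar_r_limits(subgroups))
# A new subgroup mean outside xbar_limits (or a range above range_upper_limit)
# signals an assignable cause: interrupt treatment and investigate the set-up.
```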

  15. Decreasing scoring errors on Wechsler Scale Vocabulary, Comprehension, and Similarities subtests: a preliminary study.

    Science.gov (United States)

    Linger, Michele L; Ray, Glen E; Zachar, Peter; Underhill, Andrea T; LoBello, Steven G

    2007-10-01

    Studies of graduate students learning to administer the Wechsler scales have generally shown that training is not associated with the development of scoring proficiency. Many studies report on the reduction of aggregated administration and scoring errors, a strategy that does not highlight the reduction of errors on subtests identified as most prone to error. This study evaluated the development of scoring proficiency specifically on the Wechsler (WISC-IV and WAIS-III) Vocabulary, Comprehension, and Similarities subtests during training by comparing a set of 'early test administrations' to 'later test administrations.' Twelve graduate students enrolled in an intelligence-testing course participated in the study. Scoring errors (e.g., incorrect point assignment) were evaluated on the students' actual practice administration test protocols. Errors on all three subtests declined significantly when scoring errors on 'early' sets of Wechsler scales were compared to those made on 'later' sets. However, correcting these subtest scoring errors did not cause significant changes in subtest scaled scores. Implications for clinical instruction and future research are discussed.

  16. Energy efficiency of error correcting mechanisms for wireless communications

    NARCIS (Netherlands)

    Havinga, Paul J.M.

    We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal

  17. Transfer Error and Correction Approach in Mobile Network

    Science.gov (United States)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, human demand for information has become increasingly diverse; people want to be able to communicate easily, quickly and flexibly, wherever and whenever, via voice, data, images, video and other means. Because visual information gives people a direct and vivid impression, image/video transmission has also received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication the main business of wireless communications, the actual wireless and IP channels introduce errors, such as errors generated by multipath fading in the wireless channel and packet loss in IP networks. Due to channel bandwidth limitations, video data must be heavily compressed before transmission, and the compressed data are very sensitive to channel errors, so error conditions cause a serious decline in image quality.

  18. Performance Analysis of an Optical CDMA MAC Protocol With Variable-Size Sliding Window

    Science.gov (United States)

    Mohamed, Mohamed Aly A.; Shalaby, Hossam M. H.; Abdel-Moety El-Badawy, El-Sayed

    2006-10-01

    A media access control protocol for optical code-division multiple-access packet networks with variable length data traffic is proposed. This protocol exhibits a sliding window with variable size. A model for interference-level fluctuation and an accurate analysis for channel usage are presented. Both multiple-access interference (MAI) and photodetector's shot noise are considered. Both chip-level and correlation receivers are adopted. The system performance is evaluated using a traditional average system throughput and average delay. Finally, in order to enhance the overall performance, error control codes (ECCs) are applied. The results indicate that the performance can be enhanced to reach its peak using the ECC with an optimum number of correctable errors. Furthermore, chip-level receivers are shown to give much higher performance than that of correlation receivers. Also, it has been shown that MAI is the main source of signal degradation.

  19. MODEL PERMINTAAN UANG DI INDONESIA DENGAN PENDEKATAN VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    imam mukhlis

    2016-09-01

    Full Text Available This research aims to estimate the demand for money model in Indonesia for 2005.2-2015.12. The variables used in this research are: demand for money, interest rate, inflation, and exchange rate (IDR/US$). The stationarity (ADF) test was used to test for a unit root in the data. A cointegration test was applied to estimate the long-run relationship between variables. This research employed the Vector Error Correction Model (VECM) to estimate the money demand model in Indonesia. The results showed that all the data were stationary at the difference level (1%). There was a long-run relationship between the interest rate, inflation and the exchange rate and the demand for money in Indonesia. The VECM could not explain the interaction between the explanatory variables and the dependent variable. In the short run, there was no relationship between the interest rate, inflation and the exchange rate and the demand for money in Indonesia for 2005.2-2015.12.
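
    The three estimation steps listed above (ADF unit-root tests, cointegration testing, VECM estimation) map directly onto statsmodels. The sketch below runs them on synthetic cointegrated data; the column names are hypothetical stand-ins for the Indonesian monthly series, and the lag and rank choices are illustrative.

```python
# Sketch of the three steps in the abstract with statsmodels: ADF unit-root
# tests, a Johansen cointegration test, then a VECM. Column names and the
# synthetic data are hypothetical stand-ins for the 2005.2-2015.12 series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 130
trend = np.cumsum(rng.normal(size=n))          # shared stochastic trend
df = pd.DataFrame({
    "m_demand": trend + rng.normal(scale=0.3, size=n),
    "interest": -0.5 * trend + rng.normal(scale=0.3, size=n),
    "exchange_rate": 0.8 * trend + rng.normal(scale=0.3, size=n),
})

# 1) ADF unit-root tests on levels and first differences (p-values)
for col in df.columns:
    print(col, adfuller(df[col])[1], adfuller(df[col].diff().dropna())[1])

# 2) Johansen cointegration test (trace statistics vs 5% critical values)
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace:", jres.lr1, "5% cv:", jres.cvt[:, 1])

# 3) VECM with one cointegrating relation
res = VECM(df, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.summary())
```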

  20. Precursors, gauge invariance, and quantum error correction in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Freivogel, Ben; Jefferson, Robert A.; Kabir, Laurens [ITFA and GRAPPA, Universiteit van Amsterdam,Science Park 904, Amsterdam (Netherlands)

    2016-04-19

    A puzzling aspect of the AdS/CFT correspondence is that a single bulk operator can be mapped to multiple different boundary operators, or precursors. By improving upon a recent model of Mintun, Polchinski, and Rosenhaus, we demonstrate explicitly how this ambiguity arises in a simple model of the field theory. In particular, we show how gauge invariance in the boundary theory manifests as a freedom in the smearing function used in the bulk-boundary mapping, and explicitly show how this freedom can be used to localize the precursor in different spatial regions. We also show how the ambiguity can be understood in terms of quantum error correction, by appealing to the entanglement present in the CFT. The concordance of these two approaches suggests that gauge invariance and entanglement in the boundary field theory are intimately connected to the reconstruction of local operators in the dual spacetime.

  1. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    DEFF Research Database (Denmark)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon...

  2. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
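
    The detection step described above can be sketched compactly: smooth each metabolite trend nonparametrically, compute percent deviations of the raw data from the smooths, and flag time points where the median deviation across all metabolites exceeds a threshold (a shared effect such as dilution). The LOWESS span and threshold below are illustrative choices, not the authors' settings, and the data are simulated.

```python
# Flag time points where the median percent deviation across all metabolite
# trends exceeds a threshold, i.e. a systematic (e.g. dilution) error. The
# smoothing span and threshold are illustrative, not the authors' settings.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def flag_systematic_timepoints(times, concentrations, frac=0.5, threshold=5.0):
    """concentrations: 2-D array, one row per metabolite, one column per time
    point. Returns flagged time-point indices and the median percent deviation."""
    times = np.asarray(times, dtype=float)
    concentrations = np.asarray(concentrations, dtype=float)
    pct_dev = np.empty_like(concentrations)
    for i, y in enumerate(concentrations):
        smooth = lowess(y, times, frac=frac, return_sorted=False)
        pct_dev[i] = 100.0 * (y - smooth) / smooth
    median_dev = np.median(pct_dev, axis=0)          # across metabolites
    return np.where(np.abs(median_dev) > threshold)[0], median_dev

# Simulated data: three metabolite trends, with a dilution error at time index 5.
t = np.arange(12.0)
base = np.vstack([10 + 0.8 * t, 20 - 0.5 * t, 5 + 0.1 * t ** 2])
data = base * np.random.default_rng(2).normal(1.0, 0.01, base.shape)
data[:, 5] *= 0.85                                   # 15% dilution of one sample
flagged, dev = flag_systematic_timepoints(t, data)
print(flagged)                                       # expected to include index 5
```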

  3. Quantum protocols within Spekkens' toy model

    Science.gov (United States)

    Disilvestro, Leonardo; Markham, Damian

    2017-05-01

    Quantum mechanics is known to provide significant improvements in information processing tasks when compared to classical models. These advantages range from computational speedups to security improvements. A key question is where these advantages come from. The toy model developed by Spekkens [R. W. Spekkens, Phys. Rev. A 75, 032110 (2007), 10.1103/PhysRevA.75.032110] mimics many of the features of quantum mechanics, such as entanglement and no cloning, regarded as being important in this regard, despite being a local hidden variable theory. In this work, we study several protocols within Spekkens' toy model where we see it can also mimic the advantages and limitations shown in the quantum case. We first provide explicit proofs for the impossibility of toy bit commitment and the existence of a toy error correction protocol and consequent k -threshold secret sharing. Then, defining a toy computational model based on the quantum one-way computer, we prove the existence of blind and verified protocols. Importantly, these two last quantum protocols are known to achieve a better-than-classical security. Our results suggest that such quantum improvements need not arise from any Bell-type nonlocality or contextuality, but rather as a consequence of steering correlations.

  4. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    Science.gov (United States)

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  5. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Xingming Sun

    2015-07-01

    Full Text Available Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.

  6. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error-correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and then the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  7. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Full Text Available Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between acquisitions of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure based on the proposed model, applied to measurement data, increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.

  8. Error Correction of Radial Displacement in Grinding Machine Tool Spindle by Optimizing Shape and Bearing Tuning

    OpenAIRE

    Khairul Jauhari; Achmad Widodo; Ismoyo Haryanto

    2015-01-01

    In this article, the radial displacement error correction capability of a high precision spindle grinding caused by unbalance force was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearing. Finite element methods (FEM) have been adopted for obtaining the equation of motion of the spindle. In this paper, firstly, natural frequencies, critical frequencies, and amplitude of the unbalance response caused by resi...

  9. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    … This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements … A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction is less than …

  10. Determinants of Working Capital Credit Growth in Indonesian Banking: An Error Correction Model (ECM) Approach

    Directory of Open Access Journals (Sweden)

    Sasanti Widyawati

    2016-05-01

    Full Text Available Abstract Bank loans have an important role in financing the national economy and are a driving force of economic growth. Therefore, credit growth must be balanced. However, conditions show that commercial bank credit growth has slowed. Using the Error Correction Model (ECM) of Domowitz-El Badawi, the study analyzes the impact of short-term and long-term independent variables that determine credit growth in the Indonesian financial sector. The results show that, in the short term, only non-performing loans have a significant negative effect on working capital loan growth. For the long term, working capital loan interest rates have a significant negative effect, third-party fund growth has a significant positive effect, and inflation has a significant negative effect.
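
    As a rough illustration of the modelling approach, the sketch below shows a generic two-step error correction model in the Engle-Granger style rather than the Domowitz-El Badawi specification used in the study; the variable names and data are placeholders.

```python
# Generic two-step ECM sketch (not the Domowitz-El Badawi specification of the paper).
import numpy as np
import statsmodels.api as sm

def ecm_two_step(y, x):
    """y, x: 1-D arrays of levels, e.g. log working-capital credit and one determinant."""
    long_run = sm.OLS(y, sm.add_constant(x)).fit()        # long-run (cointegrating) relation
    ect = long_run.resid                                  # error correction term
    dy, dx = np.diff(y), np.diff(x)
    X = sm.add_constant(np.column_stack([dx, ect[:-1]]))  # short-run dynamics + lagged ECT
    short_run = sm.OLS(dy, X).fit()
    return long_run, short_run                            # last coefficient: adjustment speed
```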

  11. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, the results show that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.
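
    A hedged sketch of how such shares could be estimated from repeated probing data follows (not the paper's exact procedure): the random part is taken from the within-direction spread of pre-travel, the systematic part from the variation of the mean pre-travel across approach directions.

```python
# Hedged sketch: split probe error into random and systematic shares (assumed procedure).
import numpy as np

def error_shares(pretravel):
    """pretravel[d, r]: pre-travel of repeat r in approach direction d (micrometres)."""
    random_err = pretravel.std(axis=1, ddof=1).mean()      # within-direction spread
    systematic_err = np.ptp(pretravel.mean(axis=1))        # pre-travel variation over directions
    total = random_err + systematic_err
    return random_err / total, systematic_err / total

rng = np.random.default_rng(1)
demo = 2.0 + 1.5 * np.sin(np.linspace(0, 2 * np.pi, 12))[:, None] + rng.normal(0, 0.1, (12, 25))
print(error_shares(demo))   # kinematic-probe-like case: the systematic share dominates
```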

  12. On the roles of direct feedback and error field correction in stabilizing resistive-wall modes

    International Nuclear Information System (INIS)

    In, Y.; Bogatu, I.N.; Kim, J.S.; Garofalo, A.M.; Jackson, G.L.; La Haye, R.J.; Schaffer, M.J.; Strait, E.J.; Lanctot, M.J.; Reimerdes, H.; Marrelli, L.; Martin, P.; Okabayashi, M.

    2010-01-01

    Active feedback control in the DIII-D tokamak has fully stabilized the current-driven ideal kink resistive-wall mode (RWM). While complete stabilization is known to require both low frequency error field correction (EFC) and high frequency feedback, unambiguous identification has been made about the distinctive role of each in a fully feedback-stabilized discharge. Specifically, the role of direct RWM feedback, which nullifies the RWM perturbation in a time scale faster than the mode growth time, cannot be replaced by low frequency EFC, which minimizes the lack of axisymmetry of external magnetic fields. (letter)

  13. Attenuation correction in pulmonary and myocardial single photon emission computed tomography

    International Nuclear Information System (INIS)

    Almquist, H.

    2000-01-01

    The objective was to develop and validate methods for single photon emission computed tomography, SPECT, allowing quantitative physiologic and diagnostic studies of lung and heart. A method for correction of variable attenuation in SPECT, based on transmission measurements before administration of an isotope to the subject, was developed and evaluated. A protocol based upon geometrically well defined phantoms was developed. In a mosaic pattern phantom count rates were corrected from 39-43% to 101-110% of reference. In healthy subjects non-gravitational pulmonary perfusion gradients observed without attenuation correction were artefacts caused by attenuation. Pulmonary density in centre of right lung, obtained from the transmission measurement, was 0.28 ± 0.03 g/ml in normal subjects. Mean density was lower in large lungs compared to smaller ones. We also showed that regional ventilation/perfusion ratios could be measured with SPECT, using the readily available tracer ¹³³Xe. Because of the low energy of ¹³³Xe this relies heavily upon attenuation correction. A commercially available system for attenuation correction with simultaneous emission and transmission, considered to improve myocardial SPECT, performed erroneously. This could lead to clinical misjudgement. We considered that manufacturer-independent pre-clinical tests are required. In a test of two other commercial systems, based on different principles, an adapted variant of our initial protocol was proven useful. Only one of the systems provided correct emission count rates independently of phantom configuration. Errors in the other system were related to inadequate compensation of the influence of emission activity on the transmission study.

  14. Optical correction of refractive error for preventing and treating eye symptoms in computer users.

    Science.gov (United States)

    Heus, Pauline; Verbeek, Jos H; Tikka, Christina

    2018-04-10

    Computer users frequently complain about problems with seeing and functioning of the eyes. Asthenopia is a term generally used to describe symptoms related to (prolonged) use of the eyes like ocular fatigue, headache, pain or aching around the eyes, and burning and itchiness of the eyelids. The prevalence of asthenopia during or after work on a computer ranges from 46.3% to 68.5%. Uncorrected or under-corrected refractive error can contribute to the development of asthenopia. A refractive error is an error in the focusing of light by the eye and can lead to reduced visual acuity. There are various possibilities for optical correction of refractive errors including eyeglasses, contact lenses and refractive surgery. To examine the evidence on the effectiveness, safety and applicability of optical correction of refractive error for reducing and preventing eye symptoms in computer users. We searched the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; Embase; Web of Science; and OSH update, all to 20 December 2017. Additionally, we searched trial registries and checked references of included studies. We included randomised controlled trials (RCTs) and quasi-randomised trials of interventions evaluating optical correction for computer workers with refractive error for preventing or treating asthenopia and their effect on health related quality of life. Two authors independently assessed study eligibility and risk of bias, and extracted data. Where appropriate, we combined studies in a meta-analysis. We included eight studies with 381 participants. Three were parallel group RCTs, three were cross-over RCTs and two were quasi-randomised cross-over trials. All studies evaluated eyeglasses, there were no studies that evaluated contact lenses or surgery. Seven studies evaluated computer glasses with at least one focal area for the distance of the computer screen with or without additional focal areas in presbyopic persons. Six studies compared computer

  15. Attenuation correction of myocardial SPECT images with X-ray CT. Effects of registration errors between X-ray CT and SPECT

    International Nuclear Information System (INIS)

    Takahashi, Yasuyuki; Murase, Kenya; Mochizuki, Teruhito; Motomura, Nobutoku

    2002-01-01

    Attenuation correction with an X-ray CT image is a new method to correct attenuation on SPECT imaging, but the effect of the registration errors between CT and SPECT images is unclear. In this study, we investigated the effects of the registration errors on myocardial SPECT, analyzing data from a phantom and a human volunteer. Registration (fusion) of the X-ray CT and SPECT images was done with standard packaged software in a three-dimensional fashion, by using linked transaxial, coronal and sagittal images. In the phantom study, an X-ray CT image was shifted 1 to 3 pixels on the x, y and z axes, and rotated 6 degrees clockwise. Attenuation correction maps generated from each misaligned X-ray CT image were used to reconstruct misaligned SPECT images of the phantom filled with ²⁰¹Tl. In a human volunteer, X-ray CT was acquired in different conditions (during inspiration vs. expiration). CT values were transferred to an attenuation constant by using straight lines; an attenuation constant of 0/cm in the air (CT value=-1,000 HU) and that of 0.150/cm in water (CT value=0 HU). For comparison, attenuation correction with transmission CT (TCT) data and an external γ-ray source (⁹⁹ᵐTc) was also applied to reconstruct SPECT images. Simulated breast attenuation with a breast attachment, and inferior wall attenuation were properly corrected by means of the attenuation correction map generated from X-ray CT. As pixel shift increased, deviation of the SPECT images increased in misaligned images in the phantom study. In the human study, SPECT images were affected by the scan conditions of the X-ray CT. Attenuation correction of myocardial SPECT with an X-ray CT image is a simple and potentially beneficial method for clinical use, but accurate registration of the X-ray CT to SPECT image is essential for satisfactory attenuation correction. (author)
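
    The HU-to-attenuation mapping quoted above (0/cm at -1000 HU in air, 0.150/cm at 0 HU in water) can be written as a single straight line; the clipping and the handling of values above 0 HU in this sketch are assumptions, since the abstract only specifies the two anchor points.

```python
# Sketch of the straight-line HU-to-attenuation mapping described in the abstract.
import numpy as np

def hu_to_mu(hu):
    """Map CT numbers (HU) to linear attenuation coefficients (1/cm) at the SPECT energy."""
    hu = np.asarray(hu, dtype=float)
    mu = 0.150 * (hu + 1000.0) / 1000.0    # line through (-1000 HU, 0) and (0 HU, 0.150)
    return np.clip(mu, 0.0, None)          # assumption: no negative attenuation

print(hu_to_mu([-1000, -500, 0]))          # -> [0.    0.075 0.15 ]
```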

  16. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i
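
    The threshold behaviour described above can be illustrated with a much simpler toy model than the paper's JUMPIQCS simulation: a classical 3-bit repetition code under independent bit-flip noise, where encoding only helps while the physical flip probability stays below a threshold. The sketch below is purely illustrative and is not the authors' fault-tolerant Steane-code simulation.

```python
# Toy illustration of threshold behaviour (not the JUMPIQCS simulation): a 3-bit
# repetition code under independent bit flips fails only when 2 or 3 bits flip,
# so encoding helps only while the physical flip probability p stays below 0.5.
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, trials=200_000):
    flips = rng.random((trials, 3)) < p          # bit-flip events on the three copies
    return np.mean(flips.sum(axis=1) >= 2)       # majority vote gives the wrong answer

for p in (0.01, 0.1, 0.3, 0.5):
    print(f"p = {p:0.2f}   unencoded = {p:0.3f}   encoded = {logical_error_rate(p):0.3f}")
```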

  17. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  18. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  19. Characterizing a four-qubit planar lattice for arbitrary error detection

    Science.gov (United States)

    Chow, Jerry M.; Srinivasan, Srikanth J.; Magesan, Easwar; Córcoles, A. D.; Abraham, David W.; Gambetta, Jay M.; Steffen, Matthias

    2015-05-01

    Quantum error correction will be a necessary component towards realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and only requires a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to steadily improve in coherence, gate, and readout fidelities, to become a leading candidate for implementation into larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. Insights into the particular qubit design and comparison between simulated parameters and experimentally determined parameters are given. Single- and two-qubit gate tune-up procedures are described and results for simultaneously benchmarking pairs of two-qubit gates are given. All controls are eventually used for an arbitrary error detection protocol described in separate work [Corcoles et al., Nature Communications, 6, 2015].

  20. Field correction for a one meter long permanent-magnet wiggler

    International Nuclear Information System (INIS)

    Fortgang, C.M.

    1992-01-01

    Field errors in wigglers are usually measured and corrected on-axis only, thus ignoring field error gradients. We find that gradient scale lengths are of the same order as electron beam size and therefore can be important. We report measurements of wiggler field errors in three dimensions and expansion of these errors out to first order (including two dipole and two quadrupole components). Conventional techniques for correcting on-axis errors (order zero) create new off-axis (first order) errors. We present a new approach to correcting wiggler fields out to first order. By correcting quadrupole errors in addition to the usual dipole correction, we minimize growth in electron beam size. Correction to first order yields better overlap between the electron and optical beams and should improve laser gain. (Author) 2 refs., 5 figs

  1. Packet reversed packet combining scheme

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2006-07-01

    The packet combining scheme is a well-defined, simple error correction scheme that works on erroneous copies at the receiver. Combined with ARQ protocols in networks, it offers higher throughput than basic ARQ protocols alone. However, the packet combining scheme fails to correct errors when the errors occur in the same bit locations of two erroneous copies. In the present work, we propose a scheme that will correct errors even if they occur at the same bit locations of the erroneous copies. The proposed scheme, when combined with an ARQ protocol, will offer higher throughput. (author)
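
    A sketch of the basic packet-combining idea follows (the baseline scheme discussed above, not the authors' proposed variant): XOR two erroneous copies to locate the bit positions on which they disagree, then search the flip combinations at those positions for a candidate that passes an integrity check. The function and parameter names are assumptions.

```python
# Sketch of basic packet combining (the baseline scheme, not the proposed variant).
from itertools import product

def combine(copy_a, copy_b, check):
    """copy_a, copy_b: equal-length byte strings; check(candidate) is True for a valid packet."""
    diff_bits = [i for i in range(len(copy_a) * 8)
                 if (copy_a[i // 8] ^ copy_b[i // 8]) >> (7 - i % 8) & 1]
    for choice in product((0, 1), repeat=len(diff_bits)):   # try every disagreement pattern
        candidate = bytearray(copy_a)
        for bit, take_b in zip(diff_bits, choice):
            if take_b:
                candidate[bit // 8] ^= 1 << (7 - bit % 8)
        if check(bytes(candidate)):
            return bytes(candidate)
    return None   # fails when both copies are corrupted in the same bit positions
```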

  2. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    Science.gov (United States)

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  3. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  4. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect the road profiles traversed by a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitude signals are separated from the low-frequency road profile by the method of sliding block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road profile measurement on a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
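
    A minimal sketch of the vibration-correction step described above follows: integrate the measured vertical acceleration twice to estimate body displacement, high-pass filter it to suppress integration drift, and subtract it from the laser height signal. The filter order and cutoff are assumptions, not the paper's settings.

```python
# Minimal sketch of the vibration-correction step (assumed filter settings).
import numpy as np
from scipy import integrate, signal

def correct_profile(laser_height, accel_z, fs, cutoff_hz=0.5):
    """Remove vehicle-body vertical motion from laser height readings sampled at fs (Hz)."""
    vel = integrate.cumulative_trapezoid(accel_z, dx=1.0 / fs, initial=0)
    disp = integrate.cumulative_trapezoid(vel, dx=1.0 / fs, initial=0)
    b, a = signal.butter(2, cutoff_hz / (fs / 2), btype="highpass")
    disp_hp = signal.filtfilt(b, a, disp)          # suppress double-integration drift
    return laser_height - disp_hp
```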

  5. The use of concept maps to detect and correct concept errors (mistakes

    Directory of Open Access Journals (Sweden)

    Ladislada del Puy Molina Azcárate

    2013-02-01

    Full Text Available This work proposes to detect and correct concept errors (EECC) in order to achieve Meaningful Learning (AS). The behaviourist model does not respond to the demand of meaningful learning, which implies bringing together thought, feeling and action to lead students towards both commitment and responsibility. In order to respond to society's competition over knowledge and information, it is necessary to change the way of teaching and learning (from a behaviourist model to a constructivist model). In this context it is important not only to learn meaningfully but also to create knowledge so as to develop dissertive, creative and critical thought, and the EECC are an obstacle to achieving this. This study tries to get rid of EECC in order to achieve meaningful learning. For this, it is essential to elaborate a Teaching Module (MI). This Teaching Module implies the treatment of concept errors by a teacher able to change the dynamics of the group in the classroom. The MI was used with sixth-grade primary school and first-grade secondary school students in some state-assisted schools in the north of Argentina (Tucumán and Jujuy). After evaluation, the results showed great and positive changes in the experimental groups with respect to both attitude and academic results. Meaningful learning was shown through the pupils' creativity, their expression and also their ability to put it into practice in everyday life.

  6. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    Science.gov (United States)

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.

  7. Attenuation correction in pulmonary and myocardial single photon emission computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Almquist, H

    2000-01-01

    The objective was to develop and validate methods for single photon emission computed tomography, SPECT, allowing quantitative physiologic and diagnostic studies of lung and heart. A method for correction of variable attenuation in SPECT, based on transmission measurements before administration of an isotope to the subject, was developed and evaluated. A protocol based upon geometrically well defined phantoms was developed. In a mosaic pattern phantom count rates were corrected from 39-43% to 101-110% of reference. In healthy subjects non-gravitational pulmonary perfusion gradients observed without attenuation correction were artefacts caused by attenuation. Pulmonary density in centre of right lung, obtained from the transmission measurement, was 0.28 ± 0.03 g/ml in normal subjects. Mean density was lower in large lungs compared to smaller ones. We also showed that regional ventilation/perfusion ratios could be measured with SPECT, using the readily available tracer ¹³³Xe. Because of the low energy of ¹³³Xe this relies heavily upon attenuation correction. A commercially available system for attenuation correction with simultaneous emission and transmission, considered to improve myocardial SPECT, performed erroneously. This could lead to clinical misjudgement. We considered that manufacturer-independent pre-clinical tests are required. In a test of two other commercial systems, based on different principles, an adapted variant of our initial protocol was proven useful. Only one of the systems provided correct emission count rates independently of phantom configuration. Errors in the other system were related to inadequate compensation of the influence of emission activity on the transmission study.

  8. Analysis of limiting information characteristics of quantum-cryptography protocols

    International Nuclear Information System (INIS)

    Sych, D V; Grishanin, Boris A; Zadkov, Viktor N

    2005-01-01

    The problem of increasing the critical error rate of quantum-cryptography protocols by varying a set of letters in a quantum alphabet for space of a fixed dimensionality is studied. Quantum alphabets forming regular polyhedra on the Bloch sphere and the continual alphabet equally including all the quantum states are considered. It is shown that, in the absence of basis reconciliation, a protocol with the tetrahedral alphabet has the highest critical error rate among the protocols considered, while after the basis reconciliation, a protocol with the continual alphabet possesses the highest critical error rate. (quantum optics and quantum computation)

  9. Initial Results of Using Daily CT Localization to Correct Portal Error in Prostate Cancer

    International Nuclear Information System (INIS)

    Lattanzi, Joseph; McNeely, Shawn; Barnes, Scott; Das, Indra; Schultheiss, Timothy E; Hanks, Gerald E.

    1997-01-01

    Purpose: To evaluate the use of daily CT simulation in prostate cancer to correct errors in portal placement and organ motion. Improved localization with this technique should allow the reduction of target margins and facilitate dose escalation in high risk patients while minimizing the risk of normal tissue morbidity. Methods and Materials: Five patients underwent standard CT simulation with the alpha cradle cast, IV contrast, and urethrogram. All were initially treated to 46 Gy in a four field conformal technique which included the prostate, seminal vesicles and pelvic lymph nodes (GTV1). The prostate or prostate and seminal vesicles (GTV2) then received 56 Gy with a 1.0 cm margin to the PTV. At 50 Gy a second CT simulation was performed with IV contrast, urethrogram and the alpha cradle secured to a rigid sliding board. The prostate was contoured, a new isocenter generated, and surface markers placed. Prostate-only treatment portals for the final conedown (GTV3) were created with 0.25 cm isodose margins to the PTV. The final six fractions in 2 patients with favorable disease and eight fractions in 3 patients with unfavorable disease were delivered using the daily CT technique. On each treatment day the patient was placed in his cast on the sliding board and a CT scan performed. The daily isocenter was calculated in the A/P and lateral dimension and compared to the 50 Gy CT simulation isocenter. Couch and surface marker shifts were calculated to produce perfect portal alignment. To maintain positioning, the patient was transferred to a gurney while on the sliding board in his cast, transported to the treatment room and then transferred to the treatment couch. The patient was then treated to the corrected isocenter. Portal films and real time images were obtained for each portal. Results: Utilizing CT-CT image registration (fusion) of the daily and 50 Gy baseline CT scans the isocenter changes were quantified to reflect the contribution of positional

  10. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  11. Control of Human Error and Comparison of Risk Levels after Corrective Action with the SHERPA Method in a Control Room of the Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Full Text Available Background and aims: Today, in many fields such as the nuclear, military and chemical industries, human errors may result in disaster. Accidents in different parts of the world emphasize this point; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Identification of human errors, especially in important and intricate systems, is therefore necessary and unavoidable for devising control methods.   Methods: This research is a case study performed at the Zagross Methanol Company in Asalouye (South Pars). A walk-through/talk-through method with process experts and control room operators, together with inspection of technical documents, was used to collect the required information and complete the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets.   Results: Analysis of the SHERPA worksheets indicated 71.25% unacceptable errors, 26.75% undesirable errors, 2% acceptable (with change) errors and 0% acceptable errors; after corrective action, the forecast risk levels were 0% unacceptable errors, 4.35% undesirable errors, 58.55% acceptable (with change) errors and 37.1% acceptable errors.   Conclusion: The overall finding is that this method is applicable and useful in different industries, especially the chemical industry, for identifying human errors that may lead to accidents and incidents.

  12. Probability of error in information-hiding protocols

    NARCIS (Netherlands)

    Chatzikokolakis, K.; Palamidessi, C.; Panangaden, P.

    2007-01-01

    Randomized protocols for hiding private information can fruitfully be regarded as noisy channels in the information-theoretic sense, and the inference of the concealed information can be regarded as a hypothesis-testing problem. We consider the Bayesian approach to the problem, and investigate the

  13. Effects of MRI Protocol Parameters, Preload Injection Dose, Fractionation Strategies, and Leakage Correction Algorithms on the Fidelity of Dynamic-Susceptibility Contrast MRI Estimates of Relative Cerebral Blood Volume in Gliomas.

    Science.gov (United States)

    Leu, K; Boxerman, J L; Ellingson, B M

    2017-03-01

    DSC perfusion MR imaging assumes that the contrast agent remains intravascular; thus, disruptions in the blood-brain barrier common in brain tumors can lead to errors in the estimation of relative CBV. Acquisition strategies, including the choice of flip angle, TE, TR, and preload dose and incubation time, along with post hoc leakage-correction algorithms, have been proposed as means for combating these leakage effects. In the current study, we used DSC-MR imaging simulations to examine the influence of these various acquisition parameters and leakage-correction strategies on the faithful estimation of CBV. DSC-MR imaging simulations were performed in 250 tumors with perfusion characteristics randomly generated from the distributions of real tumor population data, and comparison of leakage-corrected CBV was performed with a theoretic curve with no permeability. Optimal strategies were determined by protocol with the lowest mean error. The following acquisition strategies (flip angle/TE/TR and contrast dose allocation for preload and bolus) produced high CBV fidelity, as measured by the percentage difference from a hypothetic tumor with no leakage: 1) 35°/35 ms/1.5 seconds with no preload and full dose for DSC-MR imaging, 2) 35°/25 ms/1.5 seconds with ¼ dose preload and ¾ dose bolus, 3) 60°/35 ms/2.0 seconds with ½ dose preload and ½ dose bolus, and 4) 60°/35 ms/1.0 second with 1 dose preload and 1 dose bolus. Results suggest that a variety of strategies can yield similarly high fidelity in CBV estimation, namely those that balance T1- and T2*-relaxation effects due to contrast agent extravasation. © 2017 by American Journal of Neuroradiology.

  14. Gaussian Error Correction of Quantum States in a Correlated Noisy Channel

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Berni, Adriano; Madsen, Lars Skovgaard

    2013-01-01

    Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels exhibit pronounced non-Markovian behavior, the proposed protocol will have immediate implications for improving the performance of various quantum information protocols.

  15. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms; the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, a geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion corrected B3LYP with the Boys and Bernadi counterpoise correction (bCP)), by taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as 'a golden standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit it is a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Using machine learning to speed up manual image annotation: application to a 3D imaging protocol for measuring single cell gene expression in the developing C. elegans embryo

    Directory of Open Access Journals (Sweden)

    Waterston Robert H

    2010-02-01

    Full Text Available Abstract Background Image analysis is an essential component in many biological experiments that study gene expression, cell cycle progression, and protein localization. A protocol for tracking the expression of individual C. elegans genes was developed that collects image samples of a developing embryo by 3-D time lapse microscopy. In this protocol, a program called StarryNite performs the automatic recognition of fluorescently labeled cells and traces their lineage. However, due to the amount of noise present in the data and due to the challenges introduced by the increasing number of cells in later stages of development, this program is not error free. In the current version, the error correction (i.e., editing) is performed manually using a graphical interface tool named AceTree, which is specifically developed for this task. For a single experiment, this manual annotation task takes several hours. Results In this paper, we reduce the time required to correct errors made by StarryNite. We target one of the most frequent error types (movements annotated as divisions) and train a support vector machine (SVM) classifier to decide whether a division call made by StarryNite is correct or not. We show, via cross-validation experiments on several benchmark data sets, that the SVM successfully identifies this type of error. A new version of StarryNite that includes the trained SVM classifier is available at http://starrynite.sourceforge.net. Conclusions We demonstrate the utility of a machine learning approach to error annotation for StarryNite. In the process, we also provide some general methodologies for developing and validating a classifier with respect to a given pattern recognition task.
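
    A hedged sketch of the classification step described above follows: train an SVM to decide whether a division call is genuine or a mis-annotated movement. The feature set and file names are placeholders; the published classifier and its features ship with the StarryNite release.

```python
# Hedged sketch of the division-call classifier (placeholder features and file names).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("division_features.npy")   # per-call features, e.g. nuclei distance, intensity ratio
y = np.load("division_labels.npy")     # 1 = true division, 0 = movement mislabelled as division

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```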

  17. Protocol Implementation Generator

    DEFF Research Database (Denmark)

    Carvalho Quaresma, Jose Nuno; Probst, Christian W.

    2010-01-01

    Users expect communication systems to guarantee, amongst others, privacy and integrity of their data. These can be ensured by using well-established protocols; the best protocol, however, is useless if not all parties involved in a communication have a correct implementation of the protocol and the necessary tools. In this paper, we present the Protocol Implementation Generator (PiG), a framework that can be used to add protocol generation to protocol negotiation, or to easily share and implement new protocols throughout a network. PiG enables the sharing, verification, and translation of protocols, building on a generator framework based on the LySatool and a translator from the LySa language into C or Java.

  18. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the calculated reaction energies on the applied exchange–correlation functional rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy-corrected OCO backbone.

  19. Quantum error correction with spins in diamond

    NARCIS (Netherlands)

    Cramer, J.

    2016-01-01

    Digital information based on the laws of quantum mechanics promises powerful new ways of computation and communication. However, quantum information is very fragile; inevitable errors continuously build up and eventually all information is lost. Therefore, realistic large-scale quantum information

  20. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
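
    A small sketch of ternary ECOC decoding follows, in which the zero ("do not care") entries of the coding matrix are simply ignored when comparing a predicted codeword against each class row. This is a generic normalised Hamming decoder for illustration, not the specific decoding measures proposed in the paper.

```python
# Generic ternary ECOC decoder that ignores "do not care" (zero) positions.
import numpy as np

def ecoc_decode(code_matrix, predictions):
    """code_matrix: (n_classes, n_classifiers) in {-1, 0, +1}; predictions in {-1, +1}."""
    mask = code_matrix != 0                                    # positions each class cares about
    disagreements = (code_matrix != predictions) & mask
    distance = disagreements.sum(axis=1) / mask.sum(axis=1)    # normalised Hamming distance
    return int(np.argmin(distance))

M = np.array([[ 1,  1,  0, -1],
              [-1,  0,  1,  1],
              [ 0, -1, -1,  1]])
print(ecoc_decode(M, np.array([1, -1, -1, 1])))   # -> 2
```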

  1. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin

    2009-01-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.

  2. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  3. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  5. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising of software and hardware components in safety related and high safety related applications, it is necessary to meet the Marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis on the residual critical error number in software. Conventional models lack this ability and right now, there are no methods that forecast critical errors. The new method will show that an estimate of the residual error number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected value-function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. Also, the presented method makes it possible to make an estimate on the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes - detection and correction process.

  6. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  7. Probabilistic Chain Teleportation of a Qutrit-State

    International Nuclear Information System (INIS)

    Wang Meiyu; Yan Fengli

    2010-01-01

    We investigate chain teleportation of a qutrit state via non-maximally entangled two-qutrit channels. For the case of four parties, the efficiencies of two chain teleportation protocols, the separate chain teleportation protocol (SCTP) and the global chain teleportation protocol (GCTP), are calculated. In SCTP the errors are corrected after every step, while in GCTP the errors are corrected only at the end. Furthermore, we present a piecewise global chain teleportation protocol (PGCTP) that avoids the inconvenient error correction of GCTP. We show that PGCTP is more efficient than SCTP. (general)

  8. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  9. Mapping and correction of the CMM workspace error with the use of an electronic gyroscope and neural networks--practical application.

    Science.gov (United States)

    Swornowski, Pawel J

    2013-01-01

    The article presents the application of neural networks in determining and correcting the deformation of a coordinate measuring machine (CMM) workspace. The information about the CMM errors is acquired using an ADXRS401 electronic gyroscope. A test device (PS-20 module) was built and integrated with a commercial measurement system based on the SP25M passive scanning probe and with a PH10M module (Renishaw). The proposed solution was tested on a Kemco 600 CMM and on a DEA Global Clima CMM. In the former case, correction of the CMM errors was performed using the source code of the WinIOS software owned by The Institute of Advanced Manufacturing Technology, Cracow, Poland, and in the latter on an external PC. Optimum parameters of full and simplified mapping of a given layer of the CMM workspace were determined for practical applications. The proposed method can be employed for the interim check (ISO 10360-2 procedure) or to detect local CMM deformations occurring when the CMM works at high scanning speeds (>20 mm/s). © Wiley Periodicals, Inc.

  10. Efficacy of surface error corrections to density functional theory calculations of vacancy formation energy in transition metals.

    Science.gov (United States)

    Nandi, Prithwish Kumar; Valsakumar, M C; Chandra, Sharat; Sahu, H K; Sundar, C S

    2010-09-01

    We calculate properties like equilibrium lattice parameter, bulk modulus and monovacancy formation energy for nickel (Ni), iron (Fe) and chromium (Cr) using Kohn-Sham density functional theory (DFT). We compare the relative performance of local density approximation (LDA) and generalized gradient approximation (GGA) for predicting such physical properties for these metals. We also make a relative study between two different flavors of GGA exchange correlation functional, namely PW91 and PBE. These calculations show that there is a discrepancy between DFT calculations and experimental data. In order to understand this discrepancy in the calculation of vacancy formation energy, we introduce a correction for the surface intrinsic error corresponding to an exchange correlation functional using the scheme implemented by Mattsson et al (2006 Phys. Rev. B 73 195123) and compare the effectiveness of the correction scheme for Al and the 3d transition metals.

  11. On the Limitations of Variational Bias Correction

    Science.gov (United States)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, the forward operator error, and the observation error, so that all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
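
    For context, variational bias correction is usually formulated as a linear bias model in a set of predictors whose coefficients are estimated jointly with the analysis. The generic form below is a sketch using standard notation (B and R the background and observation error covariances, H the forward operator, p_i the bias predictors, beta_b a prior estimate of the coefficients), not the specific formulation assessed in the record. It also makes the record's point visible: any systematic signal in the innovation y - H(x) is absorbed into beta, whatever its true source.

        b(\beta, x) = \sum_{i=1}^{N_p} \beta_i \, p_i(x)

        J(\delta x, \beta) = \tfrac{1}{2}\,\delta x^{\top} B^{-1}\,\delta x
          + \tfrac{1}{2}\,(\beta - \beta_b)^{\top} B_{\beta}^{-1}\,(\beta - \beta_b)
          + \tfrac{1}{2}\,\big(y - H(x) - b(\beta, x)\big)^{\top} R^{-1}\,\big(y - H(x) - b(\beta, x)\big)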

  12. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions that are not attainable. Other plans may require the introduction of means of correcting for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed

  13. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    Science.gov (United States)

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Reduction of the elevator illusion from continued hypergravity exposure and visual error-corrective feedback

    Science.gov (United States)

    Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.

    1996-01-01

    Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.

  15. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies

  16. The benefit of generating errors during learning.

    Science.gov (United States)

    Potts, Rosalind; Shanks, David R

    2014-04-01

    Testing has been found to be a powerful learning tool, but educators might be reluctant to make full use of its benefits for fear that any errors made would be harmful to learning. We asked whether testing could be beneficial to memory even during novel learning, when nearly all responses were errors, and where errors were unlikely to be related to either cues or targets. In 4 experiments, participants learned definitions for unfamiliar English words, or translations for foreign vocabulary, by generating a response and being given corrective feedback, by reading the word and its definition or translation, or by selecting from a choice of definitions or translations followed by feedback. In a final test of all words, generating errors followed by feedback led to significantly better memory for the correct definition or translation than either reading or making incorrect choices, suggesting that the benefits of generation are not restricted to correctly generated items. Even when information to be learned is novel, errorful generation may play a powerful role in potentiating encoding of corrective feedback. Experiments 2A, 2B, and 3 revealed, via metacognitive judgments of learning, that participants are strikingly unaware of this benefit, judging errorful generation to be a less effective encoding method than reading or incorrect choosing, when in fact it was better. Predictions reflected participants' subjective experience during learning. If subjective difficulty leads to more effort at encoding, this could at least partly explain the errorful generation advantage.

  17. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Science.gov (United States)

    Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.

    2018-02-01

    Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
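
    For readers unfamiliar with the quantum subspace expansion mentioned above, its classical post-processing step amounts to solving a small generalized eigenvalue problem built from measured matrix elements. The sketch below is a generic illustration of that step with made-up 2x2 matrices; it is not data or code from the cited experiment.

        import numpy as np
        from scipy.linalg import eigh

        # H[i, j] = <psi| O_i^dag H O_j |psi>, S[i, j] = <psi| O_i^dag O_j |psi>,
        # where |psi> is the VQE ground state and {O_i} are expansion operators
        # (e.g., single excitations). The values below are placeholders.
        H = np.array([[-1.85, 0.12],
                      [ 0.12, -0.95]])
        S = np.array([[1.00, 0.05],
                      [0.05, 1.00]])

        # Generalized eigenproblem H c = E S c; the eigenvalues approximate ground
        # and excited-state energies within the expanded subspace.
        energies, coeffs = eigh(H, S)
        print(energies)  # lowest value approximates the (error-mitigated) ground state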

  18. Measurements on pointing error and field of view of Cimel-318 Sun photometers in the scope of AERONET

    Directory of Open Access Journals (Sweden)

    B. Torres

    2013-08-01

    Full Text Available Sensitivity studies indicate that among the diverse error sources of ground-based sky radiometer observations, the pointing error plays an important role in the correct retrieval of aerosol properties. Accurate pointing is especially critical for the characterization of desert dust aerosol. The present work relies on the analysis of two new measurement procedures (cross and matrix) specifically designed for the evaluation of the pointing error in the standard instrument of the Aerosol Robotic Network (AERONET), the Cimel CE-318 Sun photometer. The first part of the analysis contains a preliminary study whose results conclude on the need for a Sun-movement correction for an accurate evaluation of the pointing error from both new measurements. Once this correction is applied, both measurements show equivalent results, with differences under 0.01° in the pointing error estimations. The second part of the analysis includes the incorporation of the cross procedure into the AERONET routine measurement protocol in order to monitor the pointing error in field instruments. The pointing error was evaluated using the data collected for more than a year in 7 Sun photometers belonging to AERONET sites. The registered pointing error values were generally smaller than 0.1°, though in some instruments values up to 0.3° have been observed. Moreover, the pointing error analysis shows that this measurement can be useful to detect mechanical problems in the robots or dirtiness in the 4-quadrant detector used to track the Sun. Specifically, these mechanical faults can be detected due to the stable behavior of the values over time and versus the solar zenith angle. Finally, the matrix procedure can be used to derive the value of the solid view angle of the instruments. The methodology has been implemented and applied for the characterization of 5 Sun photometers. To validate the method, a comparison with solid angles obtained from the vicarious calibration method was

  19. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    Science.gov (United States)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  20. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    Science.gov (United States)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-01-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. Methods: We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image. For the four ultra-low dose levels
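
    The penalized weighted least squares reconstruction mentioned above is typically posed as the optimization below. The notation is generic (A the system matrix, w_i statistical weights, beta the regularization strength, epsilon a small smoothing constant making the TV term differentiable) and is a sketch of the usual PWLS-TV form rather than the exact cost function used in the study.

        \hat{x} = \arg\min_{x \ge 0}\; \sum_i w_i \big(y_i - [Ax]_i\big)^2 + \beta\,\mathrm{TV}(x),
        \qquad \mathrm{TV}(x) = \sum_j \sqrt{\|\nabla_j x\|^2 + \epsilon^2}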

  1. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)
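
    The low-error-rate "counting" regime described above is often summarized by a heuristic scaling law: for a distance-d code decoded by minimum-weight matching, a logical failure requires on the order of d/2 physical errors, so the leading behaviour at small physical error rate p is commonly quoted as below. This is only an illustrative approximation and not the paper's precise scaling analysis or prefactors.

        P_L \;\propto\; p^{\lceil d/2 \rceil} \qquad (p \to 0)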

  2. IDENTIFYING BANK LENDING CHANNEL IN INDONESIA: A VECTOR ERROR CORRECTION APPROACH WITH STRUCTURAL BREAK

    Directory of Open Access Journals (Sweden)

    Akhsyim Afandi

    2017-03-01

    Full Text Available For monetary policy to work through the bank lending channel, a monetary-induced change in bank loans is required to originate from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors, identified as the bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.
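
    As a rough illustration of estimating a VEC model with two cointegrating relations, the sketch below uses the statsmodels VECM class with placeholder data and variable names; it is not the authors' estimation code, the file name is hypothetical, and the structural-break-aware unit root and cointegration tests described in the record are not shown.

        import pandas as pd
        from statsmodels.tsa.vector_ar.vecm import VECM

        # Placeholder dataset: e.g., loans, lending rate, output, prices, policy rate.
        data = pd.read_csv("bank_credit_market.csv", index_col=0, parse_dates=True)

        # Two cointegrating vectors (interpreted as loan supply and loan demand relations),
        # one lag in differences, constant restricted to the cointegration relation.
        model = VECM(data, k_ar_diff=1, coint_rank=2, deterministic="ci")
        res = model.fit()

        # Adjustment (loading) coefficients alpha: relatively large, significant loadings
        # on the supply relation would indicate loans adjust mainly toward the supply equation.
        print(res.alpha)
        print(res.beta)  # cointegrating vectors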

  3. Simulation of a MR–PET protocol for staging of head-and-neck cancer including Dixon MR for attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Eiber, Matthias, E-mail: matthias.eiber@tum.de [Department of Radiology, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Souvatzoglou, Michael, E-mail: msouvatz@yahoo.de [Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Pickhard, Anja, E-mail: a.pickhard@lrz.tum.de [Department of Otorhinolaryngology, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Loeffelbein, Denys J., E-mail: denys.loeffelbein@gmx.de [Department of Maxillofacial Surgery, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Knopf, Andreas, E-mail: andreas.knopf@tum.de [Department of Otorhinolaryngology, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Holzapfel, Konstantin, E-mail: holzapfel@roe.med.tum.de [Department of Radiology, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); Martinez-Möller, Axel, E-mail: a.martinez-moller@lrz.tu-muenchen.de [Department of Nuclear Medicine, Klinikum rechts der Isar, Technische Universität München, Ismaninger Str. 22, 81675 Munich (Germany); and others

    2012-10-15

    Purpose: To simulate and optimize a MR protocol for squamous cell cancer of the head and neck (HNSCC) patients for potential future use in an integrated whole-body MR–PET scanner. Materials and methods: On a clinical 3T scanner, which is the basis for a recently introduced fully integrated whole-body MR–PET, 20 patients with untreated HNSCC routinely staged with 18F-FDG PET/CT underwent a dedicated MR protocol for the neck. Moreover, a whole-body Dixon MR-sequence was applied, which is used for attenuation correction on a recently introduced hybrid MR–PET scanner. In a subset of patients volume-interpolated-breathhold (VIBE) T1w-sequences for lungs and liver were added. Total imaging time was analyzed for both groups. The quality of the delineation of the primary tumor (scale 0–3) and the presence or absence of lymph node metastases (scale 1–5) was evaluated for CT, MR, PET/CT and a combination of MR and PET to ensure that the MR–PET fusion does not cause a loss of diagnostic capability. PET was used to identify distant metastases. The PET dataset for simulated MR/PET was based on a segmentation of the CT data into 4 classes according to the approach of the Dixon MR-sequence for MR–PET. Standard of reference was histopathology in 19 cases. In one case no histopathological confirmation of a primary tumor could be achieved. Results: Mean imaging time was 35:17 min (range: 31:08–42:42 min) for the protocol including sequences for local staging and attenuation correction and 44:17 min (range: 35:44–54:58) for the extended protocol. Although not statistically significant a combination of MR and PET performed better in the delineation of the primary tumor (mean 2.20) compared to CT (mean 1.40), MR (1.95) and PET/CT (2.15) especially in patients with dental implants. PET/CT and combining MR and PET performed slightly better than CT and MR for the assessment of lymph node metastases. Two patients with distant metastases were only identified by PET

  4. Exploiting the Error-Correcting Capabilities of Low Density Parity Check Codes in Distributed Video Coding using Optical Flow

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo

    2012-01-01

    We consider Distributed Video Coding (DVC) in the presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and three flows are combined. Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation adds a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and the handling of large motion. The three flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method, with average PSNR improvements of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...

  5. An Energy-Efficient Link Layer Protocol for Reliable Transmission over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Iqbal Adnan

    2009-01-01

    Full Text Available In multihop wireless networks, hop-by-hop reliability is generally achieved through positive acknowledgments at the MAC layer. However, positive acknowledgments introduce significant energy inefficiencies on battery-constrained devices. This inefficiency becomes particularly significant on high error rate channels. We propose to reduce the energy consumption during retransmissions using a novel protocol that localizes bit-errors at the MAC layer. The proposed protocol, referred to as Selective Retransmission using Virtual Fragmentation (SRVF), requires simple modifications to the positive-ACK-based reliability mechanism but provides substantial improvements in energy efficiency. The main premise of the protocol is to localize bit-errors by performing partial checksums on disjoint parts or virtual fragments of a packet. In case of error, only the corrupted virtual fragments are retransmitted. We develop stochastic models of the Simple Positive-ACK-based reliability, the previously proposed Packet Length Optimization (PLO) protocol, and the SRVF protocol operating over an arbitrary-order Markov wireless channel. Our analytical models show that SRVF provides significant theoretical improvements in energy efficiency over existing protocols. We then use bit-error traces collected over different real networks to empirically compare the proposed and existing protocols. These experimental results further substantiate that SRVF provides considerably better energy efficiency than Simple Positive-ACK and Packet Length Optimization protocols.
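
    A minimal sketch of the partial-checksum idea behind virtual fragmentation: the packet is split into a few logical fragments, each covered by its own checksum, so the receiver can tell which fragment was corrupted and request only that part again. This is not the authors' implementation; the fragment count, the choice of CRC-32 via zlib, and the framing are illustrative assumptions.

        import zlib

        FRAGMENTS = 4  # number of virtual fragments per packet (illustrative choice)

        def fragment(payload: bytes, n=FRAGMENTS):
            size = -(-len(payload) // n)  # ceiling division
            return [payload[i * size:(i + 1) * size] for i in range(n)]

        def partial_checksums(payload: bytes, n=FRAGMENTS):
            """Sender side: one CRC-32 per virtual fragment, carried with the packet."""
            return [zlib.crc32(frag) for frag in fragment(payload, n)]

        def corrupted_fragments(received: bytes, checksums, n=FRAGMENTS):
            """Receiver side: indices of fragments whose CRC does not match."""
            return [i for i, frag in enumerate(fragment(received, n))
                    if zlib.crc32(frag) != checksums[i]]

        # Example: bit errors confined to one fragment trigger retransmission of that
        # fragment only, instead of the whole packet.
        sent = bytes(range(64)) * 4
        crcs = partial_checksums(sent)
        garbled = bytearray(sent)
        garbled[70] ^= 0xFF                               # flip bits inside the second fragment
        print(corrupted_fragments(bytes(garbled), crcs))  # -> [1]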

  6. Initialization Errors in Quantum Data Base Recall

    OpenAIRE

    Natu, Kalyani

    2016-01-01

    This paper analyzes the relationship between initialization error and recall of a specific memory in the Grover algorithm for quantum database search. It is shown that the correct memory is obtained with high probability even when the initial state is far removed from the correct one. The analysis is done by relating the variance of error in the initial state to the recovery of the correct memory and the surprising result is obtained that the relationship between the two is essentially linear.

  7. Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc

    International Nuclear Information System (INIS)

    Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.

    1983-11-01

    Given the intensity of the linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10^30 cm^-2 sec^-1 requires focusing the interacting bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size, both the second-order geometric and chromatic aberrations may be completely destructive. The concept of the second-order achromat proved to be extremely important in this respect, and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: the matching section (MS) designed for matching the phase space from the linac to the arcs, the reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second-order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of the misalignment and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, the focusing of the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly of the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables

  8. Publisher Correction: Invisible Trojan-horse attack

    DEFF Research Database (Denmark)

    Sajeed, Shihan; Minshull, Carter; Jain, Nitin

    2017-01-01

    A correction to this article has been published and is linked from the HTML version of this paper. The error has been fixed in the paper.

  9. Volume of eggs in the clutches of Grass snake Natrix natrix and Dice snake N. tessellata: error correction

    Directory of Open Access Journals (Sweden)

    Klenina Anastasiya Aleksandrovna

    2015-12-01

    Full Text Available The authors have made a mistake in calculating the volume of eggs in the clutches of the snake family Natrix. In this article we correct the error. As a result, it was revealed that the volume of eggs positively correlates with the female's length and mass, as well as with the number of eggs in the clutch. There is a positive correlation between the characteristics of newborn snakes (length and mass) and the volume of the eggs from which they hatched.

  10. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm^3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the

  11. Partial correction of a severe molecular defect in hemophilia A, because of errors during expression of the factor VIII gene

    Energy Technology Data Exchange (ETDEWEB)

    Young, M.; Antonarakis, S.E. [Univ. of Geneva (Switzerland); Inaba, Hiroshi [Tokyo Medical College (Japan)] [and others

    1997-03-01

    Although the molecular defect in patients in a Japanese family with mild to moderately severe hemophilia A was a deletion of a single nucleotide T within an A₈TA₂ sequence of exon 14 of the factor VIII gene, the severity of the clinical phenotype did not correspond to that expected of a frameshift mutation. A small amount of functional factor VIII protein was detected in the patient's plasma. Analysis of DNA and RNA molecules from normal and affected individuals and in vitro transcription/translation suggested a partial correction of the molecular defect, because of the following: (i) DNA replication/RNA transcription errors resulting in restoration of the reading frame and/or (ii) "ribosomal frameshifting" resulting in the production of normal factor VIII polypeptide and, thus, in a milder than expected hemophilia A. All of these mechanisms probably were promoted by the longer run of adenines, A₁₀ instead of A₈TA₂, after the delT. Errors in the complex steps of gene expression therefore may partially correct a severe frameshift defect and ameliorate an expected severe phenotype. 36 refs., 6 figs.

  12. Errors and complications in surgical treatment of non-stable equino-plano-valgus foot deformity in patients with cerebral palsy, with use of the calcaneus correcting osteotomy technique

    Directory of Open Access Journals (Sweden)

    Valery V. Umnov

    2017-03-01

    Full Text Available Aims. To examine the results of treatment for patients with a non-stable form of equino-plano-valgus foot deformity in cerebral palsy with the use of corrective osteotomy of the calcaneus. To further analyze the errors and complications that occurred in patients treated with this technique. Materials and methods. From 2006 to 2014, 64 patients (103 feet) aged 3 to 17 years were operated on using the described method of calcaneus correcting osteotomy. The equinus contracture was eliminated by transection of the gastrocnemius muscle tendon and extending achilloplastic surgery. The abnormal muscle tone was reduced either by administering the drug Dysport into the gastrocnemius muscle or by selective neurotomy of the tibial nerve. Results. The analysis revealed that there were good results for 75%, satisfactory results for 18%, and unacceptable results for 7% of patients. The unacceptable results of treatment were due to several technical and tactical errors, which were grouped and analyzed. Conclusion. The analysis of errors and complications of calcaneus corrective osteotomy for patients with cerebral palsy with a mobile form of talipes equinoplanovalgus will enable their future avoidance and improve the quality of treatment.

  14. Calculation and Analysis of Differential Corrections for BeiDou

    Science.gov (United States)

    Yang, Sainan; Chen, Junping; Zhang, Yize

    2015-04-01

    The BeiDou Satellite Navigation System has been providing service for the Asia-Pacific area. BeiDou uses observations of a regional monitoring network to determine the satellite orbits, which limits the satellite orbit accuracy. The satellite clock error is produced by the time synchronization system. The time synchronization delay of the antenna device is generally obtained through prior calibration, and the residual calibration error is included in the satellite clock, which affects the prediction accuracy of the satellite clock error. In this paper, we study algorithms for BeiDou differential corrections to improve the accuracy of satellite signals and thereby the user positioning accuracy. In this algorithm, both pseudo-range and phase observations are used to calculate the differential corrections. We process pseudo-range observations to obtain an equivalent satellite clock error, which includes the satellite clock error and the orbit radial error, as well as the average projection of the orbit tangential and normal errors in combination. The epoch differences of the phase observations are processed to eliminate the ambiguity, which simplifies the algorithm and ensures the relative accuracy (the variation of the corrections between epochs). Observations from more than 10 stations in China are processed, and the equivalent clock error results are analyzed, which shows that the satellite UDRE is significantly reduced and the user location accuracy improves when the equivalent clock error corrections are applied. The residuals remaining after deducting the equivalent satellite clock error contain the projection differences of the satellite orbit error at all stations (tangential and normal errors are the main contributors). We utilize these residuals to solve for the tangential and normal orbit errors which cause the projection differences. The same observation data is processed. The results show that after calculating three-dimensional corrections, the satellite UDRE doesn't improve significantly compared to equivalent satellite clock error corrections and user
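
    As a toy sketch of the equivalent-clock idea described above: the pseudo-range residuals of one satellite (observed minus computed, with station and satellite geometry and modeled errors already removed) are averaged over the monitoring stations and expressed as a clock correction. The actual algorithm, weighting, and datum handling of the record are not reproduced; the residual values below are invented.

        import numpy as np

        C = 299_792_458.0  # speed of light, m/s

        def equivalent_clock_correction(residuals_m):
            """residuals_m: pseudo-range residuals (metres) of one satellite at many stations.

            The station-averaged residual absorbs the satellite clock error, the orbit
            radial error and the mean projection of tangential/normal orbit errors,
            which is why it acts as an 'equivalent' clock correction.
            """
            mean_residual = np.mean(residuals_m)  # unweighted mean; real systems weight by elevation etc.
            return mean_residual / C              # seconds

        # Example with made-up residuals from 10 monitoring stations (metres).
        res = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1])
        print(equivalent_clock_correction(res))   # roughly 7e-9 s, i.e. a few nanoseconds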

  15. Sextupole correction for a ring with large chromaticity and the influence of magnetic errors on its parameters

    International Nuclear Information System (INIS)

    Kamiya, Y.; Katoh, M.; Honjo, I.

    1987-01-01

    A future ring with a low emittance and large circumference, specifically dedicated to a synchrotron light source, will have a large chromaticity, so it is important to employ a sophisticated sextupole correction as well as a careful design of the linear lattice to obtain a stable beam. The authors tried a method of sextupole correction for a lattice with a large chromaticity and a small dispersion function. In such a lattice the sextupole magnets must be strong to compensate the chromaticity. The nonlinear effects of the sextupole magnets then become more serious than their chromatic effects. Furthermore, a ring with strong quadrupole magnets to obtain a very small emittance and strong sextupole magnets to compensate the generated chromaticity will be very sensitive to their magnetic errors. The authors also present simple formulae to evaluate the effects on the beam parameters. The details will appear in a KEK Report

  16. The Bouguer Correction Algorithm for Gravity with Limited Range

    OpenAIRE

    MA Jian; WEI Ziqing; WU Lili; YANG Zhenghui

    2017-01-01

    The Bouguer correction is an important item in gravity reduction, while the traditional Bouguer correction, whether the plane Bouguer correction or the spherical Bouguer correction, exists approximation error because of far-zone virtual terrain. The error grows as the calculation point gets higher. Therefore gravity reduction using the Bouguer correction with limited range, which was in accordance with the scope of the topographic correction, was researched in this paper. After that, a simpli...

  17. Extending Lifetime of Wireless Sensor Networks using Forward Error Correction

    DEFF Research Database (Denmark)

    Donapudi, S U; Obel, C O; Madsen, Jan

    2006-01-01

    Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...
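
    As a minimal illustration of how forward error correction can avoid retransmission energy, the sketch below uses a single XOR parity packet over a block of equal-length data packets, so that one lost packet can be rebuilt at the receiver. This is not the coding scheme of the record; the block size and framing are illustrative assumptions.

        from functools import reduce

        def xor_bytes(a: bytes, b: bytes) -> bytes:
            return bytes(x ^ y for x, y in zip(a, b))

        def parity_packet(packets):
            """Sender: XOR of all packets in the block (equal length assumed)."""
            return reduce(xor_bytes, packets)

        def recover(received, parity):
            """Receiver: rebuild the single missing packet (marked None) from the parity."""
            missing = [i for i, p in enumerate(received) if p is None]
            if len(missing) != 1:
                raise ValueError("single-parity FEC can only recover exactly one loss")
            present = [p for p in received if p is not None]
            return missing[0], reduce(xor_bytes, present + [parity])

        # Example: one packet of the block is lost; no retransmission is needed.
        block = [b"datapkt0", b"datapkt1", b"datapkt2"]
        par = parity_packet(block)
        idx, pkt = recover([block[0], None, block[2]], par)
        print(idx, pkt)  # -> 1 b'datapkt1'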

  18. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
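
    For readers unfamiliar with SIMEX, the sketch below shows the generic (non-spatial) procedure on which the record's spatial SIMEX builds: extra measurement error of increasing variance is simulated, the naive estimator is refit at each level, and a curve fitted to the results is extrapolated back to zero measurement error. It assumes a known error variance and a simple linear model, and is not the spatial procedure proposed in the record.

        import numpy as np

        def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
            """Generic SIMEX for the slope of y ~ w, where w = x + u, u ~ N(0, sigma_u^2).

            1. Simulation: add extra noise of variance lambda * sigma_u^2 and refit.
            2. Extrapolation: fit a quadratic in lambda and evaluate it at lambda = -1,
               i.e. at zero measurement error.
            """
            rng = np.random.default_rng(seed)
            lams, betas = [0.0], [np.polyfit(w, y, 1)[0]]   # naive estimate at lambda = 0
            for lam in lambdas:
                b = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, size=w.shape), y, 1)[0]
                     for _ in range(n_sim)]
                lams.append(lam)
                betas.append(np.mean(b))
            quad = np.polyfit(lams, betas, 2)               # quadratic extrapolant
            return np.polyval(quad, -1.0)                   # SIMEX-corrected slope

        # Toy example: true slope 1.0, attenuated by measurement error in the exposure.
        rng = np.random.default_rng(1)
        x = rng.normal(size=2000)
        y = 1.0 * x + rng.normal(scale=0.5, size=2000)
        w = x + rng.normal(scale=0.8, size=2000)            # error-prone exposure
        print(np.polyfit(w, y, 1)[0], simex_slope(w, y, sigma_u=0.8))  # naive vs corrected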

  19. WISC-R Examiner Errors: Cause for Concern.

    Science.gov (United States)

    Slate, John R.; Chick, David

    1989-01-01

    Clinical psychology graduate students (N=14) administered Wechsler Intelligence Scale for Children-Revised. Found numerous scoring and mechanical errors that influenced full-scale intelligence quotient scores on two-thirds of protocols. Particularly prone to error were Verbal subtests of Vocabulary, Comprehension, and Similarities. Noted specific…

  20. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. To assess this error, we have taken from the available successive satellite data the longest possible time record of each satellite to form the time series during the period 1980 to 1998. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry; these arise from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  1. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutant setting.

  2. The dynamic effect of exchange-rate volatility on Turkish exports: Parsimonious error-correction model approach

    Directory of Open Access Journals (Sweden)

    Demirhan Erdal

    2015-01-01

    Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
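
    To illustrate how a GARCH(1,1) conditional variance can serve as the exchange-rate-volatility proxy mentioned above, the sketch below evaluates the standard variance recursion for an already-estimated parameter set; the parameters and return series are placeholders, and the maximum-likelihood estimation step and the error-correction model itself are not shown.

        import numpy as np

        def garch11_variance(returns, omega, alpha, beta):
            """Conditional variance recursion sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}.

            returns            -- demeaned exchange-rate returns e_t
            omega, alpha, beta -- GARCH(1,1) parameters (assumed already estimated)
            """
            sigma2 = np.empty_like(returns, dtype=float)
            sigma2[0] = np.var(returns)  # common initialization: unconditional sample variance
            for t in range(1, len(returns)):
                sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
            return sigma2

        # Toy example: the fitted conditional variance would then enter the export
        # equation of the error-correction model as the volatility proxy.
        rng = np.random.default_rng(0)
        e = rng.normal(scale=0.02, size=120)  # monthly exchange-rate returns (placeholder)
        vol_proxy = garch11_variance(e, omega=1e-5, alpha=0.1, beta=0.85)
        print(vol_proxy[:5])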

  3. Ironic Effects of Drawing Attention to Story Errors

    Science.gov (United States)

    Eslick, Andrea N.; Fazio, Lisa K.; Marsh, Elizabeth J.

    2014-01-01

    Readers learn errors embedded in fictional stories and use them to answer later general knowledge questions (Marsh, Meade, & Roediger, 2003). Suggestibility is robust and occurs even when story errors contradict well-known facts. The current study evaluated whether suggestibility is linked to participants’ inability to judge story content as correct versus incorrect. Specifically, participants read stories containing correct and misleading information about the world; some information was familiar (making error discovery possible), while some was more obscure. To improve participants’ monitoring ability, we highlighted (in red font) a subset of story phrases requiring evaluation; readers no longer needed to find factual information. Rather, they simply needed to evaluate its correctness. Readers were more likely to answer questions with story errors if they were highlighted in red font, even if they contradicted well-known facts. Though highlighting to-be-evaluated information freed cognitive resources for monitoring, an ironic effect occurred: Drawing attention to specific errors increased rather than decreased later suggestibility. Failure to monitor for errors, not failure to identify the information requiring evaluation, leads to suggestibility. PMID:21294039

  4. Correcting the error in neutron moisture probe measurements caused by a water density gradient

    International Nuclear Information System (INIS)

    Wilson, D.J.

    1988-01-01

    If a neutron probe lies in or near a water density gradient, the probe may register a water density different to that at the measuring point. The effect of a thin stratum of soil containing an excess or depletion of water at various distances from a probe in an otherwise homogeneous system has been calculated, producing an 'importance' curve. The effect of these strata can be integrated over the soil region in close proximity to the probe resulting in the net effect of the presence of a water density gradient. In practice, the probe is scanned through the point of interest and the count rate at that point is corrected for the influence of the water density on each side of it. An example shows that the technique can reduce an error of 10 per cent to about 2 per cent

  5. Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm

    Directory of Open Access Journals (Sweden)

    J. I. Colless

    2018-02-01

    Full Text Available Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H₂ molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.

  6. Making related errors facilitates learning, but learners do not know it.

    Science.gov (United States)

    Huelser, Barbie J; Metcalfe, Janet

    2012-05-01

    Producing an error, so long as it is followed by corrective feedback, has been shown to result in better retention of the correct answers than does simply studying the correct answers from the outset. The reasons for this surprising finding, however, have not been investigated. Our hypothesis was that the effect might occur only when the errors produced were related to the targeted correct response. In Experiment 1, participants studied either related or unrelated word pairs, manipulated between participants. Participants either were given the cue and target to study for 5 or 10 s or generated an error in response to the cue for the first 5 s before receiving the correct answer for the final 5 s. When the cues and targets were related, error-generation led to the highest correct retention. However, consistent with the hypothesis, no benefit was derived from generating an error when the cue and target were unrelated. Latent semantic analysis revealed that the errors generated in the related condition were related to the target, whereas they were not related to the target in the unrelated condition. Experiment 2 replicated these findings in a within-participants design. We found, additionally, that people did not know that generating an error enhanced memory, even after they had just completed the task that produced substantial benefits.

  7. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 2; Revised

    Science.gov (United States)

    Mueller, James L. (Editor); Fargion, Giulietta S. (Editor); Trees, C.; Austin, R. W.; Pietras, C. (Editor); Hooker, S.; Holben, B.; McClain, Charles R.; Clark, D. K.; Yuen, M.

    2002-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the SIMBIOS Project. It supersedes the earlier version, and is organized into four parts: Introductory Background, Instrument Characteristics, Field Measurements and Data Analysis, Data Reporting and Archival. Changes in this revision include the addition of three new chapters: (1) Fundamental Definitions, Relationships and Conventions; (2) MOBY, A Radiometric Buoy for Performance Monitoring and Vicarious Calibration of Satellite Ocean Color Sensors: Measurement and Data Analysis Protocols; and (3) Normalized Water-Leaving Radiance and Remote Sensing Reflectance: Bidirectional Reflectance and Other Factors. Although the present document represents another significant, incremental improvement in the ocean optics protocols, there are several protocols that have either been overtaken by recent technological progress, or have been otherwise identified as inadequate. Revision 4 is scheduled for completion sometime in 2003. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational Project. The contributions are published as submitted, after only minor editing to correct obvious grammatical or clerical errors.

  8. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Volume 1; Revised

    Science.gov (United States)

    Mueller, James L. (Editor); Fargion, Giulietta (Editor); Mueller, J. L.; Trees, C.; Austin, R. W.; Pietras, C.; Hooker, S.; Holben, B.; McClain, Charles R.; Clark, D. K.

    2002-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the SIMBIOS Project. It supersedes the earlier version, and is organized into four parts: Introductory Background, Instrument Characteristics, Field Measurements and Data Analysis, Data Reporting and Archival. Changes in this revision include the addition of three new chapters: (1) Fundamental Definitions, Relationships and Conventions; (2) MOBY, A Radiometric Buoy for Performance Monitoring and Vicarious Calibration of Satellite Ocean Color Sensors: Measurement and Data Analysis Protocols; and (3) Normalized Water-Leaving Radiance and Remote Sensing Reflectance: Bidirectional Reflectance and Other Factors. Although the present document represents another significant, incremental improvement in the ocean optics protocols, there are several protocols that have either been overtaken by recent technological progress, or have been otherwise identified as inadequate. Revision 4 is scheduled for completion sometime in 2003. This technical report is not meant as a substitute for scientific literature. Instead, it will provide a ready and responsive vehicle for the multitude of technical reports issued by an operational Project. The contributions are published as submitted, after only minor editing to correct obvious grammatical or clerical errors.

  9. A two-phase sampling survey for nonresponse and its paradata to correct nonresponse bias in a health surveillance survey.

    Science.gov (United States)

    Santin, G; Bénézet, L; Geoffroy-Perez, B; Bouyer, J; Guéguen, A

    2017-02-01

    The decline in participation rates in surveys, including epidemiological surveillance surveys, has become a real concern since it may increase nonresponse bias. The aim of this study is to estimate the contribution of a complementary survey among a subsample of nonrespondents, and the additional contribution of paradata, in correcting for nonresponse bias in an occupational health surveillance survey. In 2010, 10,000 workers were randomly selected and sent a postal questionnaire. Sociodemographic data were available for the whole sample. After data collection of the questionnaires, a complementary survey among a random subsample of 500 nonrespondents was performed using a questionnaire administered by an interviewer. Paradata were collected for the complete subsample of the complementary survey. Nonresponse bias in the initial sample and in the combined samples was assessed using variables from administrative databases available for the whole sample and not subject to differential measurement errors. Corrected prevalences were estimated by a reweighting technique, first using the initial survey alone and then the initial and complementary surveys combined, under several assumptions regarding the missing-data process. Results were compared by computing relative errors. The response rates of the initial and complementary surveys were 23.6% and 62.6%, respectively. For both the initial and the combined surveys, the relative errors decreased after correction for nonresponse on sociodemographic variables. For the combined surveys without paradata, relative errors decreased compared with the initial survey. The contribution of the paradata was weak. When a complex descriptive survey has a low response rate, a short complementary survey among nonrespondents, with a protocol that aims to maximize the response rate, is useful. The contribution of sociodemographic variables in correcting for nonresponse bias is important, whereas the additional contribution of paradata is weak.
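
    The reweighting idea can be illustrated with a synthetic inverse-propensity example; the covariate, response model and outcome below are invented for the sketch and are not the study's data or its two-survey estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic frame: one sociodemographic covariate known for the whole sample
n = 10_000
age = rng.normal(45, 10, n)
health_outcome = rng.binomial(1, 1 / (1 + np.exp(-(age - 45) / 10)))  # prevalence rises with age
responded = rng.binomial(1, 1 / (1 + np.exp((age - 45) / 8)))         # younger workers respond more

# Naive estimate uses respondents only and is biased
naive = health_outcome[responded == 1].mean()

# Reweighting: model the response propensity from covariates known for everyone,
# then weight each respondent by the inverse of its estimated propensity.
propensity_model = LogisticRegression().fit(age.reshape(-1, 1), responded)
p_hat = propensity_model.predict_proba(age.reshape(-1, 1))[:, 1]
w = 1.0 / p_hat[responded == 1]
corrected = np.average(health_outcome[responded == 1], weights=w)

print(f"true={health_outcome.mean():.3f} naive={naive:.3f} corrected={corrected:.3f}")
```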

  10. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims to reduce imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating such food with errors. First, participants undergo an error task in which two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method selectively decreases the discount rate for future amounts of the edible reward that cued the higher error likelihood. This effect is modulated by the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, and can thus be useful in reducing impatient choices for edible commodities.
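
    The "discount rate" referred to above is typically estimated by fitting a discounting model to indifference points; a minimal sketch using Mazur's hyperbolic form V = A/(1 + kD) with made-up data is shown below (the paper's exact model and data are not given in the abstract).

```python
import numpy as np
from scipy.optimize import curve_fit

# Mazur's hyperbolic model: subjective value V = A / (1 + k * D), here with A normalised to 1
def hyperbolic(delay, k):
    return 1.0 / (1.0 + k * delay)

# Hypothetical indifference points (fraction of immediate value) at several delays (days)
delays = np.array([1, 7, 30, 90, 180], dtype=float)
indiff_high_error = np.array([0.95, 0.90, 0.80, 0.70, 0.62])  # food cue paired with many errors
indiff_low_error = np.array([0.90, 0.78, 0.55, 0.38, 0.27])   # food cue paired with few errors

k_high, _ = curve_fit(hyperbolic, delays, indiff_high_error, p0=[0.01])
k_low, _ = curve_fit(hyperbolic, delays, indiff_low_error, p0=[0.01])

# A smaller k means shallower discounting, i.e. more patient choices
print(f"k (high-error cue) = {k_high[0]:.4f}, k (low-error cue) = {k_low[0]:.4f}")
```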

  11. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image obtained with the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, and Chang's method were strongly affected by the accuracy of the contours, as compared to those obtained with Sorenson's method. To evaluate the effect of non-uniform attenuators on cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in the UAM (11%). However, a 20 to 30 percent increase in %ERROR was observed for the NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels observed for the UAM. Finally, a comparison between images obtained by 180 deg and 360 deg scans and reconstructed with the RPC method showed that the degree of distortion of the contour of the simulated ventricles in the 180 deg scan was 15% higher than that in the 360 deg scan. (Namekawa, K.)
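
    For readers unfamiliar with the methods being compared, the sketch below implements a first-order Chang-style post-correction for a uniform circular attenuator using an average attenuation coefficient, as mentioned in the abstract; it is an illustrative simplification, not the authors' RPC or WBP implementations.

```python
import numpy as np

def chang_correction(nx=64, radius=10.0, mu=0.12, n_angles=64):
    """First-order Chang correction factors for a uniform circular attenuator.

    mu: average linear attenuation coefficient in 1/cm (e.g. ~0.12/cm for Tc-99m),
    radius: phantom radius in cm. Returns an (nx, nx) map of multiplicative factors.
    """
    xs = np.linspace(-radius, radius, nx)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)

    attn = np.zeros_like(X)
    for th in thetas:
        ux, uy = np.cos(th), np.sin(th)
        p_dot_u = X * ux + Y * uy
        # distance from each pixel to the phantom boundary along direction theta
        disc = np.maximum(p_dot_u**2 - (X**2 + Y**2) + radius**2, 0.0)
        path = -p_dot_u + np.sqrt(disc)
        attn += np.exp(-mu * path)
    attn /= n_angles  # average attenuation factor over all projection angles

    return np.where(inside, 1.0 / attn, 1.0)

corr = chang_correction()
print("central correction factor:", corr[32, 32])  # largest near the phantom centre
```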

  12. Prevalence of refractive errors in the European adult population: the Gutenberg Health Study (GHS).

    Science.gov (United States)

    Wolfram, Christian; Höhn, René; Kottler, Ulrike; Wild, Philipp; Blettner, Maria; Bühren, Jens; Pfeiffer, Norbert; Mirshahi, Alireza

    2014-07-01

    To study the distribution of refractive errors among adults of European descent. Population-based eye study in Germany with 15010 participants aged 35-74 years. The study participants underwent a detailed ophthalmic examination according to a standardised protocol. Refractive error was determined by an automatic refraction device (Humphrey HARK 599) without cycloplegia. Definitions for the analysis were myopia <-0.5 D, hyperopia >+0.5 D, astigmatism >0.5 cylinder D and anisometropia >1.0 D difference in the spherical equivalent between the eyes. Exclusion criterion was previous cataract or refractive surgery. 13959 subjects were eligible. Refractive errors ranged from -21.5 to +13.88 D. Myopia was present in 35.1% of this study sample, hyperopia in 31.8%, astigmatism in 32.3% and anisometropia in 13.5%. The prevalence of myopia decreased, while the prevalence of hyperopia, astigmatism and anisometropia increased with age. 3.5% of the study sample had no refractive correction for their ametropia. Refractive errors affect the majority of the population. The Gutenberg Health Study sample contains more myopes than other study cohorts in adult populations. Our findings do not support the hypothesis of a generally lower prevalence of myopia among adults in Europe as compared with East Asia.

  13. Epistemic Protocols for Distributed Gossiping

    Directory of Open Access Journals (Sweden)

    Krzysztof R. Apt

    2016-06-01

    Full Text Available Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets. We consider distributed gossip protocols which are expressed by means of epistemic logic. We provide an operational semantics of such protocols and set up an appropriate framework to argue about their correctness. Then we analyze specific protocols for complete graphs and for directed rings.
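
    As a point of reference for the problem these protocols solve, the following naive (non-epistemic) sketch simulates gossip on a complete graph, stopping once every agent knows every secret; the paper's distributed epistemic protocols decide locally when to call, which this toy schedule does not.

```python
import itertools

def round_robin_gossip(n):
    """A naive gossip schedule on the complete graph: every pair calls once.

    Each agent starts knowing only its own secret; a call merges the two
    agents' sets of known secrets. Returns the calls actually made before
    everyone knows everything.
    """
    secrets = [{i} for i in range(n)]
    calls = []
    for i, j in itertools.combinations(range(n), 2):
        merged = secrets[i] | secrets[j]
        secrets[i] = secrets[j] = merged
        calls.append((i, j))
        if all(len(s) == n for s in secrets):
            break
    return calls

calls = round_robin_gossip(5)
print(f"{len(calls)} calls:", calls)
```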

  14. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    Energy Technology Data Exchange (ETDEWEB)

    Moss, A.R.L

    2000-07-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
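
    For context, the UPF that the transmittance accuracy feeds into is a weighted mean over 290-400 nm; the standard formulation (a textbook restatement of the AS/NZS 4399 definition, not quoted from the abstract) is:

```latex
\mathrm{UPF} \;=\; \frac{\displaystyle\sum_{\lambda=290\,\mathrm{nm}}^{400\,\mathrm{nm}} E(\lambda)\,S(\lambda)\,\Delta\lambda}
                        {\displaystyle\sum_{\lambda=290\,\mathrm{nm}}^{400\,\mathrm{nm}} E(\lambda)\,S(\lambda)\,T(\lambda)\,\Delta\lambda}
```

    where E(λ) is the erythemal action spectrum, S(λ) the solar spectral irradiance and T(λ) the measured fabric transmittance. Because the UPF is essentially an inverse weighted transmittance, a small absolute error in T at low transmittance translates into a large error in the quoted UPF.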

  15. Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics

    International Nuclear Information System (INIS)

    Moss, A.R.L.

    2000-01-01

    Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)

  16. Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT.

    Science.gov (United States)

    Shi, Linxi; Tsui, Tiffany; Wei, Jikun; Zhu, Lei

    2017-05-01

    avoids false structures in the corrected CBCT even when the maximum registration error is as high as 8 mm. We develop an effective shading correction algorithm for CBCT readily implementable on clinical data as a software plug-in without modifications of current imaging hardware and protocol. The algorithm is directly applied on the output images from a commercial CBCT scanner with high computational efficiency and negligible memory burden. © 2017 American Association of Physicists in Medicine.
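
    Although only the tail of this abstract is shown, the core idea (estimating a low-frequency shading field from sparse samples of the difference between the CBCT and the registered planning CT) can be sketched as follows; the function names and parameters are illustrative, and the published method additionally handles registration error and anatomical masking.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shading_correct(cbct, planning_ct_registered, sigma=20, sample_stride=8):
    """Minimal sketch of planning-CT-guided shading correction.

    Sparse samples of the CBCT-minus-pCT difference are smoothed into a
    low-frequency shading field, which is then subtracted from the CBCT.
    """
    diff = cbct - planning_ct_registered
    # sparse sampling: keep every sample_stride-th pixel of the difference
    sparse = np.zeros_like(diff)
    sparse[::sample_stride, ::sample_stride] = diff[::sample_stride, ::sample_stride]
    # normalised smoothing of the sparse field (normalised convolution)
    mask = np.zeros_like(diff)
    mask[::sample_stride, ::sample_stride] = 1.0
    shading = gaussian_filter(sparse, sigma) / np.maximum(gaussian_filter(mask, sigma), 1e-6)
    return cbct - shading

# toy usage with synthetic 2-D slices: the CBCT carries a constant shading offset
cbct = np.random.default_rng(1).normal(0, 1, (128, 128)) + 50.0
pct = np.random.default_rng(1).normal(0, 1, (128, 128))
corrected = shading_correct(cbct, pct)
print("residual mean offset:", corrected.mean() - pct.mean())
```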

  17. A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA

    International Nuclear Information System (INIS)

    Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.

    2013-01-01

    This paper describes a conceptual design study for the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) intended to maximize the expected performance at reasonable cost in JT-60SA. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each rated for 30 kA-turn. As a result, a star-point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction, to decouple them from the poloidal field coils. In addition, a six-phase inverter capable of controlling each phase current was chosen as the PS topology to ensure greater operational flexibility at reasonable cost.

  18. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
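
    The link error monitoring procedure analysed here is, in essence, a leaky-bucket counter; the toy sketch below uses the customary ITU-T Q.703 figures (threshold 64, leak every 256 signal units) but omits octet-counting mode and proving periods, so treat it as illustrative only.

```python
import random

def suerm(signal_units, threshold=64, decrement_interval=256):
    """Toy signal unit error rate monitor (leaky-bucket style).

    signal_units: iterable of booleans, True meaning the SU was received in error.
    Returns the index at which the link would be taken out of service, or None.
    """
    counter = 0
    since_decrement = 0
    for idx, in_error in enumerate(signal_units):
        if in_error:
            counter += 1
            if counter >= threshold:
                return idx
        since_decrement += 1
        if since_decrement == decrement_interval:
            counter = max(counter - 1, 0)
            since_decrement = 0
    return None

# at ~0.5% SU error rate the counter grows faster than it leaks and eventually trips
random.seed(0)
stream = (random.random() < 0.005 for _ in range(200_000))
print("link failure at SU index:", suerm(stream))
```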

  19. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    Science.gov (United States)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results found in this research were the following. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. The shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-correction mechanism.

  20. SimCommSys: taking the errors out of error-correcting code simulations

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
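
    To show the kind of experiment such a simulator automates, without guessing at the SimCommSys API itself, here is a self-contained Monte Carlo bit-error-rate estimate for a rate-1/3 repetition code over BPSK/AWGN.

```python
import numpy as np

def simulate_repetition_code(n_rep=3, ebn0_db=4.0, n_bits=200_000, seed=0):
    """Framework-agnostic Monte Carlo BER estimate for a rate-1/n repetition code
    over BPSK/AWGN -- the kind of experiment SimCommSys runs at scale."""
    rng = np.random.default_rng(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    # each information bit costs n_rep channel uses, so Es/N0 = Eb/N0 / n_rep
    noise_std = np.sqrt(1 / (2 * ebn0 / n_rep))

    bits = rng.integers(0, 2, n_bits)
    symbols = np.repeat(1 - 2 * bits, n_rep)                  # BPSK: 0 -> +1, 1 -> -1
    received = symbols + rng.normal(0, noise_std, symbols.size)
    # soft majority decision: sum the n_rep noisy copies of each bit
    decisions = received.reshape(n_bits, n_rep).sum(axis=1) < 0
    return np.mean(decisions != bits)

print("estimated BER:", simulate_repetition_code())
```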