Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now form a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
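The dual-containing constraint mentioned above is easy to state concretely: a classical code with parity-check matrix H yields a standard CSS quantum code only if H H^T ≡ 0 (mod 2). A minimal NumPy sketch of this check, using the classical [7,4] Hamming code (a textbook dual-containing code, chosen here as an illustration and not taken from the paper):

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code:
# its columns are all nonzero 3-bit vectors.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Dual-containing condition: every pair of rows of H is orthogonal mod 2,
# i.e. H @ H.T == 0 (mod 2). Codes satisfying it yield standard CSS
# quantum codes (here, the Steane code).
gram = H @ H.T % 2
print((gram == 0).all())   # True -> constraint satisfied
```

Entanglement-assisted codes remove exactly this orthogonality requirement, which is why arbitrary classical linear codes, including modern capacity-approaching ones, can be "quantized".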
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
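The qualitative difference between coherent and Pauli-twirled noise can be seen already for a single unencoded qubit: under repeated identical rotations, amplitudes add and the flip probability grows quadratically in the number of cycles, whereas a Pauli model adds probabilities. A small illustrative sketch (the angle and cycle count are arbitrary choices for illustration, not values from the paper):

```python
import numpy as np

eps, n = 0.02, 50          # rotation angle per cycle, number of cycles

# Coherent error: n identical X-rotations compose to a rotation by n*eps,
# so amplitudes add and the flip probability grows ~quadratically in n.
p_coherent = np.sin(n * eps / 2) ** 2

# Pauli-twirled model: each cycle independently flips the qubit with
# p = sin^2(eps/2); probabilities (not amplitudes) add, giving roughly
# linear growth for small n*p.
p = np.sin(eps / 2) ** 2
p_pauli = 0.5 * (1 - (1 - 2 * p) ** n)   # exact for i.i.d. bit flips

print(f"coherent: {p_coherent:.4f}  pauli: {p_pauli:.4f}")
```

For these parameters the coherent flip probability exceeds the Pauli-model prediction by more than an order of magnitude, which is the single-qubit analogue of the regime where the Pauli approximation breaks down.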
Iterative optimization of quantum error correcting codes
International Nuclear Information System (INIS)
Reimpell, M.; Werner, R.F.
2005-01-01
We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step.
Tensor Networks and Quantum Error Correction
Ferris, Andrew J.; Poulin, David
2014-07-01
We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is therefore highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods, that is, formulations that incorporate experimental challenges. The thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process before contemplating methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part spans Chapters 4 through 7. In Chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC)
Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation
International Nuclear Information System (INIS)
Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.
2003-01-01
The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown, and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm.
Quantum Information Processing and Quantum Error Correction An Engineering Approach
Djordjevic, Ivan
2012-01-01
Quantum Information Processing and Quantum Error Correction is a self-contained, tutorial-based introduction to quantum information, quantum computation, and quantum error correction. Assuming no knowledge of quantum mechanics and written at an intuitive level suitable for the engineer, the book gives all the essential principles needed to design and implement quantum electronic and photonic circuits. Numerous examples from a wide area of application are given to show how the principles can be implemented in practice. This book is ideal for the electronics, photonics, and computer engineer.
Black Holes, Holography, and Quantum Error Correction
CERN. Geneva
2017-01-01
How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions? How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator? Why do such things happen only in gravitational theories? In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence. No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.
Quantum Error Correction and Fault Tolerant Quantum Computing
Gaitan, Frank
2008-01-01
It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment, yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impossible.
Experimental quantum error correction with high fidelity
International Nuclear Information System (INIS)
Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond
2011-01-01
More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ~ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as pulses generated by the gradient ascent pulse engineering (GRAPE) algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
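The ε to ~ε² improvement quoted above can be reproduced analytically in the simplest setting, the three-qubit bit-flip code under independent flips, where correction fails exactly when two or more qubits flip. A short sketch (a textbook calculation, not a model of the NMR experiment itself):

```python
# Logical error rate of the three-qubit bit-flip code under independent
# bit flips with probability eps: majority-vote correction fails when
# at least 2 of the 3 qubits flip.
def logical_error(eps):
    return 3 * eps**2 * (1 - eps) + eps**3    # = 3*eps^2 - 2*eps^3

for eps in (0.1, 0.01, 0.001):
    # For small eps this scales as ~3*eps^2, i.e. below eps itself,
    # which is the comparative advantage of QEC.
    print(eps, logical_error(eps))
```

The break-even point is logical_error(eps) = eps, i.e. eps = 0.5 for this code; below it, encoding helps, which is the "comparative advantage" the experiment quantifies.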
Quantum secret sharing based on quantum error-correcting codes
International Nuclear Information System (INIS)
Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu
2011-01-01
Quantum secret sharing (QSS) is a procedure for sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because the information is encoded into a QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)
Quantum algorithms and quantum maps - implementation and error correction
International Nuclear Information System (INIS)
Alber, G.; Shepelyansky, D.
2005-01-01
Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)
Quantum error correction with spins in diamond
Cramer, J.
2016-01-01
Digital information based on the laws of quantum mechanics promises powerful new ways of computation and communication. However, quantum information is very fragile; inevitable errors continuously build up and eventually all information is lost. Therefore, realistic large-scale quantum information
Autonomous Quantum Error Correction with Application to Quantum Metrology
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Operator quantum error-correcting subsystems for self-correcting quantum memories
International Nuclear Information System (INIS)
Bacon, Dave
2006-01-01
The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system would be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures.
Topics in quantum cryptography, quantum error correction, and channel simulation
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information.
Unitary Application of the Quantum Error Correction Codes
International Nuclear Information System (INIS)
You Bo; Xu Ke; Wu Xiaohua
2012-01-01
For applying the perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In the present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, allowing us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
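The exponential search space behind this hardness result is easy to exhibit: a generic maximum-likelihood decoder must sift through all 2^n error patterns consistent with a syndrome. A brute-force sketch for X errors, using the [7,4] Hamming parity-check matrix as a stand-in for one half of a CSS code (illustrative only; degeneracy makes the true quantum problem harder still, since optimal decoding sums probabilities over cosets rather than picking one minimum-weight pattern):

```python
from itertools import product
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (X-error half of a CSS code).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def brute_force_decode(syndrome):
    """Exhaustively search all 2^7 X-error patterns for the lowest-weight
    one matching the syndrome -- exponential in the number of qubits."""
    best = None
    for e in product((0, 1), repeat=7):
        if ((H @ e) % 2 == syndrome).all():
            if best is None or sum(e) < sum(best):
                best = e
    return best

e = np.zeros(7, int)
e[4] = 1                                   # a single X error on qubit 4
print(brute_force_decode((H @ e) % 2))     # recovers the weight-1 pattern
```

For this toy code the 2^7 = 128 candidates are trivial to enumerate; the NP-hardness result says that for general codes no decoder is expected to do fundamentally better than this kind of exhaustive search.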
Entanglement and Quantum Error Correction with Superconducting Qubits
Reed, Matthew
2015-03-01
Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
Entanglement renormalization, quantum error correction, and bulk causality
Energy Technology Data Exchange (ETDEWEB)
Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2017-04-07
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively better protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Quantum error-correcting code for ternary logic
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, the qutrit. A qutrit has three basis states; thus a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
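The shift and phase errors referred to above are the qutrit generalizations of the Pauli X and Z, satisfying the Weyl relation ZX = ωXZ with ω a cube root of unity; their products X^a Z^b span the full (3×3)-dimensional error basis. A small NumPy check of these standard definitions (not code from the paper):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                 # primitive cube root of unity

# Generalized Pauli operators on a qutrit: shift and clock (phase).
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)   # X|j> = |j+1 mod 3>
Z = np.diag([1, w, w**2])                  # Z|j> = w^j |j>

# Weyl commutation relation Z X = w X Z, and X^3 = Z^3 = I.
print(np.allclose(Z @ X, w * (X @ Z)))     # True
print(np.allclose(np.linalg.matrix_power(X, 3), np.eye(3)))  # True
```

Any single-qutrit error can be expanded in the nine operators X^a Z^b (a, b = 0, 1, 2), which is why correcting shift and phase errors suffices for correcting arbitrary single-qutrit errors.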
Black Hole Entanglement and Quantum Error Correction
Verlinde, E.; Verlinde, H.
2013-01-01
It was recently argued in [1] that black hole complementarity strains the basic rules of quantum information theory, such as monogamy of entanglement. Motivated by this argument, we develop a practical framework for describing black hole evaporation via unitary time evolution, based on a holographic
Error Correction for Non-Abelian Topological Quantum Computation
Directory of Open Access Journals (Sweden)
James R. Wootton
2014-03-01
The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Continuous quantum error correction for non-Markovian decoherence
International Nuclear Information System (INIS)
Oreshkov, Ognyan; Brun, Todd A.
2007-01-01
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Neural network decoder for quantum error correcting codes
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
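What such a decoder network must learn can be made concrete in the smallest case: for the three-qubit bit-flip code, the syndrome-to-correction map fits in a four-entry table, and a neural network replaces this table for codes too large to enumerate. A hypothetical minimal sketch using classical bits as a stand-in for measured qubit values (not the authors' recurrent architecture):

```python
# For the three-qubit bit-flip code, stabilizers Z1Z2 and Z2Z3 give a
# two-bit syndrome; a decoder maps each syndrome to the most likely
# correction. A neural decoder learns this mapping from repeated
# stabilizer measurements when the code is too large to tabulate.
SYNDROME_TABLE = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip on qubit 0
    (1, 1): 1,      # flip on qubit 1
    (0, 1): 2,      # flip on qubit 2
}

def decode(bits):
    """Return the bit string after syndrome-based correction."""
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    fix = SYNDROME_TABLE[s]
    out = list(bits)
    if fix is not None:
        out[fix] ^= 1
    return out

print(decode([0, 1, 0]))   # single flip on qubit 1 -> [0, 0, 0]
```

A trained decoder approximates this table from sampled (syndrome, error) pairs; for LDPC-like codes, where no efficient exact decoder is known, such a learned approximation is the point of the paper.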
Design of nanophotonic circuits for autonomous subsystem quantum error correction
Energy Technology Data Exchange (ETDEWEB)
Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)
2011-05-15
We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.
Topological quantum error correction in the Kitaev honeycomb model
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Efficient one-way quantum computations for quantum error correction
International Nuclear Information System (INIS)
Huang Wei; Wei Zhaohui
2009-01-01
We show how to explicitly construct an O(nd)-size, constant-depth quantum circuit that encodes any given n-qubit stabilizer code with d generators. Our construction is derived using the graphical description of stabilizer codes and the one-way quantum computation model. Our result demonstrates how to use cluster states as scalable resources for many multi-qubit entangled states and how to use the one-way quantum computation model to improve the design of quantum algorithms.
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multipartite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are maximally mixed. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error-correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension …
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics
International Nuclear Information System (INIS)
Sarovar, Mohan; Young, Kevin C
2013-01-01
While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)
Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes
International Nuclear Information System (INIS)
Huber, Felix Michael
2017-01-01
At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain legitimacy conditions of a set of states to be the reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.
Bound on quantum computation time: Quantum error correction in a critical environment
International Nuclear Information System (INIS)
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-01-01
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present …
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
International Nuclear Information System (INIS)
Horoshko, D B
2007-01-01
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)
Remote one-qubit information concentration and decoding of operator quantum error-correction codes
International Nuclear Information System (INIS)
Hsu Liyi
2007-01-01
We propose a general scheme of remote one-qubit information concentration. To achieve the task, Bell-correlated mixed states are exploited. In addition, nonremote one-qubit information concentration is equivalent to the decoding of a quantum error-correction code. Here we propose how to decode stabilizer codes. In particular, the proposed scheme can be used for operator quantum error-correction codes. The encoded state can be recreated on the errorless qubit, regardless of how many bit-flip and phase-flip errors have occurred.
Quantum mean-field decoding algorithm for error-correcting codes
International Nuclear Information System (INIS)
Inoue, Jun-ichi; Saika, Yohei; Okada, Masato
2009-01-01
We numerically examine a quantum version of the TAP (Thouless-Anderson-Palmer) mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly, and we evaluate the average-case performance through the bit-error rate (BER).
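The bit-error rate used above as the figure of merit can be made concrete with a much simpler scheme: a majority-decoded 3-bit repetition code over a binary symmetric channel. This closed-form sketch is only an illustration of the BER metric, not of the Sourlas codes or the TAP decoder themselves.

```python
def ber_rep3(p: float) -> float:
    """Post-decoding bit-error rate of a 3-bit repetition code under a BSC
    with crossover probability p: decoding fails iff >= 2 of 3 bits flip."""
    return 3 * p**2 * (1 - p) + p**3

print(ber_rep3(0.1))  # ~0.028, well below the raw channel error rate of 0.1
```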
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
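The photon-number-parity error detection that all three codes share can be sketched in a truncated Fock space. The even-parity state below is a generic illustration, not one of the χ(2) codes from the paper.

```python
import numpy as np

dim = 6                      # truncated Fock space
nvec = np.arange(dim)
parity = (-1.0) ** nvec      # photon-number parity (-1)^n, diagonal in the Fock basis

# A generic even-parity superposition (|0> + |4>)/sqrt(2); any single photon
# loss moves it into the odd-parity subspace.
psi = np.zeros(dim)
psi[0] = psi[4] = 1 / np.sqrt(2)

a = np.diag(np.sqrt(nvec[1:]), k=1)  # annihilation operator on the truncated space
lost = a @ psi
lost /= np.linalg.norm(lost)

print(psi @ (parity * psi))    # ~+1: no error
print(lost @ (parity * lost))  # ~-1: photon loss flagged by a parity measurement
```

A parity measurement thus detects the loss without reading out the photon number itself, which is what preserves the encoded information.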
Methodology for bus layout for topological quantum error correcting codes
Energy Technology Data Exchange (ETDEWEB)
Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)
2016-12-15
Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers; we use them as basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that requires solving the linear program for only a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
Optimally combining dynamical decoupling and quantum error correction.
Paz-Silva, Gerardo A; Lidar, D A
2013-01-01
Quantum control and fault-tolerant quantum computing (FTQC) are two of the cornerstones on which the hope of realizing a large-scale quantum computer is pinned, yet only preliminary steps have been taken towards formalizing the interplay between them. Here we explore this interplay using the powerful strategy of dynamical decoupling (DD), and show how it can be seamlessly and optimally integrated with FTQC. To this end we show how to find the optimal decoupling generator set (DGS) for various subspaces relevant to FTQC, and how to simultaneously decouple them. We focus on stabilizer codes, which represent the largest contribution to the size of the DGS, showing that the intuitive choice comprising the stabilizers and logical operators of the code is in fact optimal, i.e., minimizes a natural cost function associated with the length of DD sequences. Our work brings hybrid DD-FTQC schemes, and their potentially considerable advantages, closer to realization.
Quantum Error Correction: Optimal, Robust, or Adaptive? Or, Where is The Quantum Flyball Governor?
Kosut, Robert; Grace, Matthew
2012-02-01
In The Human Use of Human Beings: Cybernetics and Society (1950), Norbert Wiener introduces feedback control in this way: ``This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback ... It is the function of control ... to produce a temporary and local reversal of the normal direction of entropy.'' The classic classroom example of feedback control is the all-mechanical flyball governor used by James Watt in the 18th century to regulate the speed of rotating steam engines. What is it that is so compelling about this apparatus? First, it is easy to understand how it regulates the speed of a rotating steam engine. Second, and perhaps more importantly, it is a part of the device itself. A naive observer would not distinguish this mechanical piece from all the rest. So it is natural to ask: where is the all-quantum device which is self-regulating, i.e., the quantum flyball governor? Is the goal of quantum error correction (QEC) to design such a device? Developing the computational and mathematical tools to design this device is the topic of this talk.
Passive quantum error correction of linear optics networks through error averaging
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks based on unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.
An integrity measure to benchmark quantum error correcting memories
Xu, Xiaosi; de Beaudrap, Niel; O'Gorman, Joe; Benjamin, Simon C.
2018-02-01
Rapidly developing experiments across multiple platforms now aim to realise small quantum codes, and so demonstrate a memory within which a logical qubit can be protected from noise. There is a need to benchmark the achievements in these diverse systems, and to compare the inherent power of the codes they rely upon. We describe a recently introduced performance measure called integrity, which relates to the probability that an ideal agent will successfully ‘guess’ the state of a logical qubit after a period of storage in the memory. Integrity is straightforward to evaluate experimentally without state tomography and it can be related to various established metrics such as the logical fidelity and the pseudo-threshold. We offer a set of experimental milestones that are steps towards demonstrating unconditionally superior encoded memories. Using intensive numerical simulations we compare memories based on the five-qubit code, the seven-qubit Steane code, and a nine-qubit code which is the smallest instance of a surface code; we assess both the simple and fault-tolerant implementations of each. While the ‘best’ code upon which to base a memory does vary according to the nature and severity of the noise, nevertheless certain trends emerge.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
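The requirement that the two matrices be orthogonal is the standard CSS condition, which can be checked directly. The matrices below are a tiny hypothetical example, not the quasi-cyclic matrices of the paper.

```python
import numpy as np

# Hypothetical small binary parity-check matrices satisfying the CSS
# orthogonality condition Hc @ Hd.T = 0 (mod 2), so that the X-type and
# Z-type checks commute.
Hc = np.array([[1, 1, 1, 1, 0, 0],
               [0, 0, 1, 1, 1, 1]])
Hd = np.array([[1, 1, 0, 0, 1, 1]])

assert not ((Hc @ Hd.T) % 2).any()  # the two check matrices are orthogonal

# Syndrome of a single Pauli-X error on qubit 2, extracted by the Hc checks
x_error = np.zeros(6, dtype=int)
x_error[2] = 1
syndrome = (Hc @ x_error) % 2
print(syndrome.tolist())  # [1, 1]: both checks fire, flagging the error
```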
Directory of Open Access Journals (Sweden)
Lang, Nicolai; Büchler, Hans Peter
2018-01-01
Active quantum error correction on topological codes is one of the most promising routes to long-term qubit storage. In view of future applications, the scalability of the decoding algorithms used in physical implementations is crucial. In this work, we focus on the one-dimensional Majorana chain and construct a strictly local decoder based on a self-dual cellular automaton. We study numerically and analytically its performance and exploit these results to contrive a scalable decoder with exponentially growing decoherence times in the presence of noise. Our results pave the way for scalable and modular designs of actively corrected one-dimensional topological quantum memories.
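The flavour of a strictly local cellular-automaton decoder can be conveyed with a toy rule: a plain majority vote on a classical repetition code, far simpler than the self-dual automaton constructed in the paper.

```python
def majority_step(bits):
    """One synchronous update: each cell takes the majority of itself and
    its two neighbours (periodic boundary) - a strictly local rule."""
    n = len(bits)
    return [int(bits[(i - 1) % n] + bits[i] + bits[(i + 1) % n] >= 2)
            for i in range(n)]

noisy = [0] * 10
noisy[3] = 1                   # an isolated bit-flip defect
print(majority_step(noisy))    # all zeros: one local update removes the defect
```

Because every cell consults only its neighbours, the rule is trivially scalable and parallelizable, which is the property the decoder above is designed to retain for the quantum case.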
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Energy Technology Data Exchange (ETDEWEB)
Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States)]; Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University, 400 Jadwin Hall, Princeton NJ 08540 (United States)]; Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena CA 91125 (United States)]
2015-06-23
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum key distribution is moving from its theoretical foundation of unconditional security towards real-world installations. A significant part of this move is the orders-of-magnitude increase in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck that unnecessarily limits the final secure key rate of the system. Here we report details of equally high-rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
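The efficiency figure quoted above can be made concrete. One common convention (an assumption here, not necessarily the paper's exact definition) compares the Shannon-limit leakage n·h(QBER) to the number of bits actually disclosed during error correction; the numbers below are illustrative, not data from the paper.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n = 1_000_000         # sifted key bits (illustrative)
qber = 0.02           # quantum bit error rate (illustrative)
leaked_bits = 150_000  # bits disclosed by the LDPC reconciliation (illustrative)

efficiency = n * binary_entropy(qber) / leaked_bits
print(f"{efficiency:.1%}")  # ~94%, in the range reported above
```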
Zhang, Zhanjun
2004-01-01
Comment: The incorrect mutual information, quantum bit error rate, and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL 90 (2003) 157901] on the ping-pong protocol are pointed out and corrected.
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum-likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by grants NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
Precursors, gauge invariance, and quantum error correction in AdS/CFT
Energy Technology Data Exchange (ETDEWEB)
Freivogel, Ben; Jefferson, Robert A.; Kabir, Laurens [ITFA and GRAPPA, Universiteit van Amsterdam,Science Park 904, Amsterdam (Netherlands)
2016-04-19
A puzzling aspect of the AdS/CFT correspondence is that a single bulk operator can be mapped to multiple different boundary operators, or precursors. By improving upon a recent model of Mintun, Polchinski, and Rosenhaus, we demonstrate explicitly how this ambiguity arises in a simple model of the field theory. In particular, we show how gauge invariance in the boundary theory manifests as a freedom in the smearing function used in the bulk-boundary mapping, and explicitly show how this freedom can be used to localize the precursor in different spatial regions. We also show how the ambiguity can be understood in terms of quantum error correction, by appealing to the entanglement present in the CFT. The concordance of these two approaches suggests that gauge invariance and entanglement in the boundary field theory are intimately connected to the reconstruction of local operators in the dual spacetime.
Correcting errors in a quantum gate with pushed ions via optimal control
International Nuclear Information System (INIS)
Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso
2010-01-01
We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.
International Nuclear Information System (INIS)
Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C
2017-01-01
The maximum operational range of continuous-variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: first, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Second, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key rate over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
Quantum error correction of continuous-variable states against Gaussian noise
Energy Technology Data Exchange (ETDEWEB)
Ralph, T. C. [Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072 (Australia)
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
International Nuclear Information System (INIS)
Kim, Isaac H.
2011-01-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interactions of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2008-07-01
We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.
International Nuclear Information System (INIS)
Paz, Juan Pablo; Roncaglia, Augusto Jose; Saraceno, Marcos
2005-01-01
We analyze and further develop a method to represent the quantum state of a system of n qubits in a phase-space grid of N×N points (where N = 2^n). The method, which was recently proposed by Wootters and co-workers (Gibbons et al., Phys. Rev. A 70, 062101 (2004)), is based on the use of the elements of the finite field GF(2^n) to label the phase-space axes. We present a self-contained overview of the method, we give insights into some of its features, and we apply it to investigate problems which are of interest for quantum-information theory: We analyze the phase-space representation of stabilizer states and quantum error-correction codes and present a phase-space solution to the so-called mean king problem.
Indian Academy of Sciences (India)
Science and Automation at … the Reed-Solomon code contained 223 bytes of data (a byte … then you have a data storage system with error correction, that … practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
Tripartite entanglement in qudit stabilizer states and application in quantum error correction
Energy Technology Data Exchange (ETDEWEB)
Looi, Shiang Yong; Griffiths, Robert B. [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
Consider a stabilizer state on n qudits, each of dimension D, where D is a prime or a square-free integer, divided into three mutually disjoint sets or parts. Generalizing a result of Bravyi et al. [J. Math. Phys. 47, 062106 (2006)] for qubits (D=2), we show that up to local unitaries, the three parts of the state can be written as a tensor product of unentangled single-qudit states, maximally entangled Einstein-Podolsky-Rosen (EPR) pairs, and tripartite Greenberger-Horne-Zeilinger (GHZ) states. We employ this result to obtain a complete characterization of the properties of a class of channels associated with stabilizer error-correcting codes, along with their complementary channels.
Correction of refractive errors
Directory of Open Access Journals (Sweden)
Vladimir Pfeifer
2005-10-01
Full Text Available Background: Spectacles and contact lenses are the most frequently used, safest and cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who wish to be less dependent on spectacles or contact lenses. Until recently, radial keratotomy (RK) was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has opened new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Quantum quasi-cyclic low-density parity-check error-correcting codes
International Nuclear Information System (INIS)
Yuan, Li; Gui-Hua, Zeng; Lee, Moon Ho
2009-01-01
In this paper, we propose an approach employing circulant permutation matrices to construct quantum quasi-cyclic (QC) low-density parity-check (LDPC) codes. Using the proposed approach, one may construct new quantum codes with various lengths and rates whose Tanner graphs contain no cycles of length 4. In addition, the constructed codes have the advantages of simple implementation and low-complexity encoding. Finally, the decoding approach for the proposed quantum QC LDPC codes is investigated. (general)
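The circulant construction above can be made concrete. A minimal sketch, assuming a block parity-check matrix whose entries are cyclic shifts of the identity; the shift values and the 4-cycle test below are illustrative, not taken from the paper:

```python
import numpy as np

def circulant_permutation(L, shift):
    """L x L identity matrix cyclically shifted right by `shift` columns."""
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

def qc_ldpc_parity(shifts, L):
    """Assemble a block parity-check matrix H: each entry of `shifts`
    selects a circulant permutation block (None -> all-zero block)."""
    rows = []
    for row in shifts:
        blocks = [np.zeros((L, L), dtype=int) if s is None
                  else circulant_permutation(L, s) for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

def has_4_cycle(H):
    """A Tanner-graph 4-cycle exists iff two rows of H share at least
    two common nonzero column positions."""
    overlap = H @ H.T
    off_diag = overlap - np.diag(np.diag(overlap))
    return bool((off_diag >= 2).any())

# Illustrative shift pattern with no length-4 cycles (L = 7):
H = qc_ldpc_parity([[1, 2, 4], [2, 4, 1]], 7)
```

A bad shift pattern such as `[[0, 0], [1, 1]]` makes `has_4_cycle` return `True`, since two rows then share two column positions.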
Gaussian Error Correction of Quantum States in a Correlated Noisy Channel
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Berni, Adriano; Madsen, Lars Skovgaard
2013-01-01
Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels exhibit pronounced non-Markovian behavior, the proposed protocol will have immediate implications in improving the performance of various quantum information protocols.
Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding
DEFF Research Database (Denmark)
Hansen, Johan Peder
We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A. R. Calderbank, P. W. Shor and A. M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
International Nuclear Information System (INIS)
Heid, Matthias; Luetkenhaus, Norbert
2006-01-01
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered
Error Correcting Codes
Indian Academy of Sciences (India)
information and coding theory. A large-scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training …
Indian Academy of Sciences (India)
successful consumer products of all time, the Compact Disc (CD) digital audio … We can make … only 2t additional parity check symbols are required to be able to correct t … display information (containing music-related data and a table …
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
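The attenuation that the authors correct for is easy to demonstrate numerically. A hedged sketch (simulated Gaussian biomarkers, not the paper's method or data): adding classical measurement error pulls the empirical AUC toward 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
cases = rng.normal(1.0, 1.0, n)      # biomarker in subjects who develop disease
controls = rng.normal(0.0, 1.0, n)   # biomarker in disease-free subjects

def auc(pos, neg):
    """Empirical AUC via the Mann-Whitney U statistic (ties negligible
    for continuous data)."""
    ranks = np.concatenate([pos, neg]).argsort().argsort() + 1
    u = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))

auc_clean = auc(cases, controls)                   # near Phi(1/sqrt(2)) ~ 0.76
noise_sd = 1.0                                     # measurement error SD
auc_noisy = auc(cases + rng.normal(0, noise_sd, n),
                controls + rng.normal(0, noise_sd, n))
```

With these parameters the noisy AUC drops to roughly Phi(1/2) ~ 0.69, illustrating the bias that a measurement-error correction must undo.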
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an…
Error correcting coding for OTN
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.
Initialization Errors in Quantum Data Base Recall
Natu, Kalyani
2016-01-01
This paper analyzes the relationship between initialization error and recall of a specific memory in the Grover algorithm for quantum database search. It is shown that the correct memory is obtained with high probability even when the initial state is far removed from the correct one. The analysis is done by relating the variance of error in the initial state to the recovery of the correct memory and the surprising result is obtained that the relationship between the two is essentially linear.
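The robustness claim can be reproduced in a small simulation. A sketch, assuming a 3-qubit search space and a randomly perturbed initial state; the perturbation model is illustrative, not the paper's:

```python
import numpy as np

def grover_success(target, init_state, iters):
    """Probability of measuring `target` after `iters` Grover iterations,
    starting from an arbitrary (not necessarily uniform) initial state."""
    state = init_state / np.linalg.norm(init_state)
    for _ in range(iters):
        state[target] *= -1.0                  # oracle marks the target
        state = 2.0 * state.mean() - state     # diffusion: inversion about mean
    return float(abs(state[target]) ** 2)

N = 8                                          # 3-qubit database
ideal = np.ones(N) / np.sqrt(N)
p_ideal = grover_success(5, ideal, 2)          # 2 ~ round(pi/4 * sqrt(N)) iterations

rng = np.random.default_rng(1)
noisy = ideal + 0.1 * rng.normal(size=N)       # imperfect initialization
p_noisy = grover_success(5, noisy, 2)
```

Even with the perturbed start, the target is still found with high probability, consistent with the paper's observation.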
International Nuclear Information System (INIS)
Salas, P.J.; Sanz, A.L.
2004-01-01
In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10^{-4} ≤ ε ≤ 10^{-2} for memory errors and 3×10^{-5} ≤ γ/7 ≤ 10^{-2} for gate errors. After the correction we calculate the fidelity as a quality criterion for the qubit recovered. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault-tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.
Erratum: Quantum corrections and black hole spectroscopy
Jiang, Qing-Quan; Han, Yan; Cai, Xu
2012-06-01
In my paper [Qing-Quan Jiang, Yan Han, Xu Cai, Quantum corrections and black hole spectroscopy, JHEP 08 (2010) 049], there was an error in deriving the black hole spectroscopy. In this erratum, we rectify it.
DEFF Research Database (Denmark)
Martinez Peñas, Umberto; Pellikaan, Ruud
2017-01-01
Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise …
Tackling systematic errors in quantum logic gates with composite rotations
International Nuclear Information System (INIS)
Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.
2003-01-01
We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation.
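One well-known family for pulse-length errors is Wimperis's BB1 sequence. A small numerical check (rotation conventions and the equal fractional error on every pulse are assumptions of this sketch):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Qubit rotation by `theta` about the equatorial axis at azimuth `phi`."""
    axis = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def infidelity(u, v):
    return 1.0 - abs(np.trace(u.conj().T @ v)) / 2.0

theta, eps = np.pi, 0.05                 # target angle, fractional length error
target = rot(theta, 0.0)
naive = rot(theta * (1 + eps), 0.0)      # single uncorrected pulse

# BB1: pi, 2*pi, pi pulses at phases phi1, 3*phi1, phi1, then the theta pulse.
phi1 = np.arccos(-theta / (4 * np.pi))
bb1 = np.eye(2, dtype=complex)
for ang, ph in [(np.pi, phi1), (2 * np.pi, 3 * phi1),
                (np.pi, phi1), (theta, 0.0)]:
    bb1 = rot(ang * (1 + eps), ph) @ bb1  # every pulse suffers the same error

naive_err = infidelity(target, naive)
bb1_err = infidelity(target, bb1)
```

The composite sequence suppresses the systematic error by orders of magnitude compared with the naive pulse at the same error strength.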
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
Error correction and degeneracy in surface codes suffering loss
International Nuclear Information System (INIS)
Stace, Thomas M.; Barrett, Sean D.
2010-01-01
Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.
Error forecasting schemes of error correction at receiver
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-08-01
To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
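The packet-combining idea can be sketched with a toy receiver. The CRC framing and brute-force search below are illustrative assumptions, not the actual protocols of the schemes above:

```python
import itertools
import zlib

def crc_frame(payload: bytes) -> bytes:
    """Sender appends a CRC32 so the receiver can validate candidates."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def packet_combine(copy1: bytes, copy2: bytes):
    """Toy packet-combining receiver: bits where two erroneous copies
    disagree are candidate error locations; try flipping subsets of them
    in copy1 until the CRC checks out."""
    bits1 = int.from_bytes(copy1, "big")
    bits2 = int.from_bytes(copy2, "big")
    diff = [i for i in range(len(copy1) * 8)
            if (bits1 >> i) & 1 != (bits2 >> i) & 1]
    for r in range(len(diff) + 1):
        for subset in itertools.combinations(diff, r):
            cand = bits1
            for i in subset:
                cand ^= 1 << i
            frame = cand.to_bytes(len(copy1), "big")
            if zlib.crc32(frame[:-4]).to_bytes(4, "big") == frame[-4:]:
                return frame[:-4]
    return None  # fails when both copies err in the same bit, as the letter notes

frame = crc_frame(b"hello")
copy1 = bytearray(frame); copy1[0] ^= 0x04   # one bit error in copy 1
copy2 = bytearray(frame); copy2[3] ^= 0x10   # a different bit error in copy 2
recovered = packet_combine(bytes(copy1), bytes(copy2))
```

Feeding the same erroneous copy twice yields `None`, reproducing failure mode (i) that PRPC and MPC are designed to fix.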
Towards self-correcting quantum memories
Michnicki, Kamil
This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3D for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: its memory lifetime increases with system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group; under these conditions the energy barrier can only be increased by a multiplicative constant. I also develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real devices.
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
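The GUP referred to above is commonly written in the following form; this is a standard parametrization, and the paper's exact conventions may differ:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}\left[1 + \beta\,(\Delta p)^2\right],
\qquad
\beta = \frac{\beta_0\,\ell_{\mathrm{Pl}}^2}{\hbar^2},
```

which implies a minimum measurable length $\Delta x_{\min} = \hbar\sqrt{\beta} = \sqrt{\beta_0}\,\ell_{\mathrm{Pl}}$; the dimensionless parameter $\beta_0$ is what experiments such as those discussed in the abstract can bound.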
Self-correcting quantum computers
International Nuclear Information System (INIS)
Bombin, H; Chhajlany, R W; Horodecki, M; Martin-Delgado, M A
2013-01-01
Is the notion of a quantum computer (QC) resilient to thermal noise unphysical? We address this question from a constructive perspective and show that local quantum Hamiltonian models provide self-correcting QCs. To this end, we first give a sufficient condition on the connectedness of excitations for a stabilizer code model to be a self-correcting quantum memory. We then study the two main examples of topological stabilizer codes in arbitrary dimensions and establish their self-correcting capabilities. Also, we address the transversality properties of topological color codes, showing that six-dimensional color codes provide a self-correcting model that allows the transversal and local implementation of a universal set of operations in seven spatial dimensions. Finally, we give a procedure for initializing such quantum memories at finite temperature. (paper)
Opportunistic Error Correction for WLAN Applications
Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.
2008-01-01
The current error correction layer of IEEE 802.11a WLAN is designed for worst-case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution-adaptive ADC. The key part of the newly proposed system is that only …
Robot learning and error correction
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.
Mean Field Analysis of Quantum Annealing Correction.
Matsuura, Shunji; Nishimori, Hidetoshi; Albash, Tameem; Lidar, Daniel A
2016-06-03
Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p=2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p≥3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.
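The mean-field models referred to are of the infinite-range p-body type; a standard form (conventions assumed here, without the QAC penalty term coupling the encoded copies) is:

```latex
H \;=\; -N\left(\frac{1}{N}\sum_{i=1}^{N}\sigma_i^{z}\right)^{p}
\;-\;\Gamma\sum_{i=1}^{N}\sigma_i^{x},
```

where $\Gamma$ is the transverse-field strength. As the abstract states, $p=2$ gives a second-order transition while $p\ge 3$ gives a first-order one, which is where the gap-softening effect of QAC matters most.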
Volterra Filtering for ADC Error Correction
Directory of Open Access Journals (Sweden)
J. Saliga
2001-09-01
Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new effective method for compensating such distortion based on the application of Volterra filtering. Considering an a-priori error model of the ADC allows finding an efficient inverse Volterra model for error correction. The efficiency of the proposed method is demonstrated on experimental results.
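The inverse-model idea can be illustrated in the memoryless special case; a full Volterra filter also has memory kernels, and the coefficients below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical memoryless ADC error model: y = x + a2*x^2 + a3*x^3.
a2, a3 = 0.01, 0.005
x = np.linspace(-1.0, 1.0, 1001)       # "true" input samples
y = x + a2 * x**2 + a3 * x**3          # distorted ADC output

# Third-order inverse by series reversion of the error model:
# x ~ y - a2*y^2 + (2*a2^2 - a3)*y^3 + O(y^4)
corrected = y - a2 * y**2 + (2 * a2**2 - a3) * y**3

raw_err = np.abs(y - x).max()          # distortion before correction
post_err = np.abs(corrected - x).max() # residual after the inverse model
```

The residual after correction is higher order in the small coefficients, so it falls well below the raw distortion.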
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
Efficient decoding of random errors for quantum expander codes
Fawzi , Omar; Grospellier , Antoine; Leverrier , Anthony
2017-01-01
We show that quantum expander codes, a constant-rate family of quantum LDPC codes, equipped with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor, can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman's construction of fault-tolerant schemes with constant space overhead.
Scalable error correction in distributed ion trap computers
International Nuclear Information System (INIS)
Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.
2006-01-01
A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.
Method for decoupling error correction from privacy amplification
Energy Technology Data Exchange (ETDEWEB)
Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)
2003-04-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
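The one-time-pad step of the method is elementary to sketch; the pad length and syndrome contents below are illustrative:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR: one-time-pad encryption and decryption are the same op."""
    return bytes(x ^ y for x, y in zip(a, b))

# Alice and Bob pre-share a secret pad at least as long as the syndrome.
pad = secrets.token_bytes(16)

syndrome = b"\x01\x00\x01\x01"      # parity bits Alice would reveal in Cascade
cipher = xor_bytes(syndrome, pad)   # what Eve sees: uniformly random bytes
recovered = xor_bytes(cipher, pad)  # Bob strips the pad with the shared key
```

Because the pad is uniformly random and used once, the ciphertext leaks nothing about the error syndrome, which is what lets the error-correction stage be decoupled from privacy amplification in the proof.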
Controlling qubit drift by recycling error correction syndromes
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated: if it is ignored, error rates will rise to intolerable levels. But compensation requires knowing the parameters' current values, which appears to require halting experimental work to recalibrate (e.g., via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error-correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple, yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise it is less effective, only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that cannot be compensated. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
Statistical mechanics of error-correcting codes
Kabashima, Y.; Saad, D.
1999-01-01
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Decodoku: Quantum error correction as a simple puzzle game
Wootton, James
To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction (QEC). At the hardware level, a QEC code is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error-correction problem of qudit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
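The "grid of numbers" framing can be made concrete with a minimal one-dimensional sketch (hypothetical and much simpler than the surface code: qubits sit on the links of a line, each bit-flip error lights the two checks beside it, and the puzzle is to pair up the lit checks):

```python
# Minimal defect-pairing puzzle, stripped of its quantum context:
# an error chain lights only its two endpoint checks, which is exactly
# why decoding is a pairing problem on a grid.
def syndrome(error_links, n_checks):
    s = [0] * n_checks
    for link in error_links:           # link i touches checks i and i+1
        s[link] ^= 1
        s[link + 1] ^= 1
    return s

def defects(s):
    return [i for i, bit in enumerate(s) if bit]

s = syndrome([2, 3, 7], 10)            # chain {2,3} plus a lone error {7}
print(defects(s))                      # -> [2, 4, 7, 8]
```

Note that the adjacent errors 2 and 3 cancel their shared check, leaving defects only at the chain endpoints 2 and 4; the decoder (or player) never sees the errors themselves, only these endpoints.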
Practical, Reliable Error Bars in Quantum Tomography
Faist, Philippe; Renner, Renato
2015-01-01
Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...
Error-correcting pairs for a public-key cryptosystem
International Nuclear Information System (INIS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-01-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based, multivariate and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient Public-Key encryption schemes with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic-geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these Public-Key Cryptosystems is based not only on the inherent intractability of bounded-distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair. (paper)
ecco: An error correcting comparator theory.
Ghirlanda, Stefano
2018-03-08
Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
Triple-Error-Correcting Codec ASIC
Jones, Robert E.; Segallis, Greg P.; Boyd, Robert
1994-01-01
Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as: personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.
Leading quantum correction to the Newtonian potential
International Nuclear Information System (INIS)
Donoghue, J.F.
1994-01-01
I argue that the leading quantum corrections, in powers of the energy or inverse powers of the distance, may be computed in quantum gravity through knowledge of only the low-energy structure of the theory. As an example, I calculate the leading quantum corrections to the Newtonian gravitational potential
Machine-learning-assisted correction of correlated qubit errors in a topological code
Directory of Open Access Journals (Sweden)
Paul Baireuther
2018-01-01
A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.
Higher order corrections in quantum electrodynamics
International Nuclear Information System (INIS)
Rafael, E.
1977-01-01
Theoretical contributions to high-order corrections in purely leptonic systems, such as electrons and muons, muonium (μ+e-) and positronium (e+e-), are reviewed to establish the validity of quantum electrodynamics (QED). Two types of QED contributions to the anomalous magnetic moments are considered: from diagrams with one type of fermion line, and from those with two types of fermion lines. The contributions up to eighth order are compared to the data available with a different accuracy. Good agreement is found within the experimental errors. The experimental accuracy of the muonium hyperfine structure and of the radiative corrections to the decay of positronium are compared to the one attainable in theoretical calculations. The need for higher precision in both experimental data and theoretical calculations is stated
Group representations, error bases and quantum codes
Energy Technology Data Exchange (ETDEWEB)
Knill, E
1996-01-01
This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
New decoding methods of interleaved burst error-correcting codes
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method, it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes, which are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
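The interleaving that underlies such schemes is easy to demonstrate (a generic sketch with assumed depth and length, not the paper's codes): subcode codewords form the rows of an array, transmission is column-by-column, so a channel burst of length up to the interleaving depth hits each subcode at most once.

```python
# Why interleaving turns one burst into many single errors: map channel
# positions back to (row, column) of the interleaved array and check
# which subcode (row) each burst symbol landed in.
def transmit_order(n_rows, n_cols):
    # column-major transmission order of a row-per-subcode array
    return [(r, c) for c in range(n_cols) for r in range(n_rows)]

depth, length = 4, 6                   # 4 subcodes, 6 symbols each (assumed)
order = transmit_order(depth, length)
burst = range(9, 13)                   # a 4-symbol burst on the channel
hit_rows = [order[i][0] for i in burst]
print(sorted(hit_rows))                # -> [0, 1, 2, 3]: one error per subcode
```

Each subcode then faces a single, independently correctable error, which is what lets the syndrome-correlation step locate the burst.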
Vinci, Walter; Lidar, Daniel A.
2018-02-01
Nested quantum annealing correction (NQAC) is an error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. The encoding replaces each logical qubit by a complete graph of degree C. The nesting level C represents the distance of the error-correcting code and controls the amount of protection against thermal and control errors. Theoretical mean-field analyses and empirical data obtained with a D-Wave Two quantum annealer (supporting up to 512 qubits) showed that NQAC has the potential to achieve a scalable reduction, T_eff ~ C^(-η) with η > 0, of the effective temperature of a quantum annealer. Such an effective-temperature reduction is relevant for machine-learning applications. Since we demonstrate that NQAC achieves error correction via a reduction of the effective temperature of the quantum annealing device, our results address the problem of the "temperature scaling law for quantum annealers," which requires the temperature of quantum annealers to be reduced as problems of larger sizes are attempted to be solved.
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander
2010-01-01
A fundamental requirement for enabling fault-tolerant quantum information processing is an efficient quantum error-correcting code that robustly protects the involved fragile quantum states from their environment. Just as classical error-correcting codes are indispensable in today's information technologies, it is believed that quantum error-correcting codes will play a similarly crucial role in tomorrow's quantum information systems. Here, we report on the experimental demonstration of a quantum erasure-correcting code that overcomes the devastating effect of photon losses. Our quantum code is based on linear optics, and it protects a four-mode entangled mesoscopic state of light against erasures. We investigate two approaches for circumventing in-line losses, and demonstrate that both approaches exhibit transmission fidelities beyond what is possible by classical means. Because in-line attenuation...
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990's. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gates for
Error-correction coding for digital communications
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Quantum corrections to Schwarzschild black hole
Energy Technology Data Exchange (ETDEWEB)
Calmet, Xavier; El-Menoufi, Basem Kamal [University of Sussex, Department of Physics and Astronomy, Brighton (United Kingdom)
2017-04-15
Using effective field theory techniques, we compute quantum corrections to spherically symmetric solutions of Einstein's gravity and focus in particular on the Schwarzschild black hole. Quantum modifications are covariantly encoded in a non-local effective action. We work to quadratic order in curvatures simultaneously taking local and non-local corrections into account. Looking for solutions perturbatively close to that of classical general relativity, we find that an eternal Schwarzschild black hole remains a solution and receives no quantum corrections up to this order in the curvature expansion. In contrast, the field of a massive star receives corrections which are fully determined by the effective field theory. (orig.)
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.
Role of memory errors in quantum repeaters
International Nuclear Information System (INIS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.
2007-01-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used allowing to communicate over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication
Metrics with vanishing quantum corrections
International Nuclear Information System (INIS)
Coley, A A; Hervik, S; Gibbons, G W; Pope, C N
2008-01-01
We investigate solutions of the classical Einstein or supergravity equations that solve any set of quantum corrected Einstein equations in which the Einstein tensor plus a multiple of the metric is equated to a symmetric conserved tensor Tμν(gαβ, ∂τ gαβ, ∂τ ∂σ gαβ, ...) constructed from sums of terms involving contractions of the metric and powers of arbitrary covariant derivatives of the curvature tensor. A classical solution, such as an Einstein metric, is called universal if, when evaluated on that Einstein metric, Tμν is a multiple of the metric. A Ricci flat classical solution is called strongly universal if, when evaluated on that Ricci flat metric, Tμν vanishes. It is well known that pp-waves in four spacetime dimensions are strongly universal. We focus attention on a natural generalization: Einstein metrics with holonomy Sim(n - 2) in which all scalar invariants are zero or constant. In four dimensions we demonstrate that the generalized Ghanam-Thompson metric is weakly universal and that the Goldberg-Kerr metric is strongly universal; indeed, we show that universality extends to all four-dimensional Sim(2) Einstein metrics. We also discuss generalizations to higher dimensions
Nonadiabatic corrections to a quantum dot quantum computer
Indian Academy of Sciences (India)
Pramana – Journal of Physics, Volume 83, Issue 1. Nonadiabatic corrections to a quantum dot quantum computer working in adiabatic limit. M Ávila ... The time of operation of an adiabatic quantum computer must be less than the decoherence time, otherwise the computer would be nonoperative. So far, the ...
Quantum state correction of relic gravitons from quantum gravity
Rosales, Jose-Luis
1996-01-01
The semiclassical approach to quantum gravity would yield the Schroedinger formalism for the wave function of metric perturbations or gravitons plus quantum gravity correcting terms in pure gravity; thus, in the inflationary scenario, we should expect correcting effects to the relic graviton (Zel'dovich) spectrum of the order (H/mPl)^2.
Large-scale simulations of error-prone quantum computation devices
International Nuclear Information System (INIS)
Trieu, Doan Binh
2009-01-01
The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i
Fast, efficient error reconciliation for quantum cryptography
International Nuclear Information System (INIS)
Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.; Nickel, G.H.; Donahue, C.H.; Peterson, C.G.
2003-01-01
We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation
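A single Winnow round can be sketched under stated assumptions (the 7-bit block size and [7,4] Hamming syndrome are illustrative; the full protocol's privacy-maintenance step of discarding revealed bits is omitted): Alice and Bob compare one parity bit per block, and only on a mismatch exchange the block's Hamming syndrome, whose difference points at the erroneous bit.

```python
# Sketch of one Winnow-style round: parity comparison per block, then a
# Hamming-syndrome exchange for mismatched blocks. Because the code is
# linear, the XOR of the two syndromes equals the syndrome of the error
# pattern, so a single error's position is read off directly.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def hamming_syndrome(block):
    return [sum(h * b for h, b in zip(row, block)) % 2 for row in H]

def winnow_round(alice, bob, n=7):
    for i in range(0, len(alice), n):
        a, b = alice[i:i + n], bob[i:i + n]
        if sum(a) % 2 != sum(b) % 2:                     # parity mismatch
            s = [x ^ y for x, y in zip(hamming_syndrome(a),
                                       hamming_syndrome(b))]
            pos = s[0] + 2 * s[1] + 4 * s[2]             # 1-based position
            if pos:
                bob[i + pos - 1] ^= 1                    # correct one bit
    return bob

alice = [1, 0, 1, 1, 0, 0, 1] * 2
bob = alice.copy()
bob[3] ^= 1                                              # single error, block 0
print(winnow_round(alice, bob) == alice)                 # -> True
```

Blocks whose parities agree cost only one bit of public communication each, which is the source of the efficiency advantage the abstract describes.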
Polynomial theory of error correcting codes
Cancellieri, Giovanni
2015-01-01
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
Error-Transparent Quantum Gates for Small Logical Qubit Architectures
Kapit, Eliot
2018-02-01
One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.
Joint Schemes for Physical Layer Security and Error Correction
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Quantum Corrections to the 'Atomistic' MOSFET Simulations
Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.
2000-01-01
We have introduced in a simple and efficient manner quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied in comparison with classical simulations the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of the single charge trapping on interface states and the effect of the oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not affect significantly the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Nonadiabatic corrections to a quantum dot quantum computer ...
Indian Academy of Sciences (India)
2014-07-02
… corrections in it. If the decoherence times of a quantum dot computer are ∼100 ns [J M Kikkawa and D D Awschalom, Phys. Rev. Lett. 80, 4313 (1998)], then the predicted number of one-qubit gate (primitive) operations of the Loss–DiVincenzo quantum computer in such an interval of time must be >10^10.
Fault-tolerant quantum computing in the Pauli or Clifford frame with slow error diagnostics
Directory of Open Access Journals (Sweden)
Christopher Chamberland
2018-01-01
We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, caused either by measurement latencies or by slow decoding algorithms. Our scheme offers several improvements over previously existing solutions: for instance, it does not require active error correction and results in a reduced error-correction overhead when error diagnostics are much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error-correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest, as it can increase error thresholds and could find applications in other areas of quantum computation.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and an according complimentary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
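The predistortion idea in this record can be sketched in a few lines: model the time-dependent phase error, store its complement in a lookup table, and add it to the commanded phase so the error introduced downstream cancels. A minimal sketch only; the quadratic droop model, sample count, and coefficients are illustrative assumptions, not values from the patent.

```python
# Sketch of LUT-based phase predistortion. The droop model is a hypothetical
# quadratic; a real system would characterize the amplifier's actual droop.
N = 1024
droop_phase = [1.0e-6 * n * n for n in range(N)]   # modeled phase error (rad)
lut = [-p for p in droop_phase]                    # complementary distortion LUT

commanded = [0.01 * n for n in range(N)]           # intended phase ramp (rad)
predistorted = [c + lut[n] for n, c in enumerate(commanded)]
# downstream hardware re-introduces the droop, which cancels the predistortion:
actual = [p + droop_phase[n] for n, p in enumerate(predistorted)]
```

The residual phase error of `actual` relative to `commanded` is at the level of floating-point rounding, illustrating why the correction is applied before, rather than after, the error-inducing stage.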
Leading quantum gravitational corrections to scalar QED
International Nuclear Information System (INIS)
Bjerrum-Bohr, N.E.J.
2002-01-01
We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged scalars in the combined theory of general relativity and scalar QED. The combined theory is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativistic limit. The non-analytical parts of the scattering amplitude, which are known to give the long range, low energy, leading quantum corrections, are used to construct the leading post-Newtonian and quantum corrections to the two-particle non-relativistic scattering matrix potential for two charged scalars. The result is discussed in relation to experimental verifications
Energy efficiency of error correction on wireless systems
Havinga, Paul J.M.
1999-01-01
Since high error rates are inevitable to the wireless environment, energy-efficient error-control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.
Inflationary power spectra with quantum holonomy corrections
Energy Technology Data Exchange (ETDEWEB)
Mielczarek, Jakub, E-mail: jakub.mielczarek@uj.edu.pl [Institute of Physics, Jagiellonian University, Reymonta 4, Cracow, 30-059 Poland (Poland)
2014-03-01
In this paper we study slow-roll inflation with holonomy corrections from loop quantum cosmology. It was previously shown that, in the Planck epoch, these corrections lead to such effects as singularity avoidance, metric signature change and a state of silence. Here, we consider holonomy corrections affecting the phase of cosmic inflation, which takes place away from the Planck epoch. Both tensor and scalar power spectra of primordial inflationary perturbations are computed up to first order in the slow-roll parameters and in V/ρ_c, where V is the potential of the scalar field and ρ_c is a critical energy density (expected to be of the order of the Planck energy density). Possible normalizations of modes at short scales are discussed. If the normalization is performed using the Wronskian condition applied to the adiabatic vacuum, the tensor and scalar spectral indices receive no quantum corrections at leading order. However, by choosing an alternative method of normalization one can obtain quantum corrections in the leading order. Furthermore, we show that the holonomy-corrected equations of motion for tensor and scalar modes can be derived from effective background metrics. This allows us to show that the classical Wronskian normalization condition is well defined for the cosmological perturbations with holonomy corrections.
Underlying Information Technology Tailored Quantum Error Correction
2006-07-28
• …typically constructed by using an optical beam splitter.
• We used a decoherence-free-subspace encoding to reduce the sensitivity of an optical Deutsch… process tomography on one- and two-photon polarisation states, from full and partial data.
• Accomplished complete two-photon QPT.
• Discovered surprising… protocol giving a quadratic speedup over all previously known such protocols.
• Developed the first completely positive non-Markovian master equation.
Quantum corrections to inflaton and curvaton dynamics
Energy Technology Data Exchange (ETDEWEB)
Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@nbi.dk [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2012-11-01
We compute the fully renormalized one-loop effective action for two interacting and self-interacting scalar fields in FRW space-time. We then derive and solve the quantum corrected equations of motion both for fields that dominate the energy density (such as an inflaton) and fields that do not (such as a subdominant curvaton). In particular, we introduce quantum corrected Friedmann equations that determine the evolution of the scale factor. We find that in general, gravitational corrections are negligible for the field dynamics. For the curvaton-type fields this leaves only the effect of the flat-space Coleman-Weinberg-type effective potential, and we find that these can be significant. For the inflaton case, both the corrections to the potential and the Friedmann equations can lead to behaviour very different from the classical evolution. Even to the point that inflation, although present at tree level, can be absent at one-loop order.
Error correcting circuit design with carbon nanotube field effect transistors
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field-effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error-correction capability is analyzed. The performance of circuits implemented with CNTFETs and with traditional MOSFETs is also compared; the former shows a 34.4% decrease in layout area and a 56.9% decrease in power consumption.
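The (7, 4) Hamming logic behind such a circuit can be illustrated in software. The sketch below shows standard syndrome decoding for a single bit flip; it is not the paper's CNTFET implementation or its multi-bit grouping method.

```python
# Illustrative (7,4) Hamming single-error correction.
# Codeword layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4.
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

The syndrome bits are exactly the parity checks a hardware implementation computes with XOR gates; the circuit in the paper realizes this logic in CNTFETs.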
VLSI architectures for modern error-correcting codes
Zhang, Xinmiao
2015-01-01
Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI
Quantum-electrodynamics corrections in pionic hydrogen
Schlesser, S.; Le Bigot, E. -O.; Indelicato, P.; Pachucki, K.
2011-01-01
We investigate all pure quantum-electrodynamics corrections to the np → 1s, n = 2–4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α^5. These values are needed to extract an accurate strong interaction
Continuous-variable quantum erasure correcting code
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Sabuncu, Metin; Huck, Alexander
2010-01-01
We experimentally demonstrate a continuous variable quantum erasure-correcting code, which protects coherent states of light against complete erasure. The scheme encodes two coherent states into a bi-party entangled state, and the resulting 4-mode code is conveyed through 4 independent channels...
Strong Coupling Corrections in Quantum Thermodynamics
Perarnau-Llobet, M.; Wilming, H.; Riera, A.; Gallego, R.; Eisert, J.
2018-03-01
Quantum systems strongly coupled to many-body systems equilibrate to the reduced state of a global thermal state, deviating from the local thermal state of the system as it occurs in the weak-coupling limit. Taking this insight as a starting point, we study the thermodynamics of systems strongly coupled to thermal baths. First, we provide strong-coupling corrections to the second law applicable to general systems in three of its different readings: As a statement of maximal extractable work, on heat dissipation, and bound to the Carnot efficiency. These corrections become relevant for small quantum systems and vanish in first order in the interaction strength. We then move to the question of power of heat engines, obtaining a bound on the power enhancement due to strong coupling. Our results are exemplified on the paradigmatic non-Markovian quantum Brownian motion.
Energy Technology Data Exchange (ETDEWEB)
Fitzpatrick, A. Liam [Department of Physics, Boston University,590 Commonwealth Avenue, Boston, MA 02215 (United States); Kaplan, Jared [Department of Physics and Astronomy, Johns Hopkins University,3400 N. Charles St, Baltimore, MD 21218 (United States)
2016-05-12
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT_2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L=((2π)/β)(1+(12/c)). However, out of time order correlators receive other equally important 1/c suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT_2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Errors and Correction of Precipitation Measurements in China
Institute of Scientific and Technical Information of China (English)
REN Zhihua; LI Mingqin
2007-01-01
In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference of the observations from the operational gauge and the pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
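The power-function relation described in this record can be fitted by least squares in log-log space, where a power law y = a·x^b becomes a straight line. The sketch below uses synthetic numbers; the study's actual coefficients are not reproduced here.

```python
# Fit diff = a * (horizontal catch)^b via log-log linear regression.
# All data values are synthetic, for illustration only.
import math

horiz = [0.5, 1.0, 2.0, 4.0, 8.0]        # horizontal-gauge catch (mm), synthetic
diff = [0.21, 0.40, 0.79, 1.58, 3.20]    # |operational - pit| (mm), synthetic

lx = [math.log(x) for x in horiz]
ly = [math.log(y) for y in diff]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
    sum((x - mx) ** 2 for x in lx)       # exponent of the power law
a = math.exp(my - b * mx)                # prefactor

# estimated wind-induced error for a horizontal catch of 3.0 mm:
estimate = a * 3.0 ** b
```

With such a fit in hand, an operational station need only run a horizontal gauge in parallel to estimate the wind-induced correction, as the abstract describes.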
Correcting a Persistent Manhattan Project Statistical Error
Reed, Cameron
2011-04-01
In his 1987 autobiography, Major-General Kenneth Nichols, who served as the Manhattan Project's "District Engineer" under General Leslie Groves, related that when the Clinton Engineer Works at Oak Ridge, TN, was completed it was consuming nearly one-seventh (~ 14%) of the electric power being generated in the United States. This statement has been reiterated in several editions of a Department of Energy publication on the Manhattan Project. This remarkable claim has been checked against power generation and consumption figures available in Manhattan Engineer District documents, Tennessee Valley Authority records, and historical editions of the Statistical Abstract of the United States. The correct figure is closer to 0.9% of national generation. A speculation will be made as to the origin of Nichols' erroneous one-seventh figure.
Analysis of error-correction constraints in an optical disk
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
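One reason burst position matters is how bursts interact with interleaving: consecutive channel symbols belong to different codewords, so a burst is dispersed into isolated errors that per-codeword correction can handle. The toy block interleaver below illustrates this mechanism only; it is not the CD-ROM's actual CIRC/RSPC pipeline.

```python
# Toy block interleaver: a burst of 4 consecutive channel symbols is spread
# into exactly one corrupted symbol per codeword.
DEPTH, WIDTH = 4, 8                  # 4 codewords of 8 symbols each

def interleave(symbols):
    # write row by row (codewords), read column by column (channel order)
    rows = [symbols[i * WIDTH:(i + 1) * WIDTH] for i in range(DEPTH)]
    return [rows[r][c] for c in range(WIDTH) for r in range(DEPTH)]

def deinterleave(symbols):
    cols = [symbols[c * DEPTH:(c + 1) * DEPTH] for c in range(WIDTH)]
    return [cols[c][r] for r in range(DEPTH) for c in range(WIDTH)]

data = list(range(DEPTH * WIDTH))
channel = interleave(data)
for i in range(8, 12):               # burst hitting 4 consecutive channel symbols
    channel[i] = -1                  # -1 marks a corrupted symbol
received = deinterleave(channel)
# each of the 4 codewords now holds exactly one corrupted symbol,
# within the reach of a single-error-correcting code per codeword
```

A burst longer than the interleaver depth, or one straddling a synchronization boundary, defeats this dispersal, which is consistent with the position-dependent performance reported above.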
Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.
Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian
2016-04-01
While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.
Environment-assisted error correction of single-qubit phase damping
International Nuclear Information System (INIS)
Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.
2011-01-01
Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.
Self-correcting quantum memory in a thermal environment
International Nuclear Information System (INIS)
Chesi, Stefano; Roethlisberger, Beat; Loss, Daniel
2010-01-01
The ability to store information is of fundamental importance to any computer, be it classical or quantum. To identify systems for quantum memories, which rely, analogously to classical memories, on passive error protection (''self-correction''), is of greatest interest in quantum information science. While systems with topological ground states have been considered to be promising candidates, a large class of them was recently proven unstable against thermal fluctuations. Here, we propose two-dimensional (2D) spin models unaffected by this result. Specifically, we introduce repulsive long-range interactions in the toric code and establish a memory lifetime polynomially increasing with the system size. This remarkable stability is shown to originate directly from the repulsive long-range nature of the interactions. We study the time dynamics of the quantum memory in terms of diffusing anyons and support our analytical results with extensive numerical simulations. Our findings demonstrate that self-correcting quantum memories can exist in 2D at finite temperatures.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors.
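The baseline that this record improves upon, counting k-mers and flagging low-frequency ones as likely errors, can be sketched directly. This shows only the simple threshold approach, not the paper's repeat-aware statistical model.

```python
# Baseline k-mer validation: k-mers seen fewer than `threshold` times are
# suspected sequencing errors. Reads and threshold here are toy values.
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(reads, k, threshold):
    counts = kmer_counts(reads, k)
    return {km for km, n in counts.items() if n < threshold}

reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]  # last base of read 2 is a "miscall"
bad = suspect_kmers(reads, 4, threshold=2)    # {"ACGA"} is flagged
```

The failure mode the paper targets is visible from this sketch: in a repeat-rich genome, an erroneous k-mer one substitution away from a high-copy repeat can itself exceed the threshold, so raw counts must be corrected for such misread relationships.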
Directory of Open Access Journals (Sweden)
Chitra Jayathilake
2013-01-01
Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes by proposing a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.
Quantum corrections to holographic mutual information
International Nuclear Information System (INIS)
Agón, Cesar A.; Faulkner, Thomas
2016-01-01
We compute the leading contribution to the mutual information (MI) of two disjoint spheres in the large distance regime for arbitrary conformal field theories (CFT) in any dimension. This is achieved by refining the operator product expansion method introduced by Cardy http://dx.doi.org/10.1088/1751-8113/46/28/285402. For CFTs with holographic duals the leading contribution to the MI at long distances comes from bulk quantum corrections to the Ryu-Takayanagi area formula. According to the FLM proposal http://dx.doi.org/10.1007/JHEP11(2013)074 this equals the bulk MI between the two disjoint regions spanned by the boundary spheres and their corresponding minimal area surfaces. We compute this quantum correction and provide in this way a non-trivial check of the FLM proposal.
Electromagnetic fields with vanishing quantum corrections
Czech Academy of Sciences Publication Activity Database
Ortaggio, Marcello; Pravda, Vojtěch
2018-01-01
Roč. 779, 10 April (2018), s. 393-395 ISSN 0370-2693 R&D Projects: GA ČR GA13-10042S Institutional support: RVO:67985840 Keywords: nonlinear electrodynamics * quantum corrections Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 4.807, year: 2016 https://www.sciencedirect.com/science/article/pii/S0370269318300327?via%3Dihub
Quantum corrections to the gravitational backreaction
Energy Technology Data Exchange (ETDEWEB)
Kuntz, Ibere [University of Sussex, Physics and Astronomy, Brighton (United Kingdom)
2018-01-15
Effective field theory techniques are used to study the leading order quantum corrections to the gravitational wave backreaction. The effective stress-energy tensor is calculated and it is shown that it has a non-vanishing trace that contributes to the cosmological constant. By comparing the result obtained with LIGO's data, the first bound on the amplitude of the massive mode is found: ε < 1.4 × 10^-33. (orig.)
On the Design of Error-Correcting Ciphers
Directory of Open Access Journals (Sweden)
Mathur Chetan Nanjunda
2006-01-01
Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of the HD cipher to linear and differential cryptanalysis, (7) experimental comparison
An investigation of error correcting techniques for OMV and AXAF
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA (255, 223) Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distributions between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
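An error-pattern generator of the kind described, exponentially distributed gaps between bursts (a Poisson arrival process) with Gaussian burst lengths, can be sketched as follows. The rate and length parameters are illustrative assumptions, not the study's values.

```python
# Generate bit-error positions: burst starts arrive as a Poisson process
# (exponential gaps), burst lengths are drawn from a Gaussian.
import random

def error_positions(n_bits, mean_gap=200.0, burst_mean=4.0, burst_sd=1.5, seed=1):
    rng = random.Random(seed)
    errors, pos = set(), 0
    while True:
        pos += int(rng.expovariate(1.0 / mean_gap)) + 1  # gap to next burst start
        if pos >= n_bits:
            break
        length = max(1, round(rng.gauss(burst_mean, burst_sd)))
        errors.update(range(pos, min(pos + length, n_bits)))
        pos += length
    return errors

def inject(bits, errors):
    """Flip the bits at the given error positions."""
    return [b ^ 1 if i in errors else b for i, b in enumerate(bits)]
```

Feeding such patterns through an encoder/decoder pair and counting residual uncorrectable errors is the essence of the test methodology the report describes.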
Quantum gravitational corrections for spinning particles
International Nuclear Information System (INIS)
Fröb, Markus B.
2016-01-01
We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the same corrections due to massless minimally coupled scalars http://dx.doi.org/10.1088/0264-9381/27/24/245008, massless non-conformal scalars http://dx.doi.org/10.1103/PhysRevD.87.104027 and massive scalars, fermions and vector bosons http://dx.doi.org/10.1103/PhysRevD.91.064047 have been recently derived, spinning particles receive additional corrections which are the subject of the present work. We give both fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed in comparison to the corrections from massless fields, as one would expect. However, a surprising result of our analysis is that close to the particle itself, on distances comparable to the Compton wavelength of the massive fields running in the loops, these corrections can be enhanced with respect to the massless case.
Software for Correcting the Dynamic Error of Force Transducers
Directory of Open Access Journals (Sweden)
Naoki Miyashita
2014-07-01
Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
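As a rough illustration of correcting a transducer's output using the waveform itself, the sketch below applies the textbook second-derivative correction for an idealized undamped second-order sensor, F_true ≈ F_meas + F_meas''/ω0². The sensor model, function names, and parameters are our own assumptions, not the paper's actual algorithm.

```python
import math

def correct_dynamic_error(y, dt, f0):
    """Correct an undamped second-order sensor's output:
    F_true ~ F_meas + F_meas'' / omega0**2, with the second derivative
    taken by central differences. Endpoints are left uncorrected."""
    w0 = 2 * math.pi * f0
    out = list(y)
    for i in range(1, len(y) - 1):
        d2 = (y[i - 1] - 2 * y[i] + y[i + 1]) / dt ** 2
        out[i] = y[i] + d2 / w0 ** 2
    return out
```

For a sinusoidal input below the sensor's resonance, the steady-state gain is 1/(1 - (f/f0)²), and the correction recovers the original amplitude.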
Correcting for particle counting bias error in turbulent flow
Edwards, R. V.; Baratuci, W.
1985-01-01
Even an ideal seeding device, generating particles that exactly follow the flow, would still leave a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation was constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
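A minimal version of such a simulator fits in a few lines: samples are accepted with probability proportional to |u| (the counting bias), and one of the proposed corrections, inverse-velocity weighting in the style of McLaughlin and Tiederman, is then applied. The Gaussian velocity statistics and parameter values are illustrative assumptions, not the paper's.

```python
import random

def simulate_counting_bias(n=200_000, u_mean=10.0, u_rms=3.0, seed=1):
    """Monte Carlo model of an individual-realization laser anemometer:
    fast particles cross the probe volume more often, so the chance of
    recording a sample is proportional to |u| (velocity bias)."""
    rng = random.Random(seed)
    samples = []
    u_max = u_mean + 6 * u_rms
    while len(samples) < n:
        u = rng.gauss(u_mean, u_rms)
        # accept with probability |u| / u_max -> arrival-rate weighting
        if rng.random() < abs(u) / u_max:
            samples.append(u)
    return samples

def means(samples):
    """Plain (biased) mean vs. inverse-velocity-weighted (corrected) mean."""
    biased = sum(samples) / len(samples)
    w = [1.0 / abs(u) for u in samples]
    corrected = sum(u * wi for u, wi in zip(samples, w)) / sum(w)
    return biased, corrected
```

With these parameters the biased mean overshoots the true mean of 10 by roughly E[u²]/E[u] - E[u] ≈ 0.9, while the weighted mean recovers it.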
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm
Directory of Open Access Journals (Sweden)
J. I. Colless
2018-02-01
Full Text Available Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H_{2} molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm
Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.
2018-02-01
Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Energy Technology Data Exchange (ETDEWEB)
Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Leading quantum gravitational corrections to QED
Butt, M. S.
2006-01-01
We consider the leading post-Newtonian and quantum corrections to the non-relativistic scattering amplitude of charged spin-1/2 fermions in the combined theory of general relativity and QED. The coupled Dirac-Einstein system is treated as an effective field theory. This allows for a consistent quantization of the gravitational field. The appropriate vertex rules are extracted from the action, and the non-analytic contributions to the 1-loop scattering matrix are calculated in the non-relativi...
Energy Efficient Error-Correcting Coding for Wireless Systems
Shao, X.
2010-01-01
The wireless channel is a hostile environment. The transmitted signal suffers not only multi-path fading but also noise and interference from other users of the wireless channel. This causes unreliable communications. To achieve high-quality communications, error-correcting coding is required.
Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2013-01-01
In this paper we propose a novel scheme for on-the-fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...
Direct cointegration testing in error-correction models
F.R. Kleibergen (Frank); H.K. van Dijk (Herman)
1994-01-01
An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The
Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction
Directory of Open Access Journals (Sweden)
Chi-Huang Shih
2012-01-01
Full Text Available With the wide deployment of Internet Protocol (IP) infrastructure and rapid development of digital technologies, Internet Protocol Television (IPTV) has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC) to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure the secure and reliable transmission of IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC, including the FEC-encoded redundancies and the limitation of error correction capacity, to protect the multimedia packets against malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme achieves performance similar to that of the joint encryption and FEC scheme.
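As a toy illustration of FEC-encoded redundancy (not the paper's secure-FEC construction), a single XOR parity packet lets a receiver rebuild one lost packet per block; the helper names are ours.

```python
def xor_parity(packets):
    """Build one FEC repair packet: bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Recover the single missing packet (the None entry) by XORing the
    surviving packets with the parity packet."""
    present = [p for p in received if p is not None]
    return xor_parity(present + [parity])
```

Because XOR is its own inverse, parity ^ (all surviving packets) equals the missing packet; losing two packets in one block exceeds this code's correction capacity, which is exactly the kind of limitation the proposed secure FEC exploits.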
Quantum-corrected transient analysis of plasmonic nanostructures
Uysal, Ismail Enes; Ulku, Huseyin Arda; Sajjad, Muhammad; Singh, Nirpendra; Schwingenschlögl, Udo; Bagci, Hakan
2017-01-01
A time domain surface integral equation (TD-SIE) solver is developed for quantum-corrected analysis of transient electromagnetic field interactions on plasmonic nanostructures with sub-nanometer gaps. “Quantum correction” introduces an auxiliary
A precise error bound for quantum phase estimation.
Directory of Open Access Journals (Sweden)
James M Chappell
Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the number of extra qubits added to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.
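For context, the approximate bound the authors improve on is the textbook rule for sizing the phase-estimation register: to obtain the best n-bit phase estimate with failure probability at most eps, add p = ceil(log2(2 + 1/(2·eps))) qubits. A quick sketch (the helper names are our own):

```python
import math

def extra_qubits(eps):
    """Extra register qubits p so that phase estimation returns the best
    n-bit estimate with probability at least 1 - eps (textbook bound)."""
    return math.ceil(math.log2(2.0 + 1.0 / (2.0 * eps)))

def register_size(n, eps):
    """Total phase-register size t = n + p for n desired bits."""
    return n + extra_qubits(eps)
```

Because this bound is approximate and conservative, it can overestimate the qubit count, which is precisely the overestimation the paper's exact formula avoids.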
Neural network error correction for solving coupled ordinary differential equations
Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.
1992-01-01
A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
A Quantum Theoretical Explanation for Probability Judgment Errors
Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.
2011-01-01
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…
Atmospheric Error Correction of the Laser Beam Ranging
Directory of Open Access Journals (Sweden)
J. Saydi
2014-01-01
Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for the 0.532, 1.3, and 10.6 micron wavelengths for the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly mean meteorological data received from stations in those cities. The atmospheric correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90° for each propagation distance. The results of the study showed that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength. The laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models for the 0.532 micron wavelength were compared.
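The observation that ranging error shrinks as the emission (elevation) angle grows already follows from the simplest flat-atmosphere mapping, in which a zenith path delay is stretched by 1/sin(E). This is only a crude stand-in for the Marini-Murray and Mendes-Pavlis mapping functions compared in the paper, and the 2.4 m zenith delay used below is an illustrative assumption.

```python
import math

def slant_correction(zenith_delay_m, elevation_deg):
    """Flat-atmosphere mapping: slant-path delay = zenith delay / sin(E).
    Adequate only at high elevation angles; real models account for
    Earth curvature and the vertical refractivity profile."""
    return zenith_delay_m / math.sin(math.radians(elevation_deg))
```

At 90° elevation the slant delay equals the zenith delay; at 30° it roughly doubles, mirroring the paper's trend of larger corrections at lower rising angles.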
Testing and inference in nonlinear cointegrating vector error correction models
DEFF Research Database (Denmark)
Kristensen, D.; Rahbek, A.
2013-01-01
We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under...... the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...... asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests motivating the introduction of bootstrap...
Error-finding and error-correcting methods for the start-up of the SLC
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.
1987-02-01
During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.
Forecasting the price of gold: An error correction approach
Directory of Open Access Journals (Sweden)
Kausik Gangopadhyay
2016-03-01
Full Text Available Gold prices in the Indian market may be influenced by a multitude of factors such as the value of gold in investment decisions, as an inflation hedge, and in consumption motives. We develop a model to explain and forecast gold prices in India, using a vector error correction model. We identify investment decision and inflation hedge as prime movers of the data. We also present out-of-sample forecasts of our model and the related properties.
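A single-equation sketch of the error-correction mechanism behind such a model: y adjusts toward a long-run relation y = beta*x, and the adjustment speed alpha can be recovered by regressing the differenced series on the lagged equilibrium error. All parameter values are illustrative assumptions; a real application would estimate beta jointly and use the full vector system.

```python
import random

def simulate_ecm(T=5000, alpha=-0.3, beta=1.5, seed=2):
    """Bivariate error-correction data: x is a random walk, and y adjusts
    toward the long-run relation y = beta * x at speed alpha."""
    rng = random.Random(seed)
    x, y = [0.0], [0.0]
    for _ in range(T):
        x.append(x[-1] + rng.gauss(0, 1))
        ec = y[-1] - beta * x[-2]          # lagged equilibrium error
        y.append(y[-1] + alpha * ec + rng.gauss(0, 0.5))
    return x, y

def estimate_alpha(x, y, beta=1.5):
    """OLS slope of dy_t on the lagged error-correction term (beta known)."""
    z = [y[t - 1] - beta * x[t - 1] for t in range(1, len(y))]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    zm = sum(z) / len(z)
    dm = sum(dy) / len(dy)
    num = sum((zi - zm) * (di - dm) for zi, di in zip(z, dy))
    den = sum((zi - zm) ** 2 for zi in z)
    return num / den
```

A negative estimated alpha indicates that deviations from the long-run relation are corrected over time, which is the property such a model exploits for forecasting.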
Equation-Method for correcting clipping errors in OFDM signals.
Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry
2016-01-01
Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
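The clipping problem itself is easy to reproduce: an OFDM symbol is the inverse DFT of the subcarrier constellation, clipping caps its peaks, and the peak-to-average power ratio (PAPR) drops at the cost of distortion. A self-contained sketch (this demonstrates the problem, not the Equation-Method itself; all names and values are ours):

```python
import cmath
import random

def random_qpsk(n, seed=5):
    """Random QPSK subcarrier symbols (illustrative data, fixed seed)."""
    rng = random.Random(seed)
    return [complex(rng.choice([-1, 1]), rng.choice([-1, 1])) for _ in range(n)]

def idft(X):
    """Inverse DFT, i.e., the OFDM modulator: subcarriers -> time samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr(x):
    """Peak-to-average power ratio of a block of time-domain samples."""
    p = [abs(v) ** 2 for v in x]
    return max(p) / (sum(p) / len(p))

def clip(x, a):
    """Cap each sample's magnitude at a, preserving its phase."""
    return [v if abs(v) <= a else a * v / abs(v) for v in x]
```

Clipping at, say, 90% of the peak amplitude lowers the PAPR, but the clipped samples no longer match the transmitted constellation, which is the distortion the Equation-Method undoes at the receiver.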
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted by computer automatically. Thereby, a new approach is proposed to estimate model errors based on EM in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.
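In the same spirit, though far simpler than the paper's setup, a (1+1) evolution strategy can recover the amplitude of a periodic model-error term from "observations". The toy ODE, the Euler integrator, and all parameters below are our own illustrative choices.

```python
import math
import random

def evolve_error_term(obs, dt, gens=300, seed=4):
    """(1+1) evolution strategy: search for the amplitude a of a periodic
    model-error term so that dy/dt = -y + a*sin(t), Euler-integrated,
    reproduces the observed series."""
    rng = random.Random(seed)

    def misfit(a):
        y, err = obs[0], 0.0
        for k in range(1, len(obs)):
            y += dt * (-y + a * math.sin(k * dt))
            err += (y - obs[k]) ** 2
        return err

    best, step = 0.0, 0.5
    for _ in range(gens):
        cand = best + rng.gauss(0, step)
        if misfit(cand) < misfit(best):
            best = cand          # keep the improved candidate
        else:
            step *= 0.98         # shrink the search radius on failure
    return best
```

When the observations are generated with the same integrator and a true amplitude of 0.5, the strategy converges to that value, illustrating how historical data can pin down a structural model error.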
Gate errors in solid-state quantum-computer architectures
International Nuclear Information System (INIS)
Hu Xuedong; Das Sarma, S.
2002-01-01
We theoretically consider possible errors in solid-state quantum computation due to the interplay of the complex solid-state environment and gate imperfections. In particular, we study two examples of gate operations in the opposite ends of the gate speed spectrum, an adiabatic gate operation in electron-spin-based quantum dot quantum computation and a sudden gate operation in Cooper-pair-box superconducting quantum computation. We evaluate quantitatively the nonadiabatic operation of a two-qubit gate in a two-electron double quantum dot. We also analyze the nonsudden pulse gate in a Cooper-pair-box-based quantum-computer model. In both cases our numerical results show strong influences of the higher excited states of the system on the gate operation, clearly demonstrating the importance of a detailed understanding of the relevant Hilbert-space structure on the quantum-computer operations
Coordinated joint motion control system with position error correction
Danko, George L.
2016-04-05
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
Non-binary unitary error bases and quantum codes
Energy Technology Data Exchange (ETDEWEB)
Knill, E.
1996-06-01
Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e., fault-tolerant) implementations of certain operations compatible with the error basis.
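A concrete instance of such a non-binary error basis is the shift/clock (generalized Pauli) construction: X|j> = |j+1 mod n> and Z|j> = w^j|j> with w = exp(2*pi*i/n), whose n² products X^a Z^b are unitary and satisfy ZX = w·XZ. A small sketch with hand-rolled matrices (function names are ours):

```python
import cmath

def shift_clock(n):
    """Generalized Pauli matrices over Z_n: the shift X and clock Z.
    Their products X^a Z^b form a unitary error basis of n*n elements."""
    w = cmath.exp(2j * cmath.pi / n)
    X = [[1 if r == (c + 1) % n else 0 for c in range(n)] for r in range(n)]
    Z = [[w ** r if r == c else 0 for c in range(n)] for r in range(n)]
    return X, Z

def matmul(A, B):
    """Plain O(n^3) matrix product for small dense matrices."""
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]
```

For n = 2 these reduce to the familiar bit-flip and sign-change operators; the commutation relation ZX = w·XZ is what makes the basis behave like a higher-dimensional Pauli group.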
Error Mitigation for Short-Depth Quantum Circuits
Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.
2017-11-01
Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which errors are introduced into the computation. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant as possible in current experiments. The first method, extrapolation to the zero-noise limit, cancels powers of the noise perturbation by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
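The first scheme's core step, Richardson extrapolation to the zero-noise limit, amounts to evaluating at zero the interpolating polynomial through expectation values measured at several amplified noise scales; with k scales this cancels the first k-1 powers of the noise strength. A minimal sketch (function name and scale choices are our own):

```python
def richardson_zero_noise(scales, values):
    """Extrapolate expectation values E(c_i * lambda), measured at noise
    amplification factors c_i, to lambda = 0 by evaluating the Lagrange
    interpolating polynomial at 0."""
    est = 0.0
    for i, (ci, Ei) in enumerate(zip(scales, values)):
        w = 1.0
        for j, cj in enumerate(scales):
            if j != i:
                w *= cj / (cj - ci)   # Lagrange basis l_i evaluated at 0
        est += w * Ei
    return est
```

With three scales the extrapolation is exact for any expectation value that is quadratic in the noise strength, which is why low-order noise contributions cancel.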
Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.
Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A
2017-05-01
Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure and then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second-stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring that the study design ensures spatial compatibility, that is, that monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third-trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m³ difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
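The nonparametric bootstrap at the heart of such corrections can be sketched generically: resample subjects with replacement and read off percentile limits for the health-effect estimate. The data, effect size, and helper names below are purely illustrative, not the study's actual exposure model or correction.

```python
import random

def slope(pairs):
    """OLS slope of outcome on exposure for (exposure, outcome) pairs."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    xm = sum(xs) / len(xs)
    ym = sum(ys) / len(ys)
    num = sum((x - xm) * (y - ym) for x, y in pairs)
    return num / sum((x - xm) ** 2 for x in xs)

def bootstrap_ci(pairs, stat, B=2000, seed=3):
    """Nonparametric bootstrap: resample subjects with replacement B times
    and report the 2.5th/97.5th percentiles of the statistic."""
    rng = random.Random(seed)
    n = len(pairs)
    stats = sorted(stat([pairs[rng.randrange(n)] for _ in range(n)])
                   for _ in range(B))
    return stats[int(0.025 * B)], stats[int(0.975 * B)]
```

In the study's two-stage setting the resampling additionally propagates the exposure-model uncertainty; the percentile step itself is the same.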
Error of quantum-logic simulation via vector-soliton collisions
International Nuclear Information System (INIS)
Janutka, Andrzej
2007-01-01
In a concept of simulating the quantum logic with vector solitons by the author (Janutka 2006 J. Phys. A: Math. Gen. 39 12505), the soliton polarization is thought of as a state vector of a system of cebits (classical counterparts of qubits) switched via collisions with other solitons. The advantage of this method of information processing compared to schemes using linear optics is the possibility of the determination of the information-register state in a single measurement. Minimization of the information-processing error for different optical realizations of the logical systems is studied in the framework of a quantum analysis of soliton fluctuations. The problem is considered with relevance to general difficulties of the quantum error-correction schemes for the classical analogies of the quantum-information processing
Gravity induced corrections to quantum mechanical wave functions
International Nuclear Information System (INIS)
Singh, T.P.
1990-03-01
We perform a semiclassical expansion in the Wheeler-DeWitt equation, in powers of the gravitational constant. We then show that quantum gravitational fluctuations can provide a correction to the wave-functions which are solutions of the Schroedinger equation for matter. This also implies a correction to the expectation values of quantum mechanical observables. (author). 6 refs
Information-preserving structures: A general framework for quantum zero-error information
International Nuclear Information System (INIS)
Blume-Kohout, Robin; Ng, Hui Khoon; Poulin, David; Viola, Lorenza
2010-01-01
Quantum systems carry information. Quantum theory supports at least two distinct kinds of information (classical and quantum), and a variety of different ways to encode and preserve information in physical systems. A system's ability to carry information is constrained and defined by the noise in its dynamics. This paper introduces an operational framework, using information-preserving structures, to classify all the kinds of information that can be perfectly (i.e., with zero error) preserved by quantum dynamics. We prove that every perfectly preserved code has the same structure as a matrix algebra, and that preserved information can always be corrected. We also classify distinct operational criteria for preservation (e.g., 'noiseless', 'unitarily correctable', etc.) and introduce two natural criteria for measurement-stabilized and unconditionally preserved codes. Finally, for several of these operational criteria, we present efficient (polynomial in the state-space dimension) algorithms to find all of a channel's information-preserving structures.
[Incidence of refractive errors with corrective aids subsequent selection].
Benes, P; Synek, S; Petrová, S; Sokolová, Sidlová J; Forýtková, L; Holoubková, Z
2012-02-01
This study follows the occurrence of refractive errors in the population and the possible selection of the appropriate type of corrective aids. Objective measurement and subsequent determination of the subjective refraction of the eye is an essential act in optometric practice. The file, represented by 615 patients (1230 eyes), is divided according to the refractive error into myopia and hyperopia, and emmetropic clients are listed as a control group. The results of objective and subjective values of refraction are compared and statistically processed. The study included 615 respondents. To determine the objective refraction, an autorefractokeratometer with a Placido disc was used, and the values of the spherical and astigmatic correction components, including the axis, were recorded. These measurements were subsequently verified and tested subjectively using trial lenses and a projection optotype at the normal investigative distance of 5 meters. After this, the appropriate corrective aids were recommended. Group I consists of 123 men and 195 women with myopia (n = 635), with an average age of 39 +/- 18.9 years. Objective refraction - sphere: -2.57 +/- 2.46 D, cylinder: -1.1 +/- 1.01 D, axis: 100 +/- 53.16 degrees. Subjective results are as follows - sphere: -2.28 +/- 2.33 D, cylinder: -0.63 +/- 0.80 D, axis: 99.8 +/- 56.64 degrees. Group II is represented by hyperopic clients and consists of 67 men and 107 women (n = 348). The average age is 58.84 +/- 16.73 years. Objective refraction - sphere: +2.81 +/- 2.21 D, cylinder: -1.0 +/- 0.94 D, axis: 95 +/- 45.4 degrees. Subsequent determination of subjective refraction has the following results - sphere: +2.28 +/- 2.06 D, cylinder: -0.49 +/- 0.85 D, axis: 95.9 +/- 46.4 degrees. Group III consists of emmetropes whose final minimum visual acuity was Vmin = 1.0 (5/5) or better. Overall, this control group is represented by 52 males and 71 females (n = 247). The average
Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints
Directory of Open Access Journals (Sweden)
Fanqi Meng
2017-12-01
Full Text Available In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach with three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops in the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of the corrected WCET is no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
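The correction step described above amounts to subtracting, from the WCET estimate, the error contributed by each non-orthogonal nested loop found on the worst-case execution path. A minimal sketch with hypothetical data structures (the actual approach operates on a control flow graph and an abstract syntax tree, not flat dictionaries):

```python
# Sketch of the last correction step: subtract the error caused by loose
# loop bound constraints of non-orthogonal nested loops on the WCEP.
# Field names and numbers are illustrative assumptions, not from the paper.

def correct_wcet(wcet_estimate, wcep_loops):
    """Return the corrected WCET after removing loop-bound overestimation."""
    total_error = 0
    for loop in wcep_loops:
        if loop["non_orthogonal"]:
            # error = (loose bound - tight bound) * cost of one loop body
            total_error += (loop["loose_bound"] - loop["tight_bound"]) * loop["body_cycles"]
    return wcet_estimate - total_error

loops = [
    {"non_orthogonal": True,  "loose_bound": 100, "tight_bound": 55, "body_cycles": 10},
    {"non_orthogonal": False, "loose_bound": 20,  "tight_bound": 20, "body_cycles": 4},
]
corrected = correct_wcet(5000, loops)  # 5000 - 45*10 = 4550
```

Only the non-orthogonal loop contributes a correction; the orthogonal one is left untouched, mirroring the paper's restriction of the correction to the non-branching part of the WCEP.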
Quantum money with nearly optimal error tolerance
Amiri, Ryan; Arrazola, Juan Miguel
2017-06-01
We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture approaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.
Distance error correction for time-of-flight cameras
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip makes it possible to acquire a large number of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
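The per-pixel correction step can be sketched with a random forest regressor on synthetic data. The two-feature error model below is an illustrative assumption, not the paper's tailored feature vector:

```python
# Sketch: learn a per-pixel distance correction with a random forest.
# The synthetic error model (linear in distance and reflectivity) is an
# assumption for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
distance  = rng.uniform(0.5, 5.0, 2000)   # raw ToF distance per pixel [m]
amplitude = rng.uniform(0.1, 1.0, 2000)   # proxy for surface reflectivity
error = 0.03 * distance - 0.02 * amplitude + rng.normal(0, 0.002, 2000)

X = np.column_stack([distance, amplitude])
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, error)

# application: estimate a correction value per pixel and subtract it
corrected = distance - forest.predict(X)
true_dist = distance - (0.03 * distance - 0.02 * amplitude)
residual = np.abs(corrected - true_dist).mean()
```

The regressor sidesteps any explicit parametric error model, which is the design choice highlighted in the abstract.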
A new controller for the JET error field correction coils
International Nuclear Information System (INIS)
Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.
2005-01-01
This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller will be highlighted, such as its very useful trigger logic interface, which in principle allows exploring various error field experiment scenarios.
Cause of depth error of borehole logging and its correction
International Nuclear Information System (INIS)
Iida, Yoshimasa; Ikeda, Koki; Tsuruta, Tadahiko; Ito, Hiroaki; Goto, Junichi.
1996-01-01
Data from borehole logging can be used for detailed analysis of geological structures. Depths measured by portable borehole loggers commonly shift a few meters at depths of 400 to 500 meters. Therefore, the cause of the depth error has to be recognized to make proper corrections for detailed structural analysis. Correlation between depths of drill core and in-rod radiometric logging has been performed in detail on exploration drill holes in the Athabasca basin, Canada. As a result, a common tendency of logging depth shift has been recognized, and an empirical formula (quadratic equation) for it has been obtained. The physical meaning of the formula and the cause of the depth error have been considered. (author)
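An empirical quadratic shift model of the kind described can be applied as a simple correction. The coefficients below are hypothetical placeholders, chosen only so the shift is "a few meters" at 400-500 m; the paper fits its own coefficients from the core-versus-logging depth correlation:

```python
# Sketch of a quadratic depth correction: true depth ≈ logged - (a*d^2 + b*d + c).
# Coefficients a, b, c are illustrative assumptions, not the paper's fit.

def corrected_depth(logged_depth, a=1.0e-5, b=2.0e-3, c=0.0):
    """Remove the empirically modeled logging depth shift (all units in meters)."""
    d = logged_depth
    return d - (a * d ** 2 + b * d + c)

# at 500 m the modeled shift is 1e-5*500^2 + 2e-3*500 = 2.5 + 1.0 = 3.5 m
corrected = corrected_depth(500.0)  # 496.5
```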
Random access to mobile networks with advanced error correction
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decision, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots being equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) and his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected given "the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......
PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM
Directory of Open Access Journals (Sweden)
David Kaluge
2017-03-01
Full Text Available This research was to find the effect of profitability, the rate of interest, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1 month) was used to represent the interest rate. This research found that all variables simultaneously affected the stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
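An error correction model of the kind used above can be sketched with the classic two-step procedure (long-run cointegrating regression, then short-run dynamics with the lagged error correction term). The synthetic data and variable names below are illustrative assumptions, not the paper's stock market data:

```python
# Sketch of a two-step error correction model (Engle-Granger style).
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = np.cumsum(rng.normal(size=n))   # integrated regressor (e.g. a fundamental)
y = 2.0 * x + rng.normal(size=n)    # price series cointegrated with x

# step 1: long-run relation y_t = beta * x_t + u_t
beta = np.polyfit(x, y, 1)[0]
ect = y - beta * x                  # error correction term (stationary residual)

# step 2: short-run dynamics  dy_t = gamma * ect_{t-1} + phi * dx_t + eps_t
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([ect[:-1], dx])
gamma, phi = np.linalg.lstsq(Z, dy, rcond=None)[0]
```

A negative `gamma` is the error correction mechanism: deviations from the long-run relation are pulled back toward equilibrium in the next period.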
ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS
Directory of Open Access Journals (Sweden)
K. Jacobsen
2016-06-01
Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and the attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency of systematic deformation at a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
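The leveling supported by a reference height model amounts to fitting a tilt plane to the height differences between the DHM and the reference, then removing it. A minimal sketch on synthetic data (the tilt coefficients and noise level are illustrative assumptions):

```python
# Sketch: level a tilted DHM against a reference surface model (e.g. SRTM)
# by least-squares fitting a plane d = a*x + b*y + c to the differences.
import numpy as np

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
reference = 100 + 2.0 * np.sin(x)                       # reference heights [m]
dhm = reference + 0.8 + 0.3 * x - 0.2 * y \
      + rng.normal(0, 0.05, x.shape)                    # tilted, noisy DHM

d = (dhm - reference).ravel()                           # height differences
A = np.column_stack([x.ravel(), y.ravel(), np.ones(d.size)])
coef, *_ = np.linalg.lstsq(A, d, rcond=None)            # fit tilt plane
leveled = dhm - (A @ coef).reshape(dhm.shape)           # remove it

rmse = np.sqrt(np.mean((leveled - reference) ** 2))     # ~ noise level only
```

After removing the fitted plane, only the random measurement noise remains, which is the "leveling" benefit the abstract attributes to reference height models.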
Efficiently characterizing the total error in quantum circuits
Carignan-Dugas, Arnaud; Wallman, Joel J.; Emerson, Joseph
A promising technological advancement meant to enlarge our computational means is the quantum computer. Such a device would harness the quantum complexity of the physical world in order to unfold concrete mathematical problems more efficiently. However, the errors emerging from the implementation of quantum operations are likewise quantum, and hence share a similar level of intricacy. Fortunately, randomized benchmarking protocols provide an efficient way to characterize the operational noise within quantum devices. The resulting figures of merit, like the fidelity and the unitarity, are typically attached to a set of circuit components. While important, this does not fulfill the main goal: determining whether the error rate of the total circuit is small enough to trust its outcome. In this work, we fill the gap by providing an optimal bound on the total fidelity of a circuit in terms of component-wise figures of merit. Our bound smoothly interpolates between the classical regime, in which the error rate grows linearly in the circuit's length, and the quantum regime, which can naturally allow quadratic growth. Conversely, our analysis substantially improves the bounds on single circuit element fidelities obtained through techniques such as interleaved randomized benchmarking. This research was supported by the U.S. Army Research Office through Grant W911NF-14-1-0103, CIFAR, the Government of Ontario, and the Government of Canada through NSERC and Industry Canada.
Minimum-error discrimination of entangled quantum states
International Nuclear Information System (INIS)
Lu, Y.; Coish, N.; Kaltenbaek, R.; Hamel, D. R.; Resch, K. J.; Croke, S.
2010-01-01
Strategies to optimally discriminate between quantum states are critical in quantum technologies. We present an experimental demonstration of minimum-error discrimination between entangled states, encoded in the polarization of pairs of photons. Although the optimal measurement involves projection onto entangled states, we use a result of J. Walgate et al. [Phys. Rev. Lett. 85, 4972 (2000)] to design an optical implementation employing only local polarization measurements and feed-forward, which performs at the Helstrom bound. Our scheme can achieve perfect discrimination of orthogonal states and minimum-error discrimination of nonorthogonal states. Our experimental results show a definite advantage over schemes not using feed-forward.
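The Helstrom bound that the experiment attains can be computed directly for a pair of pure states. A small sketch (the function and its default equal priors are an illustrative formulation of the standard bound, not code from the paper):

```python
# Sketch: Helstrom bound for discriminating two pure states.
import numpy as np

def helstrom_error(psi, phi, p=0.5):
    """Minimum error probability for discriminating pure states |psi> and |phi>
    prepared with prior probabilities p and 1-p."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap))

h = 1.0 / np.sqrt(2.0)
orthogonal = helstrom_error([1.0, 0.0], [0.0, 1.0])   # perfect discrimination
nonorthogonal = helstrom_error([1.0, 0.0], [h, h])    # irreducible error
```

Orthogonal states give zero error, matching the abstract's remark that the scheme achieves perfect discrimination of orthogonal states and minimum-error discrimination otherwise.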
Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage
Directory of Open Access Journals (Sweden)
Juha Partala
2017-01-01
Full Text Available A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.
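The homomorphic property stated above rests on the linearity of the underlying code: combining codewords over GF(2) gives the codeword of the combined message. A toy illustration (the generator matrix here is an arbitrary small example, not the paper's Goppa-code construction):

```python
# Sketch: linearity of a binary code, the property linear network coding needs.
import numpy as np

# toy generator matrix of a [6,3] binary linear code (illustrative only)
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)

def encode(m):
    """Codeword = m * G over GF(2)."""
    return (np.asarray(m, dtype=np.uint8) @ G) % 2

m1 = np.array([1, 0, 1], dtype=np.uint8)
m2 = np.array([0, 1, 1], dtype=np.uint8)

# a linear (XOR) combination of codewords equals the codeword of the combination
lhs = (encode(m1) + encode(m2)) % 2
rhs = encode((m1 + m2) % 2)
```

In the paper's setting this linearity is what lets storage nodes re-encode ciphertexts without decrypting, since encryption is additively homomorphic over the same field.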
Nuclear Quantum Gravitation - The Correct Theory
Kotas, Ronald
2016-03-01
Nuclear Quantum Gravitation provides a clear, definitive Scientific explanation of Gravity and Gravitation. It is harmonious with Newtonian and Quantum Mechanics, and with distinct Scientific Logic. Nuclear Quantum Gravitation has 10 certain, Scientific proofs and 21 more good indications. With this theory the Physical Forces are obviously Unified. See: OBSCURANTISM ON EINSTEIN GRAVITATION? http://www.santilli-foundation.org/inconsistencies-gravitation.php and Einstein's Theory of Relativity versus Classical Mechanics http://www.newtonphysics.on.ca/einstein/
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
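The gap between a reported average fidelity and the worst-case error rate can be made concrete with the standard conversion from average gate fidelity to process infidelity. The square-root comment in the code paraphrases the paper's scaling result with constants omitted; it is an illustration, not the paper's exact bound:

```python
# Sketch: from average gate fidelity to process infidelity, and an
# order-of-magnitude look at the worst-case error rate (constants omitted).
import math

def process_infidelity(f_avg, d):
    """Standard relation: 1 - F_pro = (d + 1) * (1 - F_avg) / d,
    for a gate on a d-dimensional system."""
    return (d + 1) * (1.0 - f_avg) / d

r = process_infidelity(0.999, 2)   # 0.0015 for a 99.9% single-qubit gate
# Worst-case (diamond-norm) error can be far larger than r: it is only
# bounded by a term scaling like the square root of the infidelity.
worst_case_order = math.sqrt(r)    # roughly 0.039
```

This is why a 99.9% average fidelity does not, by itself, certify the sub-percent worst-case error rates that fault-tolerance thresholds are stated in.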
Error field and its correction strategy in tokamaks
International Nuclear Information System (INIS)
In, Yongkyoon
2014-01-01
While error field correction (EFC) is to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application is to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response against non-axisymmetric field increases with beta increase, feedback-controlled EFC is a more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in the terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of EFC strategy is briefly discussed. (author)
Energy Technology Data Exchange (ETDEWEB)
Xiao, Hailin [Wenzhou University, College of Physics and Electronic Information Engineering, Wenzhou (China); Southeast University, National Mobile Communications Research Laboratory, Nanjing (China); Guilin University of Electronic Technology, Ministry of Education, Key Laboratory of Cognitive Radio and Information Processing, Guilin (China); Zhang, Zhongshan [University of Science and Technology Beijing, Beijing Engineering and Technology Research Center for Convergence Networks and Ubiquitous Services, Beijing (China); Chronopoulos, Anthony Theodore [University of Texas at San Antonio, Department of Computer Science, San Antonio, TX (United States)
2017-10-15
In quantum computing, nice error bases as a generalization of the Pauli basis were introduced by Knill. These bases are known to be projective representations of finite groups. In this paper, we propose a group representation approach to the study of quantum stabilizer codes. We utilize this approach to define decoherence-free subspaces (DFSs). Unlike previous studies of DFSs, this type of DFS does not involve any spatial symmetry assumptions on the system-environment interaction. Thus, it can be used to construct quantum error-avoiding codes (QEACs) that are automatically fault tolerant. We also propose a new simple construction of QEACs and subsequently develop several classes of QEACs. Finally, we present numerical simulation results for the logical error rate as a function of the physical error rate, characterizing the fidelity performance of these QEACs. Our study demonstrates that DFS-based QEACs are capable of providing a generalized and unified framework for error-avoiding methods. (orig.)
Editing disulphide bonds: error correction using redox currencies.
Ito, Koreaki
2010-01-01
The disulphide bond-introducing enzyme of bacteria, DsbA, sometimes oxidizes non-native cysteine pairs. DsbC should rearrange the resulting incorrect disulphide bonds into those with correct connectivity. DsbA and DsbC receive oxidizing and reducing equivalents, respectively, from respective redox components (quinones and NADPH) of the cell. Two mechanisms of disulphide bond rearrangement have been proposed. In the redox-neutral 'shuffling' mechanism, the nucleophilic cysteine in the DsbC active site forms a mixed disulphide with a substrate and induces disulphide shuffling within the substrate part of the enzyme-substrate complex, followed by resolution into a reduced enzyme and a disulphide-rearranged substrate. In the 'reduction-oxidation' mechanism, DsbC reduces those substrates with wrong disulphides so that DsbA can oxidize them again. In this issue of Molecular Microbiology, Berkmen and his collaborators show that a disulphide reductase, TrxP, from an anaerobic bacterium can substitute for DsbC in Escherichia coli. They propose that the reduction-oxidation mechanism of disulphide rearrangement can indeed operate in vivo. An implication of this work is that correcting errors in disulphide bonds can be coupled to cellular metabolism and is conceptually similar to the proofreading processes observed with numerous synthesis and maturation reactions of biological macromolecules.
Quantum loop corrections of a charged de Sitter black hole
Naji, J.
2018-03-01
A charged black hole in de Sitter (dS) space is considered, and logarithmically corrected entropy is used to study its thermodynamics. Logarithmic corrections to the entropy come from thermal fluctuations, which play the role of quantum loop corrections. In that case we are able to study the effect of quantum loops on black hole thermodynamics and statistics. As a black hole is a gravitational object, this helps to obtain some information about quantum gravity. The first and second laws of thermodynamics are investigated for the logarithmically corrected case, and we find that they are only valid for the charged dS black hole. We show that the black hole phase transition disappears in the presence of the logarithmic correction.
Error estimates for discretized quantum stochastic differential inclusions
International Nuclear Information System (INIS)
Ayoola, E.O.
2001-09-01
This paper is concerned with the error estimates involved in the solution of a discrete approximation of a quantum stochastic differential inclusion (QSDI). Our main results rely on certain properties of the averaged modulus of continuity for multivalued sesquilinear forms associated with the QSDI. We obtained results concerning estimates of the Hausdorff distance between the set of solutions of the QSDI and the set of solutions of its discrete approximation. This extends the results of Dontchev and Farkhi concerning classical differential inclusions to the present noncommutative quantum setting involving inclusions in a certain locally convex space. (author)
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING
Directory of Open Access Journals (Sweden)
Ketut Santi Indriani
2015-05-01
Full Text Available The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without giving cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.
International Nuclear Information System (INIS)
Yu Watanabe; Masahito Ueda
2012-01-01
Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the
Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.
2018-01-01
In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on error function measurements. The error map yields an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within a maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
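The postprocessor step amounts to shifting each commanded coordinate by the interpolated value of the measured error map. The sketch below is deliberately simplified to independent per-axis scale errors on a 2D workspace; a real implementation interpolates a full volumetric (3D) map, and all numbers are illustrative assumptions:

```python
# Sketch: correct a commanded position using an interpolated error map
# (hypothetical per-axis scale errors; real maps are measured with a
# laser interferometer over the whole workspace volume).
import numpy as np

grid = np.linspace(0.0, 100.0, 11)    # calibration grid per axis [mm]
err_x = 0.001 * grid                  # measured error along X at grid points
err_y = -0.0005 * grid                # measured error along Y at grid points

def corrected_target(x, y):
    """Postprocessor: shift the commanded position by the interpolated error
    so the tool lands on the nominal point."""
    return (x - np.interp(x, grid, err_x),
            y - np.interp(y, grid, err_y))

cx, cy = corrected_target(50.0, 40.0)   # (49.95, 40.02)
```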
On the quantum corrected gravitational collapse
International Nuclear Information System (INIS)
Torres, Ramón; Fayos, Francesc
2015-01-01
Based on a previously found general class of quantum improved exact solutions composed of non-interacting (dust) particles, we model the gravitational collapse of stars. As the modeled star collapses, a closed apparent 3-horizon is generated due to the consideration of quantum effects. The effect of the subsequent emission of Hawking radiation related to this horizon is taken into consideration. Our computations lead us to argue that a total evaporation could be reached. The inferred global picture of the spacetime corresponding to gravitational collapse is devoid of both event horizons and shell-focusing singularities. As a consequence, there is no information paradox and no need of firewalls.
On the quantum corrected gravitational collapse
Torres, Ramón; Fayos, Francesc
2015-07-01
Based on a previously found general class of quantum improved exact solutions composed of non-interacting (dust) particles, we model the gravitational collapse of stars. As the modeled star collapses a closed apparent 3-horizon is generated due to the consideration of quantum effects. The effect of the subsequent emission of Hawking radiation related to this horizon is taken into consideration. Our computations lead us to argue that a total evaporation could be reached. The inferred global picture of the spacetime corresponding to gravitational collapse is devoid of both event horizons and shell-focusing singularities. As a consequence, there is no information paradox and no need of firewalls.
Directory of Open Access Journals (Sweden)
Nazelie Kassabian
2014-06-01
Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio between the correlation distance and the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
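The LMMSE estimator evaluated in the abstract above can be sketched in a toy form: noisy reference-station DC measurements with a Gauss-Markov (exponential) spatial correlation are combined into an estimate of the true DC at the user location. All geometry, variance and correlation-distance numbers below are illustrative assumptions, not values from the study.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lmmse_dc(stations, user, y, sigma_s2, sigma_n2, d_c):
    """LMMSE estimate of the true DC at `user` from noisy station measurements y."""
    rho = lambda p, q: math.exp(-math.dist(p, q) / d_c)  # Gauss-Markov correlation
    n = len(stations)
    # Covariance of the measurements: signal correlation plus independent noise.
    C = [[sigma_s2 * rho(stations[i], stations[j]) + (sigma_n2 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Cross-covariance between the DC at the user location and each measurement.
    c = [sigma_s2 * rho(user, s) for s in stations]
    w = solve(C, c)                      # LMMSE weight vector
    return sum(wi * yi for wi, yi in zip(w, y))

stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]   # station coordinates, km
est = lmmse_dc(stations, (20.0, 20.0), [1.2, 0.9, 1.1],
               sigma_s2=1.0, sigma_n2=0.25, d_c=100.0)
```

When the correlation distance is large relative to the station separation, the weights spread over all stations and the noise is averaged down, matching the ratio-dependent behaviour the abstract reports.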
Determination and Correction of Persistent Biases in Quantum Annealers
2016-08-25
for all of the qubits. Narrowing of the bias distribution. To show the correctability of the persistent biases, we ran the experiment described above... this is a promising application for bias correction. Importantly, while the J biases determined here are in general smaller than the h biases, numerical... (Scientific Reports 6:18628, DOI: 10.1038/srep18628)
Primordial tensor modes from quantum corrected inflation
DEFF Research Database (Denmark)
Joergensen, Jakob; Sannino, Francesco; Svendsen, Ole
2014-01-01
Finally, we confront these theories with the Planck and BICEP2 data. We demonstrate that the discovery of primordial tensor modes by BICEP2 requires the presence of sizable quantum departures from the $\phi^4$-Inflaton model for the non-minimally coupled scenario, which we parametrize and quantify. We...
Systematic Error of Acoustic Particle Image Velocimetry and Its Correction
Directory of Open Access Journals (Sweden)
Mickiewicz Witold
2014-08-01
Full Text Available Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between the acquisitions of two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model and applied to measurement data, increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.
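The averaging effect behind such a systematic error can be made concrete for a single tone: PIV reports displacement over a finite inter-frame interval, which averages the true velocity, attenuating a sinusoid by sin(x)/x with x = πfΔt and shifting its phase by half the interval. The paper's model addresses general multi-tone signals; the sketch below covers only this elementary single-tone case, which follows exactly from integrating a sinusoid over the interval.

```python
import math

def piv_measured_velocity(f, dt, t, amplitude=1.0):
    """Velocity a PIV system reports for a sinusoidal acoustic velocity of
    frequency f: averaging over the inter-frame interval dt attenuates the
    amplitude by sin(x)/x (x = pi*f*dt) and delays the phase by dt/2."""
    w = 2 * math.pi * f
    x = w * dt / 2
    attenuation = math.sin(x) / x
    return amplitude * attenuation * math.sin(w * (t + dt / 2))

def corrected_amplitude(measured_amp, f, dt):
    """Undo the averaging attenuation for a single tone (the correction idea,
    sketched here in its simplest form)."""
    x = math.pi * f * dt
    return measured_amp * x / math.sin(x)
```

For a band-limited signal the same factor would be applied per frequency bin, which is in the spirit of the frequency-dependent correction the abstract describes.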
Quantum corrections to nonlinear ion acoustic wave with Landau damping
Energy Technology Data Exchange (ETDEWEB)
Mukherjee, Abhik; Janaki, M. S. [Saha Institute of Nuclear Physics, Calcutta (India); Bose, Anirban [Serampore College, West Bengal (India)
2014-07-15
Quantum corrections to a nonlinear ion acoustic wave with Landau damping have been computed using the Wigner equation approach. The dynamical equation governing the time development of the nonlinear ion acoustic wave with semiclassical quantum corrections is shown to have the form of a higher KdV equation, which has higher-order nonlinear terms coming from the quantum corrections, together with the usual classical and quantum-corrected Landau damping integral terms. The conservation of the total number of ions is shown from the evolution equation. The decay rate of the KdV solitary wave amplitude due to the presence of the Landau damping terms has been calculated, assuming the Landau damping parameter α_1 = √(m_e/m_i) to be of the same order as the quantum parameter Q = ℏ²/(24 m² c_s² L²). The amplitude is shown to decay very slowly with time, as determined by the quantum factor Q.
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
Quantum-corrected geometry of horizon vicinity
Energy Technology Data Exchange (ETDEWEB)
Park, I.Y. [Department of Applied Mathematics, Philander Smith College, Little Rock, AR (United States)
2017-12-15
We study the deformation of the horizon-vicinity geometry caused by quantum gravitational effects. Departure from the semi-classical picture is noted, and the fact that the matter part of the action comes at a higher order in Newton's constant than does the Einstein-Hilbert term is crucial for the departure. The analysis leads to a Firewall-type energy measured by an infalling observer for which quantum generation of the cosmological constant is critical. The analysis seems to suggest that the Firewall should be a part of such deformation and that the information be stored both in the horizon-vicinity and asymptotic boundary region. We also examine the behavior near the cosmological horizon. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Energy efficiency of error correcting mechanisms for wireless communications
Havinga, Paul J.M.
We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal
Performance Errors in Weight Training and Their Correction.
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…
A method for optical ground stations to reduce alignment error in satellite-ground quantum experiments
He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei
2018-03-01
A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in satellite-ground quantum experiments. An OGS corrects its pointing direction by feeding the satellite trajectory error to the coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis must therefore ensure that the beacon covers the quantum satellite at all times while it passes over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical system was a commercial telescope, the change of the target position in the coarse CCD was up to 600 μrad over the range of elevation angles. In this paper, a method to reduce the alignment error between the beacon beam and the fine tracking CCD is proposed. First, the OGS fits the curve of target positions in the coarse CCD against elevation angle. Second, the OGS fits the curve of hexapod secondary mirror positions against elevation angle. Third, when tracking the satellite, the fine tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the positions of the hexapod secondary mirror are simultaneously adjusted using the second calibration curve. Finally, experimental results are presented: the alignment error is less than 50 μrad.
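The two calibration curves described above are ordinary least-squares fits against elevation angle. A minimal linear-fit sketch is below; the calibration data are invented for illustration, not OGS measurements.

```python
def fit_line(xs, ys):
    """Least-squares line y = a + b*x (calibration curve, e.g. coarse-CCD
    target offset as a function of elevation angle)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical calibration data: elevation angle (deg) vs target offset (urad).
elev   = [10, 20, 30, 40, 50, 60, 70, 80]
offset = [580, 500, 430, 350, 280, 200, 130, 60]

a, b = fit_line(elev, offset)

def zero_point(elevation_deg):
    """Predicted real-time zero-point correction at a given elevation angle."""
    return a + b * elevation_deg
```

During tracking, the predicted value would be applied as the coarse-CCD zero point at the current elevation; the hexapod mirror curve would be fitted and applied the same way.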
First order error corrections in common introductory physics experiments
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is paid to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.
Quantum gravitational corrections to the functional Schroedinger equation
International Nuclear Information System (INIS)
Kiefer, C.; Singh, T.P.
1990-10-01
We derive corrections to the Schroedinger equation which arise from the quantization of the gravitational field. This is achieved through an expansion of the full functional Wheeler-DeWitt equation with respect to powers of the Planck mass. We demonstrate that the correction terms are independent of the factor ordering which is chosen for the gravitational kinetic term. Although the corrections are numerically extremely tiny, we show how they lead, at least in principle, to shifts in the spectral lines of hydrogen-type atoms. We discuss the significance of these corrections for quantum field theory near the Planck scale. (author). 35 refs
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
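Karect's multiple-alignment machinery is far more elaborate, but its core voting step can be sketched as a per-column majority vote over reads that are assumed to be already aligned (substitutions only; the alignment itself and indel handling are omitted from this sketch).

```python
from collections import Counter

def correct_read(read, aligned_neighbors):
    """Correct substitution errors in `read` by majority vote over a column-wise
    multiple alignment with overlapping reads; gap characters '-' are ignored."""
    corrected = []
    for i, base in enumerate(read):
        column = [r[i] for r in aligned_neighbors if i < len(r) and r[i] != '-']
        column.append(base)
        winner, count = Counter(column).most_common(1)[0]
        # Only override the original base when there is a clear majority.
        corrected.append(winner if count > len(column) // 2 else base)
    return ''.join(corrected)

neighbors = ["ACGTACGT", "ACGTACGT", "ACCTACGT"]
fixed = correct_read("ACGAACGT", neighbors)   # the lone 'A' at position 3 is outvoted
```

High coverage makes the majority reliable; non-uniform coverage, which the paper handles explicitly, would require weighting or thresholding beyond this sketch.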
Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?
Directory of Open Access Journals (Sweden)
Magdenko A. Y.
2014-07-01
Full Text Available The article is devoted to the legal institution of cadastral error. Some questions and problems of cadastral error correction are considered. The material is based on current legislation and judicial practice.
Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE
Directory of Open Access Journals (Sweden)
Patrick SAINT-DIZIER
2015-12-01
Full Text Available In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed, together with evaluation directions.
Beyond WKB quantum corrections to Hamilton-Jacobi theory
International Nuclear Information System (INIS)
Jurisch, Alexander
2007-01-01
In this paper, we develop the quantum mechanics of quasi-one-dimensional systems within the framework of the quantum-mechanical Hamilton-Jacobi theory. We will show that the Schroedinger point of view and the Hamilton-Jacobi point of view are fully equivalent in their description of physical systems, but differ in their descriptive manner. As a main result of this, a wavefunction in Hamilton-Jacobi theory can be decomposed into travelling waves at any point in space, not only asymptotically. Using the quasi-linearization technique, we derive quantum correction functions in every order of ℏ. The quantum correction functions remove the turning-point singularity that plagues the WKB series expansion already in zeroth order, and thus provide an extremely good approximation to the full solution of the Schroedinger equation. In the language of the quantum action it is also possible to elegantly solve the connection problem without asymptotic approximations. The use of the quantum action further allows us to derive an equation by which the Maslov index is directly calculable without any approximations. Stationary quantum trajectories are also considered and thoroughly discussed.
Reed-Solomon error-correction as a software patch mechanism.
Energy Technology Data Exchange (ETDEWEB)
Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2013-11-01
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
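The report's mechanism relies on Reed-Solomon parity data. As a plainly labeled stand-in, the sketch below uses XOR "correction data" plus a digest to show the validate-and-apply idea; this is explicitly not Reed-Solomon, which could in addition locate corrupt regions and keep the recovery data proportional to the number of differing symbols.

```python
import hashlib

def make_patch(old, new):
    """Produce 'correction data' turning `old` into `new` (XOR stand-in for
    Reed-Solomon parity) plus a digest used to validate the updated codebase."""
    assert len(old) == len(new)   # simplification: equal-length codebases
    patch = bytes(a ^ b for a, b in zip(old, new))
    return patch, hashlib.sha256(new).hexdigest()

def apply_patch(installed, patch, digest):
    """Apply correction data to an installed codebase and validate the result."""
    updated = bytes(a ^ b for a, b in zip(installed, patch))
    if hashlib.sha256(updated).hexdigest() != digest:
        raise ValueError("installed codebase does not match the patch baseline")
    return updated

old = b"print('hello v1')"
new = b"print('hello v2')"
patch, digest = make_patch(old, new)
```

As in the report's scheme, the same recovery data both introduces the change and validates that the upstream and installed baselines agree.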
Directory of Open Access Journals (Sweden)
Zbigniew Staroszczyk
2014-12-01
Full Text Available In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits frequency-domain conditioning-path descriptors found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors
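The correction idea amounts to dividing the raw frequency response by the conditioning-path descriptors identified on the reference object. A minimal single-bin sketch, with invented complex responses rather than the paper's measured descriptors:

```python
# Frequency-domain correction: H_object = H_measured / (H_input * H_output),
# where the conditioner responses are identified beforehand on a known reference.
def correct_tf(h_measured, h_in, h_out):
    return [hm / (hi * ho) for hm, hi, ho in zip(h_measured, h_in, h_out)]

# Illustrative single-bin example with complex (magnitude and phase) responses.
h_in  = [1.0 + 0.1j]     # input conditioner response (assumed known)
h_out = [0.9 - 0.05j]    # output conditioner response (assumed known)
h_true = [2.0 + 0.5j]    # transfer function of the investigated object
h_measured = [t * i * o for t, i, o in zip(h_true, h_in, h_out)]

h_est = correct_tf(h_measured, h_in, h_out)
```

Since the division is complex-valued, the procedure corrects both magnitude and phase errors introduced by the conditioning paths, per frequency bin.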
Loop quantum corrected Einstein Yang-Mills black holes
Protter, Mason; DeBenedictis, Andrew
2018-05-01
In this paper, we study the homogeneous interiors of black holes possessing SU(2) Yang-Mills fields subject to corrections inspired by loop quantum gravity. The systems studied possess both magnetic and induced electric Yang-Mills fields. We consider the system of equations both with and without Wilson loop corrections to the Yang-Mills potential. The structure of the Yang-Mills Hamiltonian, along with the restriction to homogeneity, allows for an anomaly-free effective quantization. In particular, we study the bounce which replaces the classical singularity and the behavior of the Yang-Mills fields in the quantum corrected interior, which possesses topology R × S². Beyond the bounce, the magnitude of the Yang-Mills electric field asymptotically grows monotonically. This results in an ever-expanding R sector even though the two-sphere volume is asymptotically constant. The results are similar with and without Wilson loop corrections on the Yang-Mills potential.
GUP parameter from quantum corrections to the Newtonian potential
Energy Technology Data Exchange (ETDEWEB)
Scardigli, Fabio, E-mail: fabio@phys.ntu.edu.tw [Dipartimento di Matematica, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano (Italy); Department of Applied Mathematics, University of Waterloo, Ontario N2L 3G1 (Canada); Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Lambiase, Gaetano, E-mail: lambiase@sa.infn.it [Dipartimento di Fisica “E.R. Caianiello”, Universita' di Salerno, I-84084 Fisciano (Italy); INFN – Gruppo Collegato di Salerno (Italy); Vagenas, Elias C., E-mail: elias.vagenas@ku.edu.kw [Theoretical Physics Group, Department of Physics, Kuwait University, P.O. Box 5969, Safat 13060 (Kuwait)
2017-04-10
We propose a technique to compute the deformation parameter of the generalized uncertainty principle by using the leading quantum corrections to the Newtonian potential. We just assume General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum. With these minimal assumptions our calculation gives, to first order, a specific numerical result. The physical meaning of this value is discussed and compared with the previously obtained bounds on the generalized uncertainty principle deformation parameter.
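For context, the two ingredients the abstract refers to can be written out. Both formulas below are the standard forms from the literature (a quadratic-in-momentum GUP and the one-loop corrected Newtonian potential of Bjerrum-Bohr, Donoghue and Holstein); they are quoted here as an assumption about the conventions used, not taken from the paper itself.

```latex
% Generalized uncertainty principle with deformation parameter \beta
% (m_p is the Planck mass):
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\left[\,1 + \beta \left(\frac{\Delta p}{m_p c}\right)^{2}\right]

% Leading quantum correction to the Newtonian potential (one loop):
V(r) \;=\; -\,\frac{G m_1 m_2}{r}
\left[\,1 + \frac{41}{10\pi}\,\frac{G\hbar}{r^{2} c^{3}}\right]
```

Matching the thermal GUP correction to Hawking's spectrum against the 1/r² quantum term is what fixes a first-order numerical value for β in the paper's approach.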
Error modelling of quantum Hall array resistance standards
Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa
2018-04-01
Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.
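A Monte Carlo evaluation of how wire and contact resistances perturb an array can be sketched for the simplest topology, a plain series chain of quantum Hall elements. The resistance figures below are illustrative assumptions, not parameters of the NMIJ 1 MΩ array, and the paper's circuit-analysis machinery for general networks is not reproduced here.

```python
import random
import statistics

R_K = 25812.807  # von Klitzing constant, ohms; the i = 2 plateau gives R_K/2

def series_array_error(n_elements, wire_mean, wire_sd, trials=2000):
    """Mean and spread of the relative deviation of a series QHE chain caused
    by random wire/contact resistances (each wire simply adds in series)."""
    nominal = n_elements * R_K / 2
    devs = []
    for _ in range(trials):
        wires = sum(random.gauss(wire_mean, wire_sd)
                    for _ in range(n_elements + 1))
        devs.append(wires / nominal)
    return statistics.mean(devs), statistics.stdev(devs)

# 78 series elements of R_K/2 give roughly 1 MOhm; milliohm-level wires assumed.
mean_err, sd_err = series_array_error(78, wire_mean=1e-3, wire_sd=2e-4)
```

For a pure series chain the wire contribution is systematic and could be corrected; the value of the Monte Carlo approach in the paper is that it extends to bridged, multi-connected networks where the effect is not a simple sum.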
Extending Lifetime of Wireless Sensor Networks using Forward Error Correction
DEFF Research Database (Denmark)
Donapudi, S U; Obel, C O; Madsen, Jan
2006-01-01
Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...
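The energy tradeoff between retransmission and forward error correction can be sketched with a toy model: ARQ resends whole packets with an expected 1/(1-PER) attempts, while FEC pays a fixed parity overhead per packet and only rarely retransmits. All numbers below are assumptions for illustration, not measurements from the paper.

```python
def arq_energy(bits, e_bit, per):
    """Expected transmit energy with retransmission only: with packet error
    rate `per`, the expected number of attempts is 1/(1-per)."""
    return bits * e_bit / (1.0 - per)

def fec_energy(bits, e_bit, overhead, residual_per):
    """Expected transmit energy with FEC: fixed parity overhead per packet,
    plus retransmissions for the residual errors FEC cannot correct."""
    return bits * (1.0 + overhead) * e_bit / (1.0 - residual_per)

E_BIT = 1e-9   # joules per transmitted bit (assumed radio figure)
arq = arq_energy(1024 * 8, E_BIT, per=0.30)
fec = fec_energy(1024 * 8, E_BIT, overhead=0.15, residual_per=0.01)
```

Under these assumed figures FEC wins; at low channel error rates the inequality flips, because the parity overhead is paid on every packet, which is exactly the regime-dependent tradeoff at stake in energy-efficient error control.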
Quantum corrections for spinning particles in de Sitter
Energy Technology Data Exchange (ETDEWEB)
Fröb, Markus B. [Department of Mathematics, University of York, Heslington, York, YO10 5DD (United Kingdom); Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu [Departament de Física Quàntica i Astrofísica, Institut de Ciències del Cosmos (ICC), Universitat de Barcelona (UB), C/ Martí i Franquès 1, 08028 Barcelona (Spain)
2017-04-01
We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.
Detecting and correcting partial errors: Evidence for efficient control without conscious access.
Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B
2014-09-01
Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.
Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism
Aurell, Erik
2018-04-01
The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, after which the environment is traced out. The qubit histories are taken to be paths on the two-sphere S^2, as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators, initially in thermal equilibrium and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.
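The linear scaling can be checked numerically in a toy independent-error model (not the correlated influence-functional calculation of the paper): if each qubit independently accumulates error with probability ε per time step, the exact failure probability 1-(1-ε)^{nT} is ≈ nTε for small ε.

```python
def exact_failure(eps, n_qubits, steps):
    """Probability that at least one qubit-step suffers an error,
    assuming independent errors of probability eps per qubit per step."""
    return 1.0 - (1.0 - eps) ** (n_qubits * steps)

def linear_estimate(eps, n_qubits, steps):
    """First-order estimate: total error linear in qubit number and time."""
    return eps * n_qubits * steps

eps, n, t = 1e-6, 50, 1000
exact = exact_failure(eps, n, t)
approx = linear_estimate(eps, n, t)
```

The linear estimate always overstates the exact value slightly, and the two agree to first order whenever nTε is small, which is the regime the abstract's scaling statement concerns.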
Quasinormal Modes of a Quantum-Corrected Schwarzschild Black ...
Indian Academy of Sciences (India)
Chunyan Wang
2017-11-27
In this work, we investigate the electromagnetic perturbation around a quantum-corrected Schwarzschild black hole. The complex frequencies of the quasinormal modes are evaluated by the third-order WKB approximation. The numerical results obtained showed that the complex frequencies ...
Quantum-corrected transient analysis of plasmonic nanostructures
Uysal, Ismail Enes
2017-03-08
A time domain surface integral equation (TD-SIE) solver is developed for quantum-corrected analysis of transient electromagnetic field interactions on plasmonic nanostructures with sub-nanometer gaps. "Quantum correction" introduces an auxiliary tunnel to support the current path generated by electrons tunneling between the nanostructures. The permittivity of the auxiliary tunnel and the nanostructures is obtained from density functional theory (DFT) computations. Electromagnetic field interactions on the combined structure (the nanostructures plus the auxiliary tunnel connecting them) are computed using a TD-SIE solver. Time domain samples of the permittivity and the Green function required by this solver are obtained from their frequency domain samples (generated from DFT computations) using a semi-analytical method. Accuracy and applicability of the resulting quantum-corrected solver scheme are demonstrated via numerical examples.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple-bit (or byte) per chip basis. For example, some 256 Kbit DRAMs are organized as 32 K × 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial.
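The idea of reading error locations directly from the syndrome, rather than iterating, can be illustrated with the simplest syndrome decoder: a Hamming(7,4) lookup table. This binary single-error code is only a stand-in for the byte-oriented, multi-error RS construction in the paper, but the decode path — compute syndrome, index a table, flip — is the same shape of high-speed operation.

```python
# Hamming(7,4): parity-check matrix H. The 3-bit syndrome directly identifies
# the single error position, so correction is one table lookup, no iteration.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

# Precompute syndrome -> error position (the direct decoding table).
TABLE = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    TABLE[syndrome(e)] = pos

def correct(word):
    s = syndrome(word)
    if s == (0, 0, 0):
        return word            # zero syndrome: no error detected
    fixed = word[:]
    fixed[TABLE[s]] ^= 1       # flip the bit the syndrome points at
    return fixed

codeword = [0, 1, 1, 0, 0, 1, 1]   # a valid Hamming(7,4) codeword
```

An RS decoder for the DBEC-TBED code plays the same game over GF(2^8) with byte-valued error magnitudes, which is what the paper's direct technique computes from the syndrome.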
Transfer Error and Correction Approach in Mobile Network
Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou
With the development of information technology and social progress, the human demand for information has become increasingly diverse: wherever and whenever, people want to be able to communicate easily, quickly and flexibly via voice, data, images, video and other means. Visual information gives people a direct and vivid impression, so image/video transmission has also received widespread attention. Although the emergence and rapid development of third-generation mobile communication systems and IP networks are making video communication a main business of wireless communications, real wireless and IP channels introduce errors, such as errors generated by multipath fading in wireless channels and packet loss in IP networks. Due to channel bandwidth limitations, video communication relies on heavily compressed data, and compressed data are very sensitive to errors, so channel errors cause a serious decline in image quality.
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
Opportunistic error correction for MIMO-OFDM: from theory to practice
Shao, X.; Slump, Cornelis H.
Opportunistic error correction based on fountain codes is specifically designed for the MIMO-OFDM system. The key point of this new method is the trade-off between the code rate of the error-correcting codes and the number of sub-carriers in the channel vector to be discarded. By transmitting one
An upper bound on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2000-01-01
The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
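The counting argument behind such bounds parallels the Hamming bound for block codes: correcting t errors requires at least as many distinct syndromes as there are error patterns of weight up to t. A minimal sketch of that block-code version (not the convolutional-code bound itself):

```python
from math import comb

def max_correctable(n, r):
    """Largest t such that all error patterns of weight <= t can have
    distinct syndromes, i.e. sum_{i=0}^{t} C(n, i) <= 2^r,
    for a binary code of length n with r parity-check bits."""
    total, t = 0, -1
    for i in range(n + 1):
        total += comb(n, i)
        if total > 2 ** r:
            break
        t = i
    return t

t_hamming = max_correctable(7, 3)    # Hamming(7,4): single-error correcting
t_golay = max_correctable(23, 11)    # binary Golay(23,12): meets the bound with equality
```

For convolutional codes the same counting is applied per segment of the encoded sequence, with syndrome sequences in place of syndrome vectors.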
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
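The dictionary-of-edits idea can be sketched in the 1D spelling-correction analog the authors invoke. The entries below are hypothetical stand-ins for serialized subgraphs, not the paper's actual edit dictionary, and the greedy loop stands in for its heuristic search:

```python
# hypothetical edit dictionary: erroneous pattern -> corrected version
# (strings stand in for serialized subgraphs of the roof topology graph)
EDIT_DICT = {
    "AB-CD": "AB=CD",  # hypothetical entry: wrong edge type between two faces
    "X--Y": "X-Y",     # hypothetical entry: duplicated edge
}

def correct(graph_str, edit_dict):
    """Repeatedly apply dictionary entries until no erroneous pattern remains.
    The paper applies a heuristic search over sequences of graph edits instead."""
    changed = True
    while changed:
        changed = False
        for bad, good in edit_dict.items():
            if bad in graph_str:
                graph_str = graph_str.replace(bad, good)
                changed = True
    return graph_str

corrected = correct("P:X--Y;AB-CD", EDIT_DICT)
```

As in the paper, the dictionary is extensible: a newly observed error type becomes one more (pattern, fix) entry.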
On lattices, learning with errors, cryptography, and quantum
International Nuclear Information System (INIS)
Regev, O.
2004-01-01
Full Text: Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. Using the main result, we obtain a public-key cryptosystem whose hardness is based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were based only on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively).
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
Pencil kernel correction and residual error estimation for quality-index-based dose calculations
International Nuclear Information System (INIS)
Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael
2006-01-01
Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
Upper bounds on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2004-01-01
We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...
Isotopic quantum correction to liquid methanol at -30 C
Benmore, C J; Egelstaff, P A; Neuefeind, J
2002-01-01
Hydrogen/deuterium (H/D) substitution of molecular liquids in neutron diffraction is a powerful tool for structure determination. However, recent high-energy X-ray studies have found observable differences in the structures of many H and D liquids at the same temperature. In some cases this isotopic quantum effect can be corrected for by measuring the D sample at a slightly different temperature to the H sample. The example of hydroxyl isotopic substitution in liquid methanol at -30 C is presented. The magnitude of the quantum effect is shown to be significant when compared to the size of the first-order isotopic neutron-difference function. (orig.)
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
DEFF Research Database (Denmark)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka
2017-01-01
The classical treatment of plasmonics is insufficient at the nanometer scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions, the Feibelman d parameters, in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; in particular, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated…
Radiation corrections to quantum processes in an intense electromagnetic field
International Nuclear Information System (INIS)
Narozhny, N.B.
1979-01-01
A derivation of an asymptotic expression for the mass correction of order α to the electron propagator in an intense electromagnetic field is presented. It is used for the calculation of radiation corrections to the electron and photon elastic scattering amplitudes in the α³ approximation. All proper diagrams contributing to the amplitudes and containing the above-mentioned correction to the propagator were considered, but not those which include vertex corrections. It is shown that the expansion parameter of the perturbation theory of quantum electrodynamics in intense fields grows not more slowly than αχ^(1/3), at least for the electron amplitude, where χ = [(eF_{μν}p_ν)²]^(1/2)/m³, p is the momentum of the electron, and F is the electromagnetic field tensor.
On quantum corrected Kähler potentials in F-theory
García-Etxebarria, Iñaki; Savelli, Raffaele; Shiu, Gary
2013-01-01
We work out the exact-in-string-coupling and perturbatively exact-in-α′ result for the vector multiplet moduli Kähler potential in a specific N=2 compactification of F-theory. The well-known correction cubic in α′ is absent, but there is a rich structure of corrections at all even orders in α′. Moreover, each of these orders independently displays an SL(2,Z)-invariant set of corrections in the string coupling. This generalizes earlier findings to the case of a non-trivial elliptic fibration. Our results pave the way for the analysis of quantum corrections in the more complicated N=1 context, and may have interesting implications for the study of moduli stabilization in string theory.
GUP parameter from quantum corrections to the Newtonian potential
Directory of Open Access Journals (Sweden)
Fabio Scardigli
2017-04-01
Full Text Available We propose a technique to compute the deformation parameter of the generalized uncertainty principle by using the leading quantum corrections to the Newtonian potential. We assume only General Relativity as the theory of gravitation, and the thermal nature of the GUP corrections to the Hawking spectrum. With these minimal assumptions our calculation gives, to first order, a specific numerical result. The physical meaning of this value is discussed and compared with previously obtained bounds on the deformation parameter of the generalized uncertainty principle.
DEFF Research Database (Denmark)
Shirokov, M. E.; Shulman, Tatiana
2014-01-01
We give a detailed description of a low-dimensional quantum channel (input dimension 4, Choi rank 3) demonstrating the symmetric form of superactivation of one-shot quantum zero-error capacity. This property means the appearance of a noiseless (perfectly reversible) subchannel in the tensor square of a channel having no noiseless subchannels. Then we describe a quantum channel with an arbitrary given level of symmetric superactivation (including the infinite value). We also show that superactivation of one-shot quantum zero-error capacity of a channel can be reformulated in terms of quantum measurement…
Spatially coupled low-density parity-check error correction for holographic data storage
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage, and its superiority was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and error rates above 10⁻¹ can be corrected in simulation. These simulation results indicate that this error correction code can be applied to actual holographic data storage test equipment. Experimental results showed that an error rate of 8 × 10⁻² can be corrected; the code works effectively and shows good error correctability.
Scattering quantum random-walk search with errors
International Nuclear Information System (INIS)
Gabris, A.; Kiss, T.; Jex, I.
2007-01-01
We analyze the realization of a quantum-walk search algorithm in a passive, linear optical network. The specific model enables us to consider the effect of realistic sources of noise and losses on the search efficiency. Photon loss uniform in all directions is shown to lead to the rescaling of search time. Deviation from directional uniformity leads to the enhancement of the search efficiency compared to uniform loss with the same average. In certain cases even increasing loss in some of the directions can improve search efficiency. We show that while we approach the classical limit of the general search algorithm by introducing random phase fluctuations, its utility for searching is lost. Using numerical methods, we found that for static phase errors the averaged search efficiency displays a damped oscillatory behavior that asymptotically tends to a nonzero value
New laws of practice for learning and error correction
International Nuclear Information System (INIS)
Duffey, R.B.
2008-01-01
Relevant to design, operation and safety is the determination of risk and error rates. We provide the detailed comparison of our new learning and statistical theories for system outcome data with the traditional analysis of the learning curves obtained from tests with individual human subjects. The results provide a consistent predictive basis for the learning trends emerging all the way from timescales of many years in large technological system outcomes to actions that occur in about a tenth of a second for individual human decisions. Hence, we demonstrate both the common influence of the human element and the importance of statistical reasoning and analysis. (author)
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
DEFF Research Database (Denmark)
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo
2016-01-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo… radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse…
Scanner qualification with IntenCD based reticle error correction
Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan
2010-03-01
Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance matrices during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance matrices established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT, and maintenance cycles are the imaging, process, and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors, within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line-width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2™ mask inspection tool with IntenCD™ technology can scan the mask at high speed, offers full mask coverage, and accurately assesses all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.
Method and apparatus for optical phase error correction
DeRose, Christopher; Bender, Daniel A.
2014-09-02
The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.
Backtracking dynamics of RNA polymerase: pausing and error correction
Sahoo, Mamata; Klumpp, Stefan
2013-09-01
Transcription by RNA polymerases is frequently interrupted by pauses. One mechanism of such pauses is backtracking, where the RNA polymerase translocates backward with respect to both the DNA template and the RNA transcript, without shortening the transcript. Backtracked RNA polymerases move in a diffusive fashion and can return to active transcription either by diffusive return to the position where backtracking was initiated or by cleaving the transcript. The latter process also provides a mechanism for proofreading. Here we present some exact results for a kinetic model of backtracking and analyse its impact on the speed and the accuracy of transcription. We show that proofreading through backtracking is different from the classical (Hopfield-Ninio) scheme of kinetic proofreading. Our analysis also suggests that, in addition to contributing to the accuracy of transcription, backtracking may have a second effect: it attenuates the slow down of transcription that arises as a side effect of discriminating between correct and incorrect nucleotides based on the stepping rates.
High-speed parallel forward error correction for optical transport networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2010-01-01
This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology....
Highly accurate fluorogenic DNA sequencing with information theory-based error correction.
Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi
2017-12-01
Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.
Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring
Energy Technology Data Exchange (ETDEWEB)
Bunch, S.C.; Holmes, J.
2004-01-01
We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10⁻⁴ in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
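The correction step described above, adjusting corrector kick strengths to minimize the sum of squared BPM signals, becomes a linear least-squares problem once the orbit response matrix is set up. A toy sketch with made-up dimensions and random numbers (the real system has 88 BPMs and 52 correctors, and a physical response matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy orbit response matrix: BPM reading per unit corrector kick
# (8 BPMs, 4 correctors; values are random stand-ins for a real lattice model)
R = rng.normal(size=(8, 4))
x = rng.normal(size=8)  # measured closed-orbit distortion at the BPMs

# corrector kicks minimizing the predicted residual ||x + R k||^2
k, *_ = np.linalg.lstsq(R, -x, rcond=None)
residual = x + R @ k    # orbit left after correction (in the linear limit)
```

This is the "signals add linearly" limit mentioned in the abstract; with more correctors than needed, the same framework can also localize bumps, as the three-bump behavior illustrates.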
Investigation of Ionospheric Spatial Gradients for Gagan Error Correction
Chandra, K. Ravi
In India, the Indian Space Research Organisation (ISRO) was established with the objective of developing space technology and applying it to various national tasks. These tasks include the establishment of major space systems such as the Indian National Satellites (INSAT) for communication, television broadcasting, and meteorological services, and the Indian Remote Sensing (IRS) satellites. Apart from these, to cater to the needs of civil aviation applications, the GPS Aided Geo Augmented Navigation (GAGAN) system is being implemented jointly with the Airports Authority of India (AAI) over the Indian region. The most predominant parameter affecting the navigation accuracy of GAGAN is the ionospheric delay, which is a function of the total number of electrons in a cylindrical column of one-square-metre cross section along the line of sight between the satellite and the user on the earth, i.e. the Total Electron Content (TEC). In equatorial and low-latitude regions such as India, TEC is often quite high, with large spatial gradients. Carrier-phase data from the GAGAN network of Indian TEC stations is used for estimating and identifying ionospheric spatial gradients in multiple viewing directions. In this paper, vertical ionospheric gradients (σ_VIG) are calculated from satellite signals arriving in multiple directions, and spatial ionospheric gradients are identified in turn. In addition, estimated temporal gradients, i.e. the rate of TEC index, are also compared. These error contributions can be treated for improved GAGAN system performance.
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses.The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
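The k-mer counting idea behind methods like KEC can be sketched as follows. This toy version (hypothetical sequences, naive single-substitution search, no homopolymer modeling) only illustrates the principle that k-mers covering a sequencing error are rare while correct k-mers are frequent:

```python
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, threshold=2):
    # a read is "solid" if every k-mer in it occurs frequently in the dataset
    def solid(r):
        return all(counts[r[i:i + k]] >= threshold
                   for i in range(len(r) - k + 1))
    if solid(read):
        return read
    # try single-base substitutions until the read becomes solid
    for i in range(len(read)):
        for b in "ACGT":
            if b == read[i]:
                continue
            cand = read[:i] + b + read[i + 1:]
            if solid(cand):
                return cand
    return read  # give up; leave the read uncorrected

reference = "ACGTGCATTAGG"          # hypothetical true amplicon sequence
erroneous = "ACGTGCCTTAGG"          # one substitution error at position 6
reads = [reference] * 5 + [erroneous]
counts = kmer_counts(reads, 4)
fixed = correct_read(erroneous, counts, 4)
```

Real amplicon correctors additionally calibrate thresholds per position and homopolymer context, which is exactly the sequence-specific behavior the abstract emphasizes.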
Gold price effect on stock market: A Markov switching vector error correction approach
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a popular precious metal whose demand is driven not only by practical use but also by its popularity as an investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behaviour is the interest of this study. Markov switching vector error correction models are applied to analyse the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps, or missing data through time. Because there are numerous specifications of Markov switching vector error correction models, this paper compares the intercept-adjusted model and the intercept-adjusted heteroskedastic model to determine which best captures the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai, and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedastic Markov switching vector error correction model provides more significant and reliable results than the intercept-adjusted Markov switching vector error correction model.
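The Markov-switching machinery is beyond a short snippet, but the error-correction mechanism these models extend can be sketched with a plain two-step Engle-Granger regression on synthetic cointegrated series (all data and parameters below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
z = np.cumsum(rng.normal(size=T))                 # shared stochastic trend
gold = z + rng.normal(scale=0.5, size=T)          # synthetic "gold price"
stock = 2.0 * z + rng.normal(scale=0.5, size=T)   # synthetic "stock index"

# step 1: cointegrating regression  stock_t = b * gold_t + u_t
b = np.polyfit(gold, stock, 1)[0]
u = stock - b * gold            # error-correction term: deviation from equilibrium

# step 2: error-correction regression  d(stock)_t = a * u_{t-1} + e_t
d_stock = np.diff(stock)
a = np.polyfit(u[:-1], d_stock, 1)[0]  # a < 0 means deviations are corrected over time
```

A Markov switching VECM lets the coefficients (and, in the heteroskedastic variant, the error variance) differ across latent regimes, with regime transitions governed by a Markov chain.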
DeCesare, A; Secanell, M; Lagravère, M O; Carey, J
2013-01-01
The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
Backtracking dynamics of RNA polymerase: pausing and error correction
International Nuclear Information System (INIS)
Sahoo, Mamata; Klumpp, Stefan
2013-01-01
Transcription by RNA polymerases is frequently interrupted by pauses. One mechanism of such pauses is backtracking, where the RNA polymerase translocates backward with respect to both the DNA template and the RNA transcript, without shortening the transcript. Backtracked RNA polymerases move in a diffusive fashion and can return to active transcription either by diffusive return to the position where backtracking was initiated or by cleaving the transcript. The latter process also provides a mechanism for proofreading. Here we present some exact results for a kinetic model of backtracking and analyse its impact on the speed and the accuracy of transcription. We show that proofreading through backtracking is different from the classical (Hopfield–Ninio) scheme of kinetic proofreading. Our analysis also suggests that, in addition to contributing to the accuracy of transcription, backtracking may have a second effect: it attenuates the slow down of transcription that arises as a side effect of discriminating between correct and incorrect nucleotides based on the stepping rates. (paper)
CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD
Directory of Open Access Journals (Sweden)
BUSUIOCEANU STELIANA
2013-08-01
Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.
Activation of zero-error classical capacity in low-dimensional quantum systems
Park, Jeonghoon; Heo, Jun
2018-06-01
Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.
Considerations for pattern placement error correction toward 5nm node
Yaegashi, Hidetami; Oyama, Kenichi; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Koike, Kyohei; Maslow, Mark John; Timoshkov, Vadim; Kiers, Ton; Di Lorenzo, Paolo; Fonseca, Carlos
2017-03-01
Multi-patterning has been widely adopted in high-volume manufacturing as an extension of 193 nm immersion lithography, and it has become a realistic solution for nanometre-order scaling. In fact, it is a key technology for single-directional (1D) layout design [1] in logic devices, and it is a major option for further scaling through SAQP. The requirements on patterning-fidelity control are becoming ever more severe, and stochastic fluctuation as well as LER (line edge roughness) must be observed at the microscopic level. In our previous work, such atomic-order controllability proved viable with a complementary technique of etching and deposition [2]. Overlay issues form a major portion of yield management, so a complete solution is keenly needed, including alignment accuracy on the scanner and detectability on overlay-measurement instruments. Since EPE (edge placement error) is defined as the gap between the design pattern and the contour of the actual pattern edge, pattern registration at the single-process level must also be considered. Complementary patterning to fabricate 1D layouts does mitigate process restrictions, but the multiple process steps, exemplified by LELE with 193-i, are a burden on yield management and affordability. Recent progress in EUV technology is remarkable, and it is a major potential solution for these complicated technical issues. EUV has a robust resolution limit and is definitely a strong scaling driver for process simplification. On the other hand, its stochastic variation, such as shot noise due to limited light-source power, must be resolved with an additional complementary technique. In this work, we examine nanometre-order CD and profile control of EUV resist patterns and present the results achieved.
A median filter approach for correcting errors in a vector field
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
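The median-filter approach described above can be sketched as follows: each vector is compared with the componentwise median of its neighbours, flagged as erroneous when it deviates strongly, and replaced with that median. The 3×3 window and deviation threshold below are illustrative assumptions, not values from the paper.

```python
# Sketch of median-filter error detection and replacement on a gridded vector
# field (u, v): flag a vector as erroneous when either component deviates from
# the median of its 8-neighbourhood by more than a threshold, then replace it
# with the componentwise neighbourhood median.
from statistics import median

def median_correct(u, v, threshold=2.0):
    """u, v: 2-D lists of vector components on a regular grid."""
    rows, cols = len(u), len(u[0])
    cu = [row[:] for row in u]
    cv = [row[:] for row in v]
    for i in range(rows):
        for j in range(cols):
            nu, nv = [], []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di or dj) and 0 <= i + di < rows and 0 <= j + dj < cols:
                        nu.append(u[i + di][j + dj])
                        nv.append(v[i + di][j + dj])
            mu, mv = median(nu), median(nv)
            # flag and replace vectors far from the neighbourhood median
            if abs(u[i][j] - mu) > threshold or abs(v[i][j] - mv) > threshold:
                cu[i][j], cv[i][j] = mu, mv
    return cu, cv
```

A single spurious vector in an otherwise smooth field is detected and replaced, while smooth regions pass through unchanged.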
Mermin, N. David
2007-08-01
Preface; 1. Cbits and Qbits; 2. General features and some simple examples; 3. Breaking RSA encryption with a quantum computer; 4. Searching with a quantum computer; 5. Quantum error correction; 6. Protocols that use just a few Qbits; Appendices; Index.
Quantum Gravity corrections and entropy at the Planck time
International Nuclear Information System (INIS)
Basilakos, Spyros; Vagenas, Elias C.; Das, Saurya
2010-01-01
We investigate the effects of Quantum Gravity on the Planck era of the universe. In particular, using different versions of the Generalized Uncertainty Principle and under specific conditions we find that the main Planck quantities such as the Planck time, length, mass and energy become larger by a factor of order 10–10⁴ compared to those quantities which result from the Heisenberg Uncertainty Principle. However, we prove that the dimensionless entropy enclosed in the cosmological horizon at the Planck time remains unchanged. These results, though preliminary, indicate that we should anticipate modifications in the set-up of cosmology, since changes in the Planck era will be inherited even by the late universe through the framework of Quantum Gravity (or Quantum Field Theory), which utilizes the Planck scale as a fundamental one. More importantly, these corrections will not affect the entropic content of the universe at the Planck time, which is a crucial element for one of the basic principles of Quantum Gravity, namely the Holographic Principle.
Quantum entanglement in non-local games, graph parameters and zero-error information theory
Scarpa, G.
2013-01-01
We study quantum entanglement and some of its applications in graph theory and zero-error information theory. In Chapter 1 we introduce entanglement and other fundamental concepts of quantum theory. In Chapter 2 we address the question of how much quantum correlations generated by entanglement can
Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors
Francois-Éric Racicot; Raymond Théoret; Alain Coen
2006-01-01
In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models for measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
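The nonparametric bootstrap standard error, one of the approaches compared above, can be sketched in a few lines: resample the data with replacement, recompute the estimate on each resample, and take the standard deviation of the replicates. The function name and replicate count are illustrative.

```python
# Sketch of the nonparametric bootstrap standard error: the spread of the
# estimator across resamples (drawn with replacement) estimates its sampling
# variability without a parametric variance formula.
import random
import statistics

def bootstrap_se(data, estimator, n_boot=1000, seed=0):
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        replicates.append(estimator(resample))
    return statistics.stdev(replicates)
```

For the sample mean, the bootstrap standard error should approximate the analytic value sigma/sqrt(n), which is a useful sanity check before applying the same machinery to a TSRI estimator.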
Quantum corrections for the cubic Galileon in the covariant language
Energy Technology Data Exchange (ETDEWEB)
Saltas, Ippocratis D. [Institute of Astrophysics and Space Sciences, Faculty of Sciences, Campo Grande, PT1749-016 Lisboa (Portugal); Vitagliano, Vincenzo, E-mail: isaltas@fc.ul.pt, E-mail: vincenzo.vitagliano@ist.utl.pt [Multidisciplinary Center for Astrophysics and Department of Physics, Instituto Superior Técnico, University of Lisbon, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-05-01
We present for the first time an explicit exposition of quantum corrections within the cubic Galileon theory including the effect of quantum gravity, in a background- and gauge-invariant manner, employing the field-reparametrisation approach of the covariant effective action at 1-loop. We show that the consideration of gravitational effects in combination with the non-linear derivative structure of the theory reveals new interactions at the perturbative level, which manifest themselves as higher-order operators in the associated effective action, whose relevance is controlled by appropriate ratios of the cosmological vacuum and the Galileon mass scale. The significance and concept of the covariant approach in this context are discussed, while all calculations are explicitly presented.
Separation of attractors in 1-modulus quantum corrected special geometry
Bellucci, S; Marrani, A; Shcherbakov, A
2008-01-01
We study the solutions to the N=2, d=4 Attractor Equations in a dyonic, extremal, static, spherically symmetric and asymptotically flat black hole background, in the simplest case of perturbative quantum corrected cubic Special Kahler geometry consistent with continuous axion-shift symmetry, namely in the 1-modulus Special Kahler geometry described (in a suitable special symplectic coordinate) by the holomorphic Kahler gauge-invariant prepotential F=t^3+i*lambda, with lambda real. By performing computations in the "magnetic" charge configuration, we find evidence for interesting phenomena (absent in the classical limit of vanishing lambda). Namely, for a certain range of the quantum parameter lambda we find a "splitting" of attractors, i.e. the existence of multiple solutions to the Attractor Equations for fixed supporting charge configuration. This corresponds to the existence of "area codes" in the radial evolution of the scalar t, determined by the various disconnected regions of the moduli space, wh...
NxRepair: error correction in de novo sequence assembly using Nextera mate pairs
Directory of Open Access Journals (Sweden)
Rebecca R. Murphy
2015-06-01
Full Text Available Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …
Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness
International Nuclear Information System (INIS)
Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.
2011-01-01
Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields differ due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of the internal (I) coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
Quantum corrections to conductivity in graphene with vacancies
Araujo, E. N. D.; Brant, J. C.; Archanjo, B. S.; Medeiros-Ribeiro, G.; Alves, E. S.
2018-06-01
In this work, different regions of a graphene device were exposed to a 30 keV helium ion beam, creating a series of alternating strips of vacancy-type defects and pristine graphene. From magnetoconductance measurements as a function of temperature, carrier density and strip density, we show that the electron-electron interaction is important to explain the logarithmic quantum corrections to the Drude conductivity in graphene with vacancies. It is known that vacancies in graphene behave as local magnetic moments that interact with the conduction electrons and lead to a logarithmic correction to the conductance through the Kondo effect. However, our work shows that it is necessary to account for the non-homogeneity of the sample to avoid misinterpretations of the Kondo physics due to the difficulties in separating the electron-electron interaction from the Kondo effect.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Directory of Open Access Journals (Sweden)
Jorge Mauricio Reyes Alcalde
2017-04-01
Full Text Available Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the produced differences (residuals or errors) are white noise. In practice, however, these numerical simulations usually show not only random errors but also systematic errors. In this work a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of CCM to a PBM shows a decrease in local biases, a better distribution of errors and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology seems an interesting way to update a PBM while avoiding the work and cost of interfering with its internal structure.
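The complementary-correction idea can be illustrated in miniature: fit a simple external model to the PBM residuals at observation points and subtract the predicted systematic error from the simulated values. The straight-line error model below is an illustrative assumption, not the paper's CCM.

```python
# Toy illustration of external (complementary) correction of a model's
# systematic error: model the residuals as a simple function of the simulated
# values and subtract the predicted error, without touching the model itself.
def fit_line(x, y):
    """Ordinary least squares fit y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def complementary_correct(simulated, observed):
    residuals = [s - o for s, o in zip(simulated, observed)]
    a, b = fit_line(simulated, residuals)   # model the systematic error
    return [s - (a + b * s) for s in simulated]
```

When the model's error really is systematic (here a bias plus a scale factor), the external correction removes it entirely; purely random residuals would be left largely untouched.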
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, while the value before quadrature error correction is 38 °/h. The CSC method is shown to be the best method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
SimCommSys: taking the errors out of error-correcting code simulations
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra
2016-12-01
The medication iatrogenic risk is largely unevaluated in neonatology. Objective: Assessment of errors that occurred during the preparation and administration of injectable medicines in a neonatal unit, in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with the summaries of product characteristics (RCP) and the bibliography. One hundred preparations of 13 different drugs were observed. 85 errors were detected during the preparation and administration steps. These errors were divided into preparation errors in 59% of cases, such as changing the dilution protocol (32%) and the use of the wrong solvent (11%), and administration errors in 41% of cases, such as errors in the timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the stages of preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions have been implemented through the establishment of a quality assurance system, which consisted of the development of injectable drug preparation procedures, the introduction of a labeling system and staff training.
Improving transcriptome assembly through error correction of high-throughput sequence reads
Directory of Open Access Journals (Sweden)
Matthew D. MacManes
2013-07-01
Full Text Available The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on its construction. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show via use of simulated and empirical datasets that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile-corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.
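The kind of k-mer-spectrum read correction that tools such as Reptile perform can be sketched in miniature: k-mers seen fewer than a cutoff number of times are treated as likely sequencing errors, and a base is changed only if the edited read consists entirely of trusted k-mers. The value of k, the cutoff and the single-substitution search below are simplifying assumptions, not Reptile's actual algorithm.

```python
# Toy k-mer-spectrum read correction: count all k-mers across the reads, call
# a k-mer "trusted" when it occurs at least `cutoff` times, and repair a read
# by trying single-base substitutions until every k-mer in it is trusted.
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, cutoff=2):
    def trusted(seq):
        return all(counts[seq[i:i + k]] >= cutoff
                   for i in range(len(seq) - k + 1))
    if trusted(read):
        return read
    for i in range(len(read)):
        for base in "ACGT":
            if base != read[i]:
                cand = read[:i] + base + read[i + 1:]
                if trusted(cand):
                    return cand
    return read  # no single-base fix found; leave the read unchanged
```

A read carrying one sequencing error contains a run of rare k-mers around the error position, and substituting the erroneous base restores an all-trusted read.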
Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors
Directory of Open Access Journals (Sweden)
Pham Thuy Dung
2016-12-01
Full Text Available The recent yet powerful emergence of e-learning and the use of online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correcting mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy regarding collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed–Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques are presented for extended single- and double-error-correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error locator polynomial and solve for its roots.
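For a single-error-correcting RS-style code, reading the error location and value straight off the syndrome can be illustrated concretely. The sketch below works over GF(256) with an assumed parity-check structure giving syndromes S0 = sum of symbols and S1 = sum of symbols weighted by powers of a primitive element; it illustrates the direct-decoding idea rather than the authors' exact extended-RS construction.

```python
# Direct single-error correction over GF(256): for an error of value e at
# position i, the two syndromes are S0 = e and S1 = e * a^i, so the location
# is log(S1/S0) and the value is S0 itself -- no locator polynomial needed.
exp = [0] * 512          # antilog table, doubled to skip modular reduction
log = [0] * 256
x = 1
for i in range(255):
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0x100:        # reduce by primitive polynomial x^8+x^4+x^3+x^2+1
        x ^= 0x11d
for i in range(255, 512):
    exp[i] = exp[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[log[a] + log[b]]

def syndromes(word):
    s0, s1 = 0, 0
    for j, c in enumerate(word):
        s0 ^= c
        s1 ^= gf_mul(c, exp[j % 255])   # c * a^j
    return s0, s1

def correct_single_error(word):
    s0, s1 = syndromes(word)
    if s0 == 0 and s1 == 0:
        return list(word)               # zero syndrome: no error detected
    pos = (log[s1] - log[s0]) % 255     # error location from ratio S1/S0
    fixed = list(word)
    fixed[pos] ^= s0                    # error value is S0 itself
    return fixed
```

The whole correction is a table lookup and one XOR, which is exactly why this style of decoder is fast enough for memory applications.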
ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES
Directory of Open Access Journals (Sweden)
Maria Corazon Saturnina A Castro
2017-10-01
Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: How do teachers' perspectives on corrective feedback match the students' views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions between teachers and students regarding corrective feedback. These oppositions must be addressed, as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even at the tertiary level, and further understanding the learning process.
Biometrics encryption combining palmprint with two-layer error correction codes
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprints with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded with two-layer convolutional and cyclic coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation. The resulting information is stored on a smart card. Finally, to extract the original keys, the information on the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
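The XOR key-binding step can be sketched as a toy "fuzzy commitment". Assumptions: a 3× repetition code stands in for the paper's convolutional/cyclic two-layer code, and the palmprint feature vector is modeled as a random bit list.

```python
import random

def encode(key_bits):
    """3x repetition encode: each key bit becomes three code bits."""
    return [b for b in key_bits for _ in range(3)]

def decode(code_bits):
    """Majority vote per 3-bit group; corrects one flip per group."""
    return [int(sum(code_bits[i:i + 3]) >= 2) for i in range(0, len(code_bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(1)
key      = [random.randint(0, 1) for _ in range(8)]   # randomly generated key
template = [random.randint(0, 1) for _ in range(24)]  # enrollment palmprint bits

locked = xor(encode(key), template)                   # stored on the smart card

# A fresh scan differs from the enrollment template in a few bits;
# XOR with the stored data followed by decoding still recovers the key
# as long as the errors stay within the code's correction capacity.
fresh = template[:]
fresh[5] ^= 1
fresh[17] ^= 1
recovered = decode(xor(locked, fresh))
print(recovered == key)  # -> True
```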
Is a genome a codeword of an error-correcting code?
Directory of Open Access Journals (Sweden)
Luzinete C B Faria
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
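The underlying codeword test reduces to a syndrome check against a Hamming parity-check matrix. The sketch below illustrates that check on plain binary words; the papers' DNA-to-bit mapping is their own contribution and is not reproduced here.

```python
# A binary word w is a Hamming(7,4) codeword iff H @ w = 0 (mod 2).
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the
# binary representation of j+1, so a nonzero syndrome names the
# flipped position directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def is_codeword(w):
    """True iff the syndrome H @ w vanishes mod 2."""
    return not np.any(H @ np.array(w) % 2)

print(is_codeword([1, 0, 1, 0, 1, 0, 1]))  # -> True  (valid codeword)
print(is_codeword([1, 0, 1, 0, 1, 0, 0]))  # -> False (one bit flipped)
```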
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution, and therefore may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in computational cost.
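One generic detection-and-correction mechanism of the kind suggested for matrix-shaped data structures is a row/column checksum scheme: a single corrupted element shows up as one inconsistent row sum and one inconsistent column sum, and can be restored from the row checksum. This is an illustrative scheme, not the authors' actual implementation.

```python
import numpy as np

def protect(m):
    """Record row and column checksums for later scrubbing."""
    return m.sum(axis=1), m.sum(axis=0)

def scrub(m, row_sums, col_sums):
    """Locate a single corrupted element and restore it from the row checksum."""
    r_bad = np.where(~np.isclose(m.sum(axis=1), row_sums))[0]
    c_bad = np.where(~np.isclose(m.sum(axis=0), col_sums))[0]
    if r_bad.size and c_bad.size:
        r, c = r_bad[0], c_bad[0]                # intersection pinpoints the element
        m[r, c] += row_sums[r] - m[r].sum()      # restore the original value
    return m

matrix = np.arange(12, dtype=float).reshape(3, 4)
clean = matrix.copy()
rows, cols = protect(matrix)

matrix[1, 2] = 999.0                             # simulated silent data corruption
scrub(matrix, rows, cols)
print(np.array_equal(matrix, clean))  # -> True
```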
Ciliates learn to diagnose and correct classical error syndromes in mating strategies.
Clark, Kevin B
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
Short-term wind power combined forecasting based on error forecast correction
International Nuclear Information System (INIS)
Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng
2016-01-01
Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
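The error-forecast-correction idea can be sketched on a synthetic series: the base forecaster's errors are themselves predicted from their own lags and added back as a correction. Plain least squares stands in for the paper's support vector machine/extreme learning machine error models, and the smooth sine series is illustrative only.

```python
import numpy as np

t = np.arange(300)
power = np.sin(t / 20.0)                 # smooth synthetic "wind power" series
persist = power[:-1]                     # persistence forecast for steps 1..299
errors = power[1:] - persist             # its one-step-ahead forecast errors

# Error-forecast model: regress e_t on its own two lags over a
# training window (least squares in place of the SVM/ELM learners).
X = np.column_stack([errors[1:-1], errors[:-2]])
y = errors[2:]
coef, *_ = np.linalg.lstsq(X[:200], y[:200], rcond=None)

# Combined forecast = persistence + predicted error, on held-out steps.
pred_err = X[200:] @ coef
corrected = persist[202:] + pred_err
truth = power[203:]

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rmse(truth - corrected) < rmse(truth - persist[202:]))  # -> True
```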
Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne
2018-03-01
When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias and is robust to the original sampling design and to whether or not the measurement error variance is constant, has received limited attention. However, it faces challenges that are not present when handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
2009-01-01
This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably …
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Error Free Quantum Reading by Quasi Bell State of Entangled Coherent States
Hirota, Osamu
2017-12-01
Nonclassical states of the light field have been exploited to provide marvellous results in quantum information science. The usefulness of nonclassical states in quantum information science depends on whether a physical parameter as a signal is continuous or discrete. Here we present an investigation of the potential of quasi Bell states of entangled coherent states in quantum reading of a classical digital memory, which was pioneered by Pirandola (Phys. Rev. Lett. 106, 090504, 2011). This is a typical example of discrimination of discrete quantum parameters. We show that the quasi Bell state gives error-free performance in quantum reading that cannot be obtained with any classical state.
Capacity estimation and verification of quantum channels with arbitrarily correlated errors.
Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie
2018-01-02
The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of these protocols is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.
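The flavor of such capacity estimates can be illustrated with the standard asymptotic hashing-type lower bound Q ≥ 1 − h(e_x) − h(e_z), computed from the bit-flip and phase-flip error rates measured in the two preparation bases. This is an illustrative textbook bound, not the paper's finite-size one-shot estimate.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_lower_bound(e_bit, e_phase):
    """Hashing-type lower bound on quantum capacity from two error rates."""
    return 1.0 - h(e_bit) - h(e_phase)

# A dephasing-like channel has phase errors only; at an 11% phase-flip
# rate the bound sits close to 0.5 qubits per channel use.
print(capacity_lower_bound(0.0, 0.11))
```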
Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-09-01
For multi-channel measuring instruments with both the classical structure and the isolated one, the major factors behind their errors are identified on the basis of a general analysis of their metrological properties. The limiting possibilities of the remote automatic method for correcting additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested and their metrological properties during automatic error adjustment are analysed. It was experimentally established that the unadjusted error does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for proper verification.
Fringe order error in multifrequency fringe projection phase unwrapping: reason and correction.
Zhang, Chunwei; Zhao, Hong; Zhang, Lu
2015-11-10
A multifrequency fringe projection phase unwrapping algorithm (MFPPUA) is important to fringe projection profilometry, especially when a discontinuous object is measured. However, a fringe order error (FOE) may occur when MFPPUA is adopted. An FOE will result in error to the unwrapped phase. Although this kind of phase error does not spread, it brings error to the eventual 3D measurement results. Therefore, an FOE or its adverse influence should be obviated. In this paper, reasons for the occurrence of an FOE are theoretically analyzed and experimentally explored. Methods to correct the phase error caused by an FOE are proposed. Experimental results demonstrate that the proposed methods are valid in eliminating the adverse influence of an FOE.
Influence of rotation and FLR corrections on selfgravitational Jeans instability in quantum plasma
International Nuclear Information System (INIS)
Jain, Shweta; Sharma, Prerana; Chhajlani, R K
2014-01-01
In the present work, the self-gravitational instability of quantum plasma is investigated including the effects of finite Larmor radius (FLR) corrections and rotation. The formulation employs the quantum magnetohydrodynamic (QMHD) model. Plane wave solutions are applied to the linearized perturbed QMHD set of equations to obtain the general dispersion relation. The rotation is assumed to be only along the z-direction. The general dispersion relation is further reduced for the transverse and longitudinal directions of propagation. It is found that in the transverse direction of propagation the Jeans criterion is modified by rotation, FLR and quantum corrections, while in the longitudinal direction of propagation the Jeans criterion is modified by quantum corrections only. The growth rate of the perturbation is discussed numerically, including the FLR and quantum correction parameters, and is observed to be modified significantly by these effects.
How EFL students can use Google to correct their “untreatable” written errors
Directory of Open Access Journals (Sweden)
Luc Geiller
2014-09-01
This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several “untreatable” written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback leads to more grammatical accuracy. In her response to Truscott (1996), Ferris (1999) explains that it would be unreasonable to abolish correction given the present state of knowledge, and that further research needed to focus on which types of errors were more amenable to which types of error correction. In her attempt to respond more effectively to her students’ errors, she made the distinction between “treatable” and “untreatable” ones: the former occur in “a patterned, rule-governed way” and include problems with verb tense or form, subject-verb agreement, run-ons, noun endings, articles, and pronouns, while the latter include a variety of lexical errors and problems with word order and sentence structure, including missing and unnecessary words. Substantial research on the use of search engines as a tool for L2 learners has been carried out, suggesting that the web plays an important role in fostering language awareness and learner autonomy (e.g. Shei 2008a, 2008b; Conroy 2010). According to Bathia and Richie (2009: 547), “the application of Google for language learning has just begun to be tapped.” Within the framework of this study it was assumed that the students, conversant with digital technologies and using Google and the web on a regular basis, could use various search options and the search results to self-correct their errors instead of relying on their teacher to provide direct feedback. After receiving some in-class training on how to formulate Google queries, the students were asked to use a customized Google search engine limiting searches to 28 information websites to correct up to
International Nuclear Information System (INIS)
Kim, Y.P.
1982-01-01
The sensational Three Mile Island nuclear power plant accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, and NRC, have researched the lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms: determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information used to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions.
Kromhout, D.
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the
Some errors in respirometry of aquatic breathers: How to avoid and correct for them
DEFF Research Database (Denmark)
STEFFENSEN, JF
1989-01-01
Respirometry in closed and flow-through systems is described, with the objective of pointing out the problems and sources of error involved and how to correct for them. Both closed respirometry applied to resting and active animals and intermittent-flow respirometry are described. In addition, flow
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate
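A generic error-correction regression of the kind such models build on can be sketched on synthetic cointegrated data (not the authors' hierarchical Bayes estimator, and the long-run relation y = 2x is taken as known). The adjustment coefficient on the lagged disequilibrium term should come out negative: deviations from the long-run relation are pulled back.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = np.cumsum(rng.normal(size=n))        # common stochastic trend (e.g. regular price)
y = 2 * x + rng.normal(size=n)           # cointegrated response (e.g. sales)

dy = np.diff(y)                          # short-run changes
dx = np.diff(x)
ect = (y - 2 * x)[:-1]                   # lagged error-correction term (disequilibrium)

# Regress dy_t on dx_t and the lagged disequilibrium.
X = np.column_stack([dx, ect, np.ones(n - 1)])
(gamma, alpha, const), *_ = np.linalg.lstsq(X, dy, rcond=None)
print(alpha < 0)  # -> True: deviations from the long-run relation are corrected
```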
Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping
Á. Piedrafita (Álvaro); J.M. Renes (Joseph)
2017-01-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
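The Poisson-based correction referenced in the title can be motivated as follows: under a Poisson model, an interval of length d contains at least one event with probability 1 − exp(−rate·d), so the scored proportion p converts back to a rate estimate via −ln(1 − p)/d. The simulation below is a hedged sketch of that logic, not the study's exact procedure.

```python
import math
import random

random.seed(0)
rate, d, n = 0.5, 1.0, 20000              # true events/unit time, interval length, trials

# Simulate partial-interval recording: score 1 if any event occurred
# in the interval (probability 1 - exp(-rate*d) under a Poisson model).
scored = [1 if random.random() < 1 - math.exp(-rate * d) else 0 for _ in range(n)]
p = sum(scored) / n

uncorrected = p / d                        # raw partial-interval estimate (biased low here)
corrected = -math.log(1 - p) / d           # Poisson-transformed estimate
print(abs(corrected - rate) < abs(uncorrected - rate))  # -> True
```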
Allam, Amin; Kalnis, Panos; Solovyev, Victor
2015-01-01
accurate than previous methods, both in terms of correcting individual-bases errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
Czech Academy of Sciences Publication Activity Database
Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.
2013-01-01
Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188
Fast high resolution ADC based on the flash type with a special error correcting technique
Energy Technology Data Exchange (ETDEWEB)
Xiao-Zhong, Liang; Jing-Xi, Cao [Beijing Univ. (China). Inst. of Atomic Energy
1984-03-01
A fast 12-bit ADC based on the flash type, with a simple special error-correcting technique that can effectively compensate for the level drift of the discriminators and the droop of the stretcher voltage, is described. The DNL is comparable with that of the Wilkinson ADC, and the long-term drift is far better.
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy
International Nuclear Information System (INIS)
Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock
2005-01-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle
Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models
Hallin, M.; van den Akker, R.; Werker, B.J.M.
2012-01-01
Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
Retesting the Limits of Data-Driven Learning: Feedback and Error Correction
Crosthwaite, Peter
2017-01-01
An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…
The dynamics of entry, exit and profitability: an error correction approach for the retail industry
M.A. Carree (Martin); A.R. Thurik (Roy)
1994-01-01
We develop a two-equation error-correction model to investigate determinants of and dynamic interaction between changes in profits and number of firms in retailing. An explicit distinction is made between the effects of actual competition among incumbents, new-firm competition and
Universal corrections to entanglement entropy of local quantum quenches
Energy Technology Data Exchange (ETDEWEB)
David, Justin R.; Khetrapal, Surbhi [Centre for High Energy Physics, Indian Institute of Science,C.V. Raman Avenue, Bangalore 560012 (India); Kumar, S. Prem [Department of Physics, Swansea University,Singleton Park, Swansea SA2 8PP (United Kingdom)
2016-08-22
We study the time evolution of single interval Rényi and entanglement entropies following local quantum quenches in two dimensional conformal field theories at finite temperature for which the locally excited states have a finite temporal width ϵ. We show that, for local quenches produced by the action of a conformal primary field, the time dependence of Rényi and entanglement entropies at order ϵ² is universal. It is determined by the expectation value of the stress tensor in the replica geometry and proportional to the conformal dimension of the primary field generating the local excitation. We also show that in CFTs with a gravity dual, the ϵ² correction to the holographic entanglement entropy following a local quench precisely agrees with the CFT prediction. We then consider CFTs admitting a higher spin symmetry and turn on a higher spin chemical potential μ. We calculate the time dependence of the order ϵ² correction to the entanglement entropy for small μ, and show that the contribution at order μ² is universal. We verify our arguments against exact results for minimal models and the free fermion theory.
Links between N-modular redundancy and the theory of error-correcting codes
Bobin, V.; Whitaker, S.; Maki, G.
1992-01-01
N-modular redundancy (NMR) is one of the best known fault-tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code, where an information symbol is replicated as parity symbols in a codeword. Linear error-correcting codes (ECCs) use linear combinations of information symbols as parity symbols, which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECCs and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECCs.
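The NMR/repetition-code analogy can be made concrete for N = 3: pairwise disagreements between module outputs act as a syndrome that both selects the majority value and identifies the failed module. This is a schematic sketch, with modules modeled as plain output values.

```python
def tmr(outputs):
    """Triple modular redundancy: return (voted_output, failed_module_or_None)."""
    a, b, c = outputs
    # "Syndrome": which pairs of module outputs disagree.
    s = (a != b, b != c, a != c)
    if s == (False, False, False):
        return a, None                # all modules agree
    if s == (True, False, True):      # a disagrees with both others
        return b, 0
    if s == (True, True, False):      # b is the odd one out
        return a, 1
    return a, 2                       # c is the odd one out

print(tmr([5, 5, 5]))  # -> (5, None)
print(tmr([9, 5, 5]))  # -> (5, 0)   voted value plus failed-module index
print(tmr([5, 5, 9]))  # -> (5, 2)
```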
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-06-01
Full Text Available At the design stage the uncertainty approach cannot be used, because measurement results do not yet exist; instead, the error approach can be successfully applied, taking the nominal transformation function of the instrument as true. The limiting possibilities of additive error correction of measuring instruments for cyber-physical systems are studied on the basis of general and special measurement methods. Principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. For a variety of correction methods, it is theoretically justified that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for in-place self-calibration and verification of the measuring instruments are also studied.
Correction method for the error of diamond tool's radius in ultra-precision cutting
Wang, Yi; Yu, Jing-chi
2010-10-01
Compensating the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditionally, compensation was based on measurement results from a profilometer, which required long measurement times and caused low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is deduced. Then, the effect of compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and correction-turned on a Nanotech 250. Surface testing achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, confirming that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high-quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read-only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in the contiguity of the resulting de novo assemblies. The improved throughput and computational efficiency relative to existing methods will help make better economic use of emerging long read sequencing technologies.
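The k-mer-support idea behind such hybrid correction can be sketched as follows. This is not FMLRC's implementation: a plain dictionary of short-read k-mer counts stands in for the multi-string BWT/FM-index queries, and the greedy single-base fix and `min_support` threshold are invented for the example.

```python
# Illustrative sketch only: FMLRC queries a multi-string BWT/FM-index for
# k-mer frequencies; here a Counter of short-read k-mers stands in for
# those index queries. Thresholds and the greedy fix are invented.
from collections import Counter

def kmer_counts(short_reads, k):
    c = Counter()
    for r in short_reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def correct_long_read(read, counts, k, min_support=2):
    """Greedily replace the last base of unsupported k-mers with the
    best-supported alternative, a simplified form of hybrid correction."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = ''.join(read[i:i + k])
        if counts[kmer] >= min_support:
            continue                      # k-mer supported by short reads
        best = max('ACGT', key=lambda b: counts[kmer[:-1] + b])
        if counts[kmer[:-1] + best] >= min_support:
            read[i + k - 1] = best        # corrected base feeds later k-mers
    return ''.join(read)

shorts = ["ACGTAC", "CGTACG", "GTACGT"] * 3
fixed = correct_long_read("ACGTACGA", kmer_counts(shorts, 4), 4)
print(fixed)  # -> ACGTACGT
```

A real FM-index answers the same count queries in compressed space over the full short-read set, which is what makes the approach scale.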
Neural Network Based Real-time Correction of Transducer Dynamic Errors
Roj, J.
2013-12-01
In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity using the state variables. It is shown that such real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, emphasizing its fundamental advantages and disadvantages.
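As a purely numerical stand-in for the network (which learns this inversion without knowing the parameters), the correction can be illustrated directly: if the transducer obeys a2·y'' + a1·y' + a0·y = x, the input is recovered from the measured output by differentiation. All coefficients below are assumed for the sketch.

```python
# Numerical stand-in (not the paper's neural network): reconstruct the input
# x of a second-order transducer a2*y'' + a1*y' + a0*y = x from its output y.
import numpy as np

a2, a1, a0 = 1e-4, 2e-2, 1.0        # assumed transducer parameters
dt = 1e-4
t = np.arange(0, 0.5, dt)
x_true = np.sin(2 * np.pi * 5 * t)  # input signal to be recovered

# Simulate the transducer output y (semi-implicit Euler integration).
y = np.zeros_like(t)
v = 0.0
for i in range(1, t.size):
    acc = (x_true[i - 1] - a1 * v - a0 * y[i - 1]) / a2
    v += acc * dt
    y[i] = y[i - 1] + v * dt

# Dynamic error correction: reconstruct x from the measured response y.
dy = np.gradient(y, dt)
d2y = np.gradient(dy, dt)
x_est = a2 * d2y + a1 * dy + a0 * y

err = np.max(np.abs(x_est[100:-100] - x_true[100:-100]))
print(err < 0.05)
```

The recurrent network in the paper effectively realizes the same inversion with linear perceptrons, without requiring a2, a1, a0 explicitly.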
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the method for computing the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system was implemented to evaluate the performance of the technique. The experimental results show that the correction technique effectively attenuates the spurious spurs and improves the dynamic performance of the system.
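The compensation idea can be sketched without the FIR derivation: interpolate the recorded samples at their actual (skewed) instants with a cubic spline, then evaluate on the ideal uniform grid. SciPy's `CubicSpline` stands in for the paper's FIR filter; the tone frequency and timing skew below are invented.

```python
# Hedged sketch: cubic-spline resampling to undo a channel sample-time skew
# in a two-channel time-interleaved ADC (stand-in for the paper's FIR filter).
import numpy as np
from scipy.interpolate import CubicSpline

n = np.arange(256)
delta = 0.1                    # channel-1 sample-time error (fraction of Ts)
f = 0.053                      # test-tone frequency, cycles per sample
sig = lambda t: np.sin(2 * np.pi * f * t)

t_ideal = n.astype(float)      # ideal uniform sampling instants (Ts = 1)
t_actual = t_ideal.copy()
t_actual[1::2] += delta        # odd samples (channel 1) arrive late
samples = sig(t_actual)        # what the interleaved ADC actually records

# Correction: interpolate the samples at their true instants, then resample
# on the ideal uniform grid.
corrected = CubicSpline(t_actual, samples)(t_ideal)

err_before = np.max(np.abs(samples - sig(t_ideal)))
err_after = np.max(np.abs(corrected[2:-2] - sig(t_ideal)[2:-2]))
print(err_before > 0.02, err_after < 1e-3)
```

In hardware the same interpolation is folded into fixed FIR coefficients per channel, which is what makes the correction cheap at 4 GS/s.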
Bias correction of bounded location errors in presence-only data
Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.
2017-01-01
Location error occurs when the true location differs from the reported location. Because habitat characteristics at the true location may differ from those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships. We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrarily shaped polygons. We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen-contributed records whose locations were reported with error. We also illustrate the COS with a simulation experiment. In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but that a COS can efficiently correct for the bias.
A two-dimensional matrix correction for off-axis portal dose prediction errors
International Nuclear Information System (INIS)
Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.
2013-01-01
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-to-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Applying the matrix correction to the off-axis test fields and clinical fields improves predicted vs measured portal dose agreement by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass-rate improvements of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
International Nuclear Information System (INIS)
Yeon, Kyu Hwang; Hong, Suc Kyoung; Um, Chung In; George, Thomas F.
2006-01-01
With quantum operators corresponding to functions of the canonical variables, Schroedinger equations are constructed for systems corresponding to classical systems connected by a general point canonical transformation. Using the operator connecting quantum states of the systems before and after the transformation, the quantum correction term and the ordering parameter are obtained.
Effect of FLR correction on Rayleigh -Taylor instability of quantum and stratified plasma
International Nuclear Information System (INIS)
Sharma, P.K.; Tiwari, Anita; Argal, Shraddha; Chhajlani, R.K.
2013-01-01
The Rayleigh-Taylor instability of stratified incompressible fluids is studied in the presence of finite Larmor radius (FLR) corrections and quantum effects in a bounded medium. The quantum magnetohydrodynamic equations of the problem are solved using the normal mode analysis method. A dispersion relation is derived for the case where the plasma is bounded by two rigid planes z = 0 and z = h. The dispersion relation is obtained in dimensionless form to discuss the growth rate of the Rayleigh-Taylor instability in the presence of FLR corrections and quantum effects. The stabilizing or destabilizing behavior of the quantum effect and the FLR correction on the Rayleigh-Taylor instability is analyzed. (author)
Quantum information processing
National Research Council Canada - National Science Library
Leuchs, Gerd; Beth, Thomas
2003-01-01
Table of contents (excerpt): 1.5 Simulation of Hamiltonians; References; 2 Quantum Information Processing and Error Correction with Jump Codes (G. Alber, M. Mussinger, ...)
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSNs) explores energy-efficient wireless communication schemes between multiple sensors and the data-gathering node (DGN) by exploiting multiple-input multiple-output (MIMO) and multiple-input single-output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low-density parity-check (LDPC) code is used as the error-correcting code. The rate of the LDPC code is varied by varying the lengths of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of the LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
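A full LDPC simulation is beyond a short sketch, so as a stand-in the following Hamming(7,4) example shows the encode/corrupt/syndrome-decode mechanics that the proposed C-MIMO scheme builds on, with LDPC codes in place of this toy code.

```python
# Stand-in example (not LDPC): systematic Hamming(7,4) encoding, a single
# channel bit flip, and syndrome decoding that locates and fixes the error.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix [I4 | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix [P^T | I3]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])
code = msg @ G % 2
rx = code.copy()
rx[2] ^= 1                               # flip one bit on the channel

syn = H @ rx % 2                         # nonzero syndrome flags an error
if syn.any():
    # The syndrome equals the column of H at the flipped position.
    pos = int(np.where((H.T == syn).all(axis=1))[0][0])
    rx[pos] ^= 1

print((rx[:4] == msg).all())             # -> True, message recovered
```

LDPC decoding replaces this exhaustive syndrome lookup with iterative belief propagation over a sparse H, which is what gives the near-capacity performance the abstract relies on.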
Energy Technology Data Exchange (ETDEWEB)
Hyakutake, Yoshifumi [Faculty of Science, Ibaraki University,Bunkyo 2-1-1, Mito, Ibaraki, 310-8512 (Japan)
2015-09-11
We take into account higher derivative R⁴ corrections in M-theory and construct quantum black hole and black string solutions in 11 dimensions up to next-to-leading order. The quantum black string stretches along the 11th direction, and the Gregory-Laflamme instability is examined at the quantum level. Thermodynamics of the boosted quantum black hole and black string are also discussed. In particular, we take the near-horizon limit of the quantum black string and investigate its instability quantitatively.
Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina
2014-04-01
To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.
Error analysis of motion correction method for laser scanning of moving objects
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only limited literature is available, describing very few methods that address the problem of object motion during scanning, and all of the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus yielding correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into the optimal utilization of available components for achieving the best results.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower residual result. For quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO₄·5H₂O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
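The multi-pass fitting idea can be illustrated on synthetic data (peak positions, widths, and noise level are invented): a naive per-window decomposition leaves a structured residual, and feeding that estimate into a further joint curve fit lowers the residual, in the spirit of the error-compensation step.

```python
# Sketch of residual-driven refitting of overlapping peaks (synthetic data,
# Gaussian line shapes assumed; not the paper's exact procedure).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def two_peaks(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

x = np.linspace(321, 327, 400)
rng = np.random.default_rng(1)
y = two_peaks(x, 1.0, 323.0, 0.35, 0.6, 324.0, 0.45) + rng.normal(0, 0.01, x.size)

# Pass 1: naive decomposition, each peak fitted alone on its own window.
left, right = x < 323.5, x >= 323.5
pa, _ = curve_fit(gauss, x[left], y[left], p0=[1.0, 323.0, 0.4])
pb, _ = curve_fit(gauss, x[right], y[right], p0=[0.6, 324.0, 0.4])
resid1 = y - gauss(x, *pa) - gauss(x, *pb)

# Pass 2 (compensation): refit jointly, initialized from pass 1, so the
# residual information is fed back into a further curve-fitting step.
p2, _ = curve_fit(two_peaks, x, y, p0=[*pa, *pb])
resid2 = y - two_peaks(x, *p2)

print(np.linalg.norm(resid2) < np.linalg.norm(resid1))  # residual shrinks
```

The second pass removes the cross-talk the windowed fits double-count in the overlap region, which is exactly where the single-pass residual is structured.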
International Nuclear Information System (INIS)
Gelman, David; Schwartz, Steven D.
2010-01-01
The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.
Energy Technology Data Exchange (ETDEWEB)
Kang, Soo Man [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Busan (Korea, Republic of)
2008-09-15
To reduce side effects in image-guided radiation therapy (IGRT), to improve patients' quality of life, and to meet accurate setup conditions for patients, various setup correction conditions were compared and evaluated using the on-board imager (OBI) during setup. Thirty cases each of the head, neck, chest, abdomen, and pelvis among 150 IGRT patients were corrected after confirmation using the OBI every 2-3 days. The difference between setup via skin markers and anatomic setup via the OBI was also evaluated. General setup errors (transverse, coronal, sagittal) measured with the OBI at the original setup position were: head and neck, 1.3 mm; brain, 2 mm; chest, 3 mm; abdomen, 3.7 mm; pelvis, 4 mm. For patients with errors of more than 3 mm, the correction devices and patient motion were checked in the treatment room. Moreover, in female patients, part of the error arose from the position of the hair during head-and-neck and brain-tumor treatments. Therefore, in each case with an error over 3 mm, the setup was repeated before treatment was carried out. Mean error values for each region estimated after correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results show that correcting the setup for each treatment through the OBI is extremely demanding, given the importance of setup in radiation treatment. However, by establishing average standards for patients from these results, better patient satisfaction and treatment outcomes can be obtained.
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of these parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is important to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling, which was applied to the LHC, is described. It resulted in a decrease of the chromatic coupling...
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
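The core regression-calibration intuition, stripped of the survival-model machinery used in the paper, is that replicate measurements identify the error variance and hence the attenuation factor. A minimal linear-model sketch with synthetic data (all numbers invented):

```python
# Minimal regression-calibration intuition (linear stand-in, not the paper's
# Cox-model method): replicates estimate the measurement-error variance, and
# the naive slope is de-attenuated by the reliability ratio.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
true_m = rng.normal(0, 1, n)                  # true mediator value
w1 = true_m + rng.normal(0, 0.5, n)           # two error-prone replicates
w2 = true_m + rng.normal(0, 0.5, n)
y = 1.5 * true_m + rng.normal(0, 1, n)        # outcome (true slope 1.5)

naive = np.cov(w1, y)[0, 1] / np.var(w1)      # regression on the noisy measure
err_var = 0.5 * np.var(w1 - w2)               # replicate-based error variance
lam = (np.var(w1) - err_var) / np.var(w1)     # reliability (attenuation) ratio
corrected = naive / lam

print(abs(naive - 1.5) > 0.15)                # attenuated toward zero
print(abs(corrected - 1.5) < 0.1)             # calibration recovers the slope
```

The paper's contribution is extending this logic to induced hazard functions, where the simple ratio correction no longer applies directly.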
Gravitational quantum corrections in warped supersymmetric brane worlds
International Nuclear Information System (INIS)
Gregoire, T.; Rattazzi, R.; Scrucca, C.A.; Strumia, A.; Trincherini, E.
2005-01-01
We study gravitational quantum corrections in supersymmetric theories with warped extra dimensions. For this we develop a superfield formalism for linearized gauged supergravity. We show that the 1-loop effective Kähler potential is a simple functional of the KK spectrum in the presence of generic localized kinetic terms at the two branes. We also present a simple understanding of our results by showing that the leading matter effects are equivalent to suitable displacements of the branes. We then apply this general result to compute the gravity-mediated universal soft mass m₀² in models where the visible and the hidden sectors are sequestered at the two branes. We find that the contributions coming from radion mediation and brane-to-brane mediation are both negative in the minimal set-up, but the former can become positive if the gravitational kinetic term localized at the hidden brane has a sizable coefficient. We then compare the features of the two extreme cases of flat and very warped geometry, and give an outlook on the building of viable models.
IMPACT OF TRADE OPENNESS ON OUTPUT GROWTH: CO INTEGRATION AND ERROR CORRECTION MODEL APPROACH
Directory of Open Access Journals (Sweden)
Asma Arif
2012-01-01
Full Text Available This study analyzed the long-run relationship between trade openness and output growth for Pakistan using annual time-series data for 1972-2010. It follows the Engle-Granger cointegration analysis and error correction approach to analyze the long-run relationship between the two variables. The error correction term (ECT) for output growth and trade openness is significant at the 5% level and indicates a positive long-run relationship between the variables. The study also analyzed the causality between trade openness and output growth using the Granger causality test. The results show a significant bi-directional relationship between trade openness and economic growth.
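The Engle-Granger two-step procedure used in the study can be sketched on synthetic data (the actual 1972-2010 Pakistani series are not reproduced here; all numbers below are simulated):

```python
# Toy Engle-Granger two-step error-correction sketch with synthetic data.
import numpy as np

rng = np.random.default_rng(42)
T = 200
openness = np.cumsum(rng.normal(0, 1, T))          # I(1) regressor
growth = 0.8 * openness + rng.normal(0, 0.5, T)    # cointegrated with it

# Step 1: long-run (cointegrating) regression of growth on openness.
X = np.column_stack([np.ones(T), openness])
beta = np.linalg.lstsq(X, growth, rcond=None)[0]
ect = growth - X @ beta                            # error-correction term

# Step 2: ECM, regress d(growth) on d(openness) and the lagged ECT.
dg, do = np.diff(growth), np.diff(openness)
Z = np.column_stack([np.ones(T - 1), do, ect[:-1]])
gamma = np.linalg.lstsq(Z, dg, rcond=None)[0]

print(abs(beta[1] - 0.8) < 0.05)  # long-run coefficient recovered
print(gamma[2] < 0)               # negative ECT coefficient: reversion
```

The negative coefficient on the lagged ECT is the signature the abstract reports: short-run disequilibria are pulled back toward the long-run relation.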
Directory of Open Access Journals (Sweden)
Mahmudul Mannan Toy
2011-01-01
Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality, and vector error correction are applied to estimate the export supply model. The econometric analysis uses time-series data on the variables of interest, collected from various secondary sources. The study empirically tested the hypotheses of a long-run relationship and causality between the variables of the model. The cointegration analysis shows that all variables of the study are cointegrated at their first differences, meaning that a long-run relationship exists among them. The VECM estimation shows the dynamics of the variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will return to equilibrium in the long run.
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however, prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate), we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Effects of systematic phase errors on optimized quantum random-walk search algorithm
International Nuclear Information System (INIS)
Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun
2015-01-01
This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)
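As a simplified stand-in for the optimized quantum random-walk search, the effect of systematic phase errors can be seen already in Grover's algorithm, the paper's comparison baseline. In the sketch below the oracle's π phase inversion is detuned by δ while the diffusion step stays exact; the database size and error value are invented for the demonstration.

```python
# Hedged illustration: Grover search with a systematic phase error delta in
# the oracle's phase inversion (diffusion kept exact). Stand-in for the
# paper's quantum random-walk analysis, not its algorithm.
import numpy as np

def grover_success(n_qubits, delta):
    """Success probability after the usual ~(pi/4)*sqrt(N) iterations."""
    N = 2 ** n_qubits
    iters = int(round(np.pi / 4 * np.sqrt(N)))
    psi = np.full(N, 1 / np.sqrt(N), dtype=complex)  # uniform start state
    s = np.full(N, 1 / np.sqrt(N), dtype=complex)    # |s> for diffusion
    for _ in range(iters):
        psi[0] *= np.exp(1j * (np.pi + delta))   # imperfect phase inversion
        psi = 2 * s * (s.conj() @ psi) - psi     # exact diffusion 2|s><s|-I
    return abs(psi[0]) ** 2

print(grover_success(8, 0.0) > 0.95)   # near-certain success without error
print(grover_success(8, 0.3) < 0.9)    # systematic phase error degrades it
```

The mismatched phases detune the amplitude-amplification rotation, capping the achievable success probability; the paper's point is that the optimized quantum random-walk search tolerates such detuning better.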
Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.
Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold
2002-02-01
The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction, a mechanism that implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor system shows nonlinear behavior. Copyright 2001 Elsevier Science (USA).
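A toy version of such a nonlinear error-correction loop (parameters invented, not the authors' fitted model): the tap-metronome asynchrony is corrected by a saturating tanh feedback plus timing noise, and stays bounded near zero.

```python
# Toy nonlinear error-correction loop for tap-metronome asynchronies.
# The saturating (tanh) feedback stands in for the paper's nonlinear
# correction term; alpha, beta and the noise level are invented.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, noise = 1.2, 8.0, 0.01

e = 0.2                       # initial asynchrony (fraction of the period)
traj = []
for _ in range(500):
    e = e - (alpha / beta) * np.tanh(beta * e) + rng.normal(0, noise)
    traj.append(e)
traj = np.asarray(traj)

print(np.abs(traj[100:]).max() < 0.1)  # asynchronies stay bounded near zero
```

Because the correction saturates for large errors, the deterministic map can support unstable periodic orbits, which is the structure the paper probes experimentally.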
Haptic Data Processing for Teleoperation Systems: Prediction, Compression and Error Correction
Lee, Jae-young
2013-01-01
This thesis explores haptic data processing methods for teleoperation systems, including prediction, compression, and error correction. In the proposed haptic data prediction method, unreliable network conditions, such as time-varying delay and packet loss, are detected by a transport layer protocol. Given the information from the transport layer, a Bayesian approach is introduced to predict position and force data in haptic teleoperation systems. Stability of the proposed method within stoch...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal
2013-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d=2, then w = Θ(n (log n / log log n)²); 2) if d=3, then w...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal
2012-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^{Ω(n)} → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d=2, then w = Θ(n (log n / log log n)²). (2) If d...
Directory of Open Access Journals (Sweden)
Christian NZENGUE PEGNET
2011-07-01
Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in amplifying macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance-sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
Khairul Jauhari; Achmad Widodo; Ismoyo Haryanto
2015-01-01
In this article, the capability of correcting the radial displacement error of a high-precision grinding spindle caused by unbalance force was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearings. Finite element methods (FEM) have been adopted for obtaining the equation of motion of the spindle. In this paper, firstly, natural frequencies, critical frequencies, and the amplitude of the unbalance response caused by resi...
Directory of Open Access Journals (Sweden)
Rosa M. Manchón
2010-06-01
Full Text Available Framed in a cognitively oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.
Directory of Open Access Journals (Sweden)
Sarunya Kanjanawattana
2017-01-01
Full Text Available Extracting graph information clearly benefits readers who are interested in interpreting graphs, because significant information presented in the graph can be obtained. A typical tool used to transform image-based characters into computer-editable characters is optical character recognition (OCR). Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This is a serious problem because misrecognition conveys misleading information to readers and hampers communication. In this study, we present a novel method for OCR-error correction on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used a graph component extraction proposed in our previous study to omit irrelevant parts from graph components; it was applied to clean and prepare input data for this OCR-error correction. The main objectives of this paper are to extract significant information from the graph using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance, with the highest accuracies and F-measures. Moreover, we found that our input data contained less noise because of the efficiency of our graph component extraction. Based on this evidence, we conclude that our solution to the OCR problem achieves the objectives.
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
International Nuclear Information System (INIS)
Rota Kops, Elena; Herzog, Hans
2013-01-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternately one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled
Energy Technology Data Exchange (ETDEWEB)
Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
Rota Kops, Elena; Herzog, Hans
2013-02-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternately one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal
Error correcting code with chip kill capability and power saving enhancement
Energy Technology Data Exchange (ETDEWEB)
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
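The detect-locate-correct flow described in this record can be illustrated with a toy single-symbol-error-correcting code over GF(2^8). This is a sketch of the general syndrome idea only, not the patented chip-kill scheme; all names, the field polynomial, and the two-check-symbol layout are our assumptions.

```python
# Toy single-symbol-error-correcting code over GF(2^8): two check symbols
# c0 = sum(d_i) and c1 = sum(alpha^i * d_i) are stored alongside the data.
# The decoder recomputes the syndromes; all-zero syndromes mean no error,
# otherwise the error location and magnitude are derived from their ratio.

def gf_mul(a, b):
    # carry-less multiplication modulo the AES polynomial x^8+x^4+x^3+x+1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^(2^8-2) = a^-1 for nonzero a

ALPHA = 0x03  # a generator of the multiplicative group of GF(2^8)

def encode(data):
    c0 = c1 = 0
    for i, d in enumerate(data):
        c0 ^= d
        c1 ^= gf_mul(gf_pow(ALPHA, i), d)
    return data + [c0, c1]

def decode(word):
    data, c0, c1 = word[:-2], word[-2], word[-1]
    s0, s1 = c0, c1
    for i, d in enumerate(data):
        s0 ^= d
        s1 ^= gf_mul(gf_pow(ALPHA, i), d)
    if s0 == 0 and s1 == 0:
        return data  # all syndromes zero: the user data has no errors
    # single-symbol error: location i satisfies alpha^i = s1/s0, magnitude s0
    loc = gf_mul(s1, gf_inv(s0))
    for i in range(len(data)):
        if gf_pow(ALPHA, i) == loc:
            corrected = list(data)
            corrected[i] ^= s0
            return corrected
    raise ValueError("uncorrectable (likely multi-symbol) error")
```

A real chip-kill code uses wider discriminator machinery to separate single from double symbol errors, but the syndrome test above is the same first step.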
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme can check and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with a shifting of the order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Intrinsic errors in transporting a single-spin qubit through a double quantum dot
Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.
2017-07-01
Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-02-03
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitations of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With reference to the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors even when their fraction reaches 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2010-06-01
The dynamics of correct and error responses in a variant of delayed free recall were examined in the present study. In the externalized free recall paradigm, participants were presented with lists of words and were instructed to subsequently recall not only the words that they could remember from the most recently presented list, but also any other words that came to mind during the recall period. Externalized free recall is useful for elucidating both sampling and postretrieval editing processes, thereby yielding more accurate estimates of the total number of error responses, which are typically sampled and subsequently edited during free recall. The results indicated that the participants generally sampled correct items early in the recall period and then transitioned to sampling more erroneous responses. Furthermore, the participants generally terminated their search after sampling too many errors. An examination of editing processes suggested that the participants were quite good at identifying errors, but this varied systematically on the basis of a number of factors. The results from the present study are framed in terms of generate-edit models of free recall.
Image enhancement by spectral-error correction for dual-energy computed tomography.
Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin
2011-01-01
Dual-energy CT (DECT) was reintroduced recently to use the additional spectral information of X-ray attenuation and aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information due to amplification of high pixel noise in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher-density material decreases more rapidly as X-ray energy increases. We define a spectral error as the case when a pixel pair of low- and high-energy images deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method, which consists of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. The main idea of this work is to make spectral errors distributed like random noise over the true attenuation, so that they can be suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed noise in liver lesions and improved the contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.
International Nuclear Information System (INIS)
Pisani, Laura; Lockman, David; Jaffray, David; Yan Di; Martinez, Alvaro; Wong, John
2000-01-01
Purpose: We hypothesize that the difference in image quality between the traditional kilovoltage (kV) prescription radiographs and megavoltage (MV) treatment radiographs is a major factor hindering our ability to accurately measure, thus correct, setup error in radiation therapy. The objective of this work is to study the accuracy of on-line correction of setup errors achievable using either kV- or MV-localization (i.e., open-field) radiographs. Methods and Materials: Using a gantry mounted kV and MV dual-beam imaging system, the accuracy of on-line measurement and correction of setup error using electronic kV- and MV-localization images was examined based on anthropomorphic phantom and patient imaging studies. For the phantom study, the user's ability to accurately detect known translational shifts was analyzed. The clinical study included 14 patients with disease in the head and neck, thoracic, and pelvic regions. For each patient, 4 orthogonal kV radiographs acquired during treatment simulation from the right lateral, anterior-to-posterior, left lateral, and posterior-to-anterior directions were employed as reference prescription images. Two-dimensional (2D) anatomic templates were defined on each of the 4 reference images. On each treatment day, after positioning the patient for treatment, 4 orthogonal electronic localization images were acquired with both kV and 6-MV photon beams. On alternate weeks, setup errors were determined from either the kV- or MV-localization images but not both. Setup error was determined by aligning each 2D template with the anatomic information on the corresponding localization image, ignoring rotational and nonrigid variations. For each set of 4 orthogonal images, the results from template alignments were averaged. Based on the results from the phantom study and a parallel study of the inter- and intraobserver template alignment variability, a threshold for minimum correction was set at 2 mm in any direction. Setup correction was
International Nuclear Information System (INIS)
Margaritondo, G
2003-01-01
Quantum physics is the backbone of modern science: therefore, a correct first step is essential for students' success in many different disciplines. Unfortunately, many didactic approaches are still complicated, potentially confusing and often historically wrong. An alternate, simple, stimulating and historically correct approach is outlined here
Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm
International Nuclear Information System (INIS)
Xiao Li; Jones, Jonathan A.
2005-01-01
We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored
International Nuclear Information System (INIS)
Wu Yan; Shannon, Mark A.
2006-01-01
The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
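The proposed extrapolation can be sketched as follows: regress measured CPD against 1/V_ac; the intercept at 1/V_ac → 0 estimates the true contact potential difference, free of the tracking-error-induced systematic error. This is a minimal illustration with synthetic numbers; all variable names are ours, not from the paper.

```python
# Ordinary least-squares fit of y = slope*x + intercept, pure Python.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def true_cpd(ac_amplitudes, measured_cpd):
    # regress measured CPD against 1/V_ac; the intercept is the corrected CPD
    inv_v = [1.0 / v for v in ac_amplitudes]
    slope, intercept = fit_line(inv_v, measured_cpd)
    return intercept

# Synthetic data: true CPD of 0.30 V plus an error proportional to 1/V_ac.
v_ac = [0.5, 1.0, 2.0, 4.0]          # ac driving amplitudes (V)
cpd = [0.30 + 0.12 / v for v in v_ac]  # measured CPD readings (V)
corrected = true_cpd(v_ac, cpd)
```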
Energy Technology Data Exchange (ETDEWEB)
Moon, Hyeon Seok; Jeong, Deok Yang; Do, Gyeong Min; Lee, Yeong Cheol; Kim, Sun Myung; Kim, Young Bun [Dept. of Radiation Oncology, Korea University Guro Hospital, Seoul (Korea, Republic of)
2016-12-15
The purpose of this study was to evaluate retrospective reconstruction (Retro recon) in SRS planning using BrainLAB when a stereotactic localization error occurs due to metal artifact. Images were acquired from a head phantom (CIRS, PTW, USA) with a CT simulator. To observe stereotactic localization and beam hardening, the CT images were imported into the SRS planning system (BrainLAB, Feldkirchen, Germany). In addition, we compared the acquisition images (1.25 mm slice thickness) and the Retro recon images (2.5 mm and 5 mm slice thickness). To evaluate the quality of these three image sets, tests were performed in an AAPM phantom study. In a patient, the stereotactic localization error was also verified. No localization error occurred in the scanned phantom images. The AAPM phantom scan images all showed the same trend: contrast resolution and spatial resolution were below 6.4 mm and 1.0 mm, respectively, and noise and uniformity values below 11 HU and 5 HU, respectively, were measured. In the patient, no stereotactic localization error occurred in the reconstructed images. For BrainLAB planning, Retro recon corrected the stereotactic error caused by beam hardening. Retro recon may therefore be the preferred modality for radiation treatment planning and for improving image quality.
Andreev, Pavel A.
2018-04-01
Two kinds of quantum electrodynamic radiative corrections to electromagnetic interactions and their influence on the properties of highly dense quantum plasmas are considered. The linear radiative correction to the Coulomb interaction is considered, and its contribution to the spectrum of the Langmuir waves is presented. The second kind of radiative correction is related to the nonlinearity of the Maxwell equations for strong electromagnetic fields; its contribution to the spectrum of transverse waves of magnetized plasmas is briefly discussed. In considering the Langmuir wave spectrum, we include the effect of different distributions of the spin-up and spin-down electrons, which reveals itself in a shift of the Fermi pressure.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for deciding whether more samples should be recruited to improve the classifier and its accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
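The learning-curve idea can be sketched as below. This is a greatly simplified stand-in for the authors' MLbias procedure: a two-parameter power law error(n) ≈ a·n^(−b) fitted by log-linear least squares to a synthetic learning curve, with all names and numbers our own.

```python
import math

def fit_ipl(sizes, errors):
    # Linearize the inverse power law: log e = log a - b * log n,
    # then fit by ordinary least squares.
    xs = [math.log(n) for n in sizes]
    ys = [math.log(e) for e in errors]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = -(sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my + b * mx)
    return a, b

def extrapolate(a, b, n):
    # Predicted error rate at a larger (unobserved) training-set size n.
    return a * n ** (-b)

# Synthetic learning curve: error = 0.8 * n^-0.5 at four training sizes.
sizes = [20, 40, 80, 160]
errs = [0.8 * n ** -0.5 for n in sizes]
a, b = fit_ipl(sizes, errs)
```

Extrapolating the fitted curve to a larger n is what lets one judge whether recruiting more samples would meaningfully reduce the error estimate.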
Goldmann tonometry tear film error and partial correction with a shaped applanation surface.
McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M
2018-01-01
The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate the simulated-cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg). Tear film adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg). In cadaver eyes, the CATS prism tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). The measured GAT tear film adhesion error is greater than previously predicted. A CATS prism significantly reduced tear film adhesion error, by about 41%. Fluorescein solution increases tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.
Correction of clock errors in seismic data using noise cross-correlations
Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline
2017-04-01
Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
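The core shift measurement, comparing a daily CCF against the reference CCF, can be sketched as follows on synthetic data. Function and variable names are our own; a real implementation would work on long, band-passed correlation functions and convert the sample shift to seconds.

```python
# Estimate the clock error of one day as the lag (in samples) that best
# aligns the daily cross-correlation function with the reference CCF.
def clock_error_samples(daily_ccf, reference_ccf, max_shift):
    best_shift, best_score = 0, float("-inf")
    n = len(reference_ccf)
    for shift in range(-max_shift, max_shift + 1):
        # correlation of the two CCFs at this trial lag
        score = sum(reference_ccf[i] * daily_ccf[i + shift]
                    for i in range(n)
                    if 0 <= i + shift < n)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift  # positive: the daily CCF arrives later than the reference

# Synthetic check: a CCF delayed by 3 samples is recovered as a 3-sample error.
ref = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 0, 0]
daily = [0, 0, 0, 0, 0, 1, 3, 7, 3, 1, 0, 0]
error = clock_error_samples(daily, ref, 5)
```

Averaging such estimates over many station pairs and all nine component pairs, as the abstract describes, reduces the scatter of the daily measurements.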
Correction for dynamic bias error in transmission measurements of void fraction
International Nuclear Information System (INIS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-01-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is a variance estimate of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. In the absence of such acquisition, a priori knowledge may be used to substitute for the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
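For a simple exponential-attenuation model, the first-order nature of such a variance-based correction can be illustrated as follows. This is a sketch under assumed values (attenuation-length product `mu_L`, Gaussian void-fraction fluctuations), not the authors' derivation: the time-averaged transmission is first inverted naively, then corrected using an estimate of the fluctuation variance.

```python
import numpy as np

rng = np.random.default_rng(0)

mu_L = 2.0                 # attenuation coefficient x path length (assumed)
alpha_true = 0.5           # time-averaged void fraction
sigma = 0.1                # std of the void-fraction fluctuations

# Fluctuating void fraction within the measurement time
alpha_t = rng.normal(alpha_true, sigma, 100_000)
# Time-averaged transmission, normalized to the empty-channel intensity I0
T_mean = np.mean(np.exp(-mu_L * (1.0 - alpha_t)))

# Naive (biased) estimate: invert the transmission law with the mean intensity.
# Averaging the nonlinear (exponential) response overstates the void fraction.
alpha_naive = 1.0 + np.log(T_mean) / mu_L
# First-order correction using an estimate of the fluctuation variance
alpha_corr = alpha_naive - np.log(1.0 + (mu_L * sigma) ** 2 / 2.0) / mu_L
```

The naive estimate is biased high by roughly `mu_L * sigma**2 / 2`, and subtracting the variance term recovers the time-averaged void fraction to first order.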
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.
Song, Li; Florea, Liliana
2015-01-01
Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
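The flavor of local-threshold k-mer correction can be shown with a deliberately tiny sketch (hypothetical reads, k=5, and a simple last-base substitution search; Rcorrector's actual De Bruijn graph traversal and threshold computation are more sophisticated):

```python
from collections import Counter

K = 5

def kmer_counts(reads):
    """Count all k-mers in the input reads (a stand-in for the De Bruijn graph)."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - K + 1):
            counts[r[i:i + K]] += 1
    return counts

def correct_read(read, counts, local_frac=0.3):
    """Replace a base whose k-mer falls below a *local* threshold, i.e. a
    fraction of the strongest k-mer count seen in this read (mimicking a
    per-read threshold rather than a single global cutoff)."""
    read = list(read)
    local_thr = local_frac * max(
        counts[''.join(read[i:i + K])] for i in range(len(read) - K + 1))
    for i in range(len(read) - K + 1):
        kmer = ''.join(read[i:i + K])
        if counts[kmer] >= local_thr:
            continue
        # Try all substitutions of the last base of the untrusted k-mer
        best, best_count = None, counts[kmer]
        for b in 'ACGT':
            cand = kmer[:-1] + b
            if counts[cand] > best_count:
                best, best_count = b, counts[cand]
        if best is not None:
            read[i + K - 1] = best
    return ''.join(read)

# Ten error-free copies of a transcript fragment plus one read with an error
reads = ['ACGTACGTGGA'] * 10 + ['ACGTACCTGGA']
counts = kmer_counts(reads)
fixed = correct_read('ACGTACCTGGA', counts)
```

Because the threshold is derived from the read's own strongest k-mer, a lowly expressed transcript is not penalized for having globally rare k-mers, which is the key difference from WGS correctors.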
Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-08-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.
The effect of quantum correction on plasma electron heating in ultraviolet laser interaction
Energy Technology Data Exchange (ETDEWEB)
Zare, S.; Sadighi-Bonabi, R., E-mail: Sadighi@sharif.ir; Anvari, A. [Department of Physics, Sharif University of Technology, P.O. Box 11365-9567, Tehran (Iran, Islamic Republic of); Yazdani, E. [Department of Energy Engineering and Physics, Amirkabir University of Technology, P.O. Box 15875-4413, Tehran (Iran, Islamic Republic of); Hora, H. [Department of Theoretical Physics, University of New South Wales, Sydney 2052 (Australia)
2015-04-14
The interaction of a sub-picosecond UV laser at sub-relativistic intensities with deuterium is investigated. At high plasma temperatures, based on the quantum correction in the collision frequency, the electron heating and the ion block generation in plasma are studied. It is found that due to the quantum correction, the electron heating increases considerably and the electron temperature uniformly reaches a maximum value of 4.91 × 10⁷ K. With the quantum correction, the electron temperature at the initial laser coupling stage is improved by more than 66.55% relative to the value achieved in the classical model. As a consequence, with the modified collision frequency, the ion block is accelerated more quickly and reaches a higher maximum velocity than with the classical collision frequency. This study demonstrates the necessity of a quantum mechanical correction to the collision frequency at high plasma temperatures.
Non-perturbative treatment of relativistic quantum corrections in large Z atoms
International Nuclear Information System (INIS)
Dietz, K.; Weymans, G.
1983-09-01
Renormalised g-Hartree-Dirac equations incorporating Dirac sea contributions are derived. Their implications for the non-perturbative, self-consistent calculation of quantum corrections in large Z atoms are discussed. (orig.)
Quantum degeneracy corrections to plasma line emission and to Saha equation
International Nuclear Information System (INIS)
Molinari, V.G.; Mostacci, D.; Rocchi, F.; Sumini, M.
2003-01-01
The effect of quantum degeneracy on electron collisional excitation is investigated, and its effects on line emission are evaluated for applications to spectroscopy of dense, cold plasmas. A correction to the Saha equation for weakly degenerate plasmas is also presented.
Feasibility of self-correcting quantum memory and thermal stability of topological order
International Nuclear Information System (INIS)
Yoshida, Beni
2011-01-01
Recently, it has become apparent that the thermal stability of topologically ordered systems at finite temperature, as discussed in condensed matter physics, can be studied by addressing the feasibility of self-correcting quantum memory, as discussed in quantum information science. Here, with this correspondence in mind, we propose a model of quantum codes that may cover a large class of physically realizable quantum memory. The model is supported by a certain class of gapped spin Hamiltonians, called stabilizer Hamiltonians, with translation symmetries and a small number of ground states that does not grow with the system size. We show that the model does not work as self-correcting quantum memory due to a certain topological constraint on geometric shapes of its logical operators. This quantum coding theoretical result implies that systems covered or approximated by the model cannot have thermally stable topological order, meaning that systems cannot be stable against both thermal fluctuations and local perturbations simultaneously in two and three spatial dimensions. Highlights: We define a class of physically realizable quantum codes. We determine their coding and physical properties completely. We establish the connection between topological order and self-correcting memory. We find they do not work as self-correcting quantum memory. We find they do not have thermally stable topological order.
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal-to-noise ratio (SNR) and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation, and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that a longer code length yields better range accuracy. Combining the SNR model and the CRLB model, it follows that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the CRLB on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It is shown that the depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the time point spread function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics.
A modified error correction protocol for CCITT signalling system no. 7 on satellite links
Kreuer, Dieter; Quernheim, Ulrich
1991-10-01
The Comité Consultatif International Télégraphique et Téléphonique (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. Not having been designed for satellite links, however, the so-called Preventive Cyclic Retransmission (PCR) method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed the Fix Delay Retransmission (FDR) method, is suggested, which performs better at high loads and thus provides a more efficient use of the limited carrier capacity. Both the PCR and the FDR methods are investigated by means of simulation, and results concerning throughput, queueing delay, and system delay are presented. The FDR method exhibits higher capacity and shorter delay than the PCR method.
International Nuclear Information System (INIS)
Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun
2014-01-01
Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
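A minimal concrete instance of syndrome-source-coding uses the (7,4) Hamming code: each 7-bit source block is treated as an error pattern, and only its 3-bit syndrome is stored. Decompression maps the syndrome back to its coset leader, which is exact whenever the block contains at most a single 1 (a sketch of the basic idea, not Ancheta's universal scheme):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is j in binary
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block):
    """Treat the 7-bit source block as an error pattern; its 3-bit syndrome
    is the compressed representation (7 bits -> 3 bits)."""
    return H.dot(block) % 2

def decompress(syndrome):
    """Recover the minimum-weight (coset leader) pattern for this syndrome.
    Exact whenever the source block contains at most one 1."""
    block = np.zeros(7, dtype=int)
    pos = syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2]  # 1-based column
    if pos:
        block[pos - 1] = 1
    return block

# A sparse binary source block (at most one 1 per block is recovered exactly)
source = np.array([0, 0, 0, 0, 1, 0, 0])
code = compress(source)        # 3 bits instead of 7
restored = decompress(code)
```

The rate of 3/7 bits per source bit is attractive whenever the source entropy is low (sparse 1s), which is exactly the regime the abstract's entropy argument addresses.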
Identifying and Correcting Timing Errors at Seismic Stations in and around Iran
International Nuclear Information System (INIS)
Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee
2017-01-01
A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
Error and corrections with scintigraphic measurement of gastric emptying of solid foods
Energy Technology Data Exchange (ETDEWEB)
Meyer, J.H.; Van Deventer, G.; Graham, L.S.; Thomson, J.; Thomasson, D.
1983-03-01
Previous methods for correction of depth used geometric means of simultaneously obtained anterior and posterior counts. The present study compares this method with a new one that uses computations of depth based on peak-to-scatter (P:S) ratios. Six normal volunteers were fed a meal of beef stew, water, and chicken liver that had been labeled in vivo with both In-113m and Tc-99m. Gastric emptying was followed at short intervals with anterior counts of peak and scattered radiation for each nuclide, as well as posteriorly collected peak counts from the gastric ROI. Depth of the nuclides was estimated by the P:S method as well as the older method. Both gave similar results. Errors from septal penetration or scatter proved to be a significantly larger problem than errors from changes in depth.
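The geometric-mean depth correction compared above rests on a simple identity: with exponential attenuation, the product of anterior and posterior counts is independent of source depth. A sketch with assumed values for the attenuation coefficient and body thickness (illustrative numbers, not the study's data):

```python
import numpy as np

mu = 0.15        # linear attenuation coefficient of tissue (1/cm), assumed
D = 20.0         # anterior-posterior body thickness (cm), assumed
N_true = 1e5     # true (unattenuated) counts from the gastric ROI

def geometric_mean_counts(depth):
    """Anterior and posterior counts each depend on source depth, but their
    geometric mean depends only on the total thickness D, not on depth."""
    anterior = N_true * np.exp(-mu * depth)
    posterior = N_true * np.exp(-mu * (D - depth))
    return np.sqrt(anterior * posterior)

# The individual views change with depth; the geometric mean does not:
shallow = geometric_mean_counts(5.0)
deep = geometric_mean_counts(12.0)
# The depth-independent value is rescaled by exp(+mu*D/2) to recover N_true
recovered = shallow * np.exp(mu * D / 2)
```

The P:S approach estimates depth directly from peak-to-scatter ratios instead, but as the abstract notes, both corrections gave similar results in this meal study.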
Correction of phase-shifting error in wavelength scanning digital holographic microscopy
Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-05-01
Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
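The Jacobian augmentation can be illustrated on a toy linearized problem (random matrices standing in for the finite-element Jacobians; not the authors' head-model code): solving with the augmented matrix recovers both the conductivity and electrode-movement updates, whereas the conductivity-only solve absorbs the electrode errors into image artefacts.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linearized EIT step: measurements depend on conductivity parameters
# and (more weakly) on electrode positions.
n_meas, n_cond, n_elec = 40, 6, 4
J_sigma = rng.normal(size=(n_meas, n_cond))        # conductivity Jacobian
J_elec = 0.3 * rng.normal(size=(n_meas, n_elec))   # electrode-movement Jacobian

d_sigma_true = rng.normal(size=n_cond)
d_elec_true = rng.normal(size=n_elec)              # unmodelled electrode shift
data = J_sigma @ d_sigma_true + J_elec @ d_elec_true

# Conductivity-only inversion: electrode effects leak into the estimate
d_sigma_naive, *_ = np.linalg.lstsq(J_sigma, data, rcond=None)

# Augmented inversion: recover conductivity and electrode updates jointly
J_aug = np.hstack([J_sigma, J_elec])
update, *_ = np.linalg.lstsq(J_aug, data, rcond=None)
d_sigma_aug, d_elec_aug = update[:n_cond], update[n_cond:]
```

In the real nonlinear problem this step sits inside an iterative reconstruction with regularization; the toy version only shows why the extra columns remove the electrode-position artefacts.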
Otero-de-la-Roza, A; Johnson, Erin R; DiLabio, Gino A
2014-12-09
Halogen bonds are formed when a Lewis base interacts with a halogen atom in a different molecule, which acts as an electron acceptor. Due to its charge transfer component, halogen bonding is difficult to model using many common density-functional approximations because they spuriously overstabilize halogen-bonded dimers. It has been suggested that dispersion-corrected density functionals are inadequate to describe halogen bonding. In this work, we show that the exchange-hole dipole moment (XDM) dispersion correction coupled with functionals that minimize delocalization error (for instance, BH&HLYP, but also other half-and-half functionals) accurately model halogen-bonded interactions, with average errors similar to other noncovalent dimers with less charge-transfer effects. The performance of XDM is evaluated for three previously proposed benchmarks (XB18 and XB51 by Kozuch and Martin, and the set proposed by Bauzá et al.) spanning a range of binding energies up to ∼50 kcal/mol. The good performance of BH&HLYP-XDM is comparable to M06-2X, and extends to the "extreme" cases in the Bauzá set. This set contains anionic electron donors where charge transfer occurs even at infinite separation, as well as other charge transfer dimers belonging to the pnictogen and chalcogen bonding classes. We also show that functional delocalization error results in an overly delocalized electron density and exact-exchange hole. We propose intermolecular Bader delocalization indices as an indicator of both the donor-acceptor character of an intermolecular interaction and the delocalization error coming from the underlying functional.
Supersymmetric quantum corrections and Poisson-Lie T-duality
International Nuclear Information System (INIS)
Assaoui, F.; Lhallabi, T.; Abdus Salam International Centre for Theoretical Physics, Trieste
2000-07-01
The quantum actions of the (4,4) supersymmetric non-linear sigma model and its dual in the Abelian case are constructed by using the background superfield method. The propagators of the quantum superfield and its dual and the gauge fixing actions of the original and dual (4,4) supersymmetric sigma models are determined. On the other hand, the BRST transformations are used to obtain the quantum dual action of the (4,4) supersymmetric nonlinear sigma model in the sense of Poisson-Lie T-duality. (author)
A fingerprint key binding algorithm based on vector quantization and error correction
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, the seamless combination of cryptosystems with biometric technologies, e.g., fingerprint recognition, has been studied by many researchers. In this paper, we propose an algorithm for binding a fingerprint template to a cryptographic key, so that the key is protected and can be accessed only through fingerprint verification. In order to accommodate the intrinsic fuzziness of variant fingerprints, vector quantization and error correction techniques are introduced to transform the fingerprint template, which is then bound to the key after fingerprint registration and extraction of the global ridge pattern. The key itself is secure because only its hash value is stored, and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our approach.
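A closely related construction is the fuzzy commitment scheme, sketched below with a toy repetition code standing in for the paper's vector-quantization front end (all parameters are illustrative): the stored data reveal neither the key nor the template, and a hash of the key gates its release.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
R = 5  # repetition factor of the toy error-correcting code

def bind(key_bits, template_bits):
    """Bind a key to a (quantized) fingerprint template: store only the
    XOR offset and a hash of the key, never the key or template itself."""
    codeword = np.repeat(key_bits, R)          # repetition-code encoding
    offset = codeword ^ template_bits
    digest = hashlib.sha256(key_bits.tobytes()).hexdigest()
    return offset, digest

def release(offset, digest, query_bits):
    """Attempt key release from a fresh, slightly noisy template."""
    noisy_codeword = offset ^ query_bits
    # Majority vote decodes the repetition code despite template noise
    key_bits = (noisy_codeword.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)
    ok = hashlib.sha256(key_bits.tobytes()).hexdigest() == digest
    return key_bits, ok

key = rng.integers(0, 2, 16, dtype=np.uint8)
template = rng.integers(0, 2, 16 * R, dtype=np.uint8)
offset, digest = bind(key, template)

# Verification template differs in a few positions (intra-class variation)
query = template.copy()
query[[3, 40, 77]] ^= 1
recovered, ok = release(offset, digest, query)
```

The error-correcting code absorbs the bit-level differences between enrollment and verification templates, which is exactly the role error correction plays in the proposed binding algorithm.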
Likelihood-based inference for cointegration with nonlinear error-correction
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders Christian
2010-01-01
We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study
On the roles of direct feedback and error field correction in stabilizing resistive-wall modes
International Nuclear Information System (INIS)
In, Y.; Bogatu, I.N.; Kim, J.S.; Garofalo, A.M.; Jackson, G.L.; La Haye, R.J.; Schaffer, M.J.; Strait, E.J.; Lanctot, M.J.; Reimerdes, H.; Marrelli, L.; Martin, P.; Okabayashi, M.
2010-01-01
Active feedback control in the DIII-D tokamak has fully stabilized the current-driven ideal kink resistive-wall mode (RWM). While complete stabilization is known to require both low frequency error field correction (EFC) and high frequency feedback, unambiguous identification has been made about the distinctive role of each in a fully feedback-stabilized discharge. Specifically, the role of direct RWM feedback, which nullifies the RWM perturbation in a time scale faster than the mode growth time, cannot be replaced by low frequency EFC, which minimizes the lack of axisymmetry of external magnetic fields. (letter)
Confidentiality of 2D Code using Infrared with Cell-level Error Correction
Directory of Open Access Journals (Sweden)
Nobuyuki Teraura
2013-03-01
Optical information media printed on paper use printing materials that absorb visible light. A conventional 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, a normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2018-04-01
We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.
Decoy-state quantum key distribution with both source errors and statistical fluctuations
International Nuclear Information System (INIS)
Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei
2009-01-01
We show how to calculate the fraction of single-photon counts of the 3-intensity decoy-state quantum cryptography faithfully with both statistical fluctuations and source errors. Our results rely only on the bound values of a few parameters of the states of pulses.
Error Correction of Meteorological Data Obtained with Mini-AWSs Based on Machine Learning
Directory of Open Access Journals (Sweden)
Ji-Hun Ha
2018-01-01
Severe weather events occur more frequently due to climate change; therefore, accurate weather forecasts are necessary, in addition to the development of numerical weather prediction (NWP) of the past several decades. A method to improve the accuracy of weather forecasts based on NWP is the collection of more meteorological data by reducing the observation interval. However, in many areas, it is economically and locally difficult to collect observation data by installing automatic weather stations (AWSs). We developed a Mini-AWS, much smaller than AWSs, to complement the shortcomings of AWSs. The installation and maintenance costs of Mini-AWSs are lower than those of AWSs; Mini-AWSs have fewer spatial constraints with respect to the installation than AWSs. However, it is necessary to correct the data collected with Mini-AWSs because they might be affected by the external environment depending on the installation area. In this paper, we propose a novel error correction of atmospheric pressure data observed with a Mini-AWS based on machine learning. Using the proposed method, we obtained corrected atmospheric pressure data, reaching the standard of the World Meteorological Organization (WMO; ±0.1 hPa, and confirmed the potential of corrected atmospheric pressure data as an auxiliary resource for AWSs.
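A minimal version of such a learned correction is a least-squares regression from raw Mini-AWS readings (plus a covariate such as temperature) to reference AWS pressure. The synthetic sketch below assumes a linear sensor error model, far simpler than the machine learning models the paper evaluates, but it shows how a fitted correction can bring the residual below the WMO ±0.1 hPa target:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: reference AWS pressure and raw Mini-AWS readings
p_ref = rng.uniform(990.0, 1030.0, 500)       # hPa, reference station
temp = rng.uniform(-10.0, 35.0, 500)          # deg C at the Mini-AWS
# Assumed raw-sensor error model: offset, gain error, temperature drift, noise
p_raw = 1.002 * p_ref - 1.8 + 0.03 * temp + rng.normal(0.0, 0.05, 500)

# Fit a linear correction model p_ref ~ a*p_raw + b*temp + c (least squares)
X = np.column_stack([p_raw, temp, np.ones_like(p_raw)])
coef, *_ = np.linalg.lstsq(X, p_ref, rcond=None)

p_corrected = X @ coef
rmse_raw = np.sqrt(np.mean((p_raw - p_ref) ** 2))
rmse_corr = np.sqrt(np.mean((p_corrected - p_ref) ** 2))
```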
Goldmann tonometry tear film error and partial correction with a shaped applanation surface
Directory of Open Access Journals (Sweden)
McCafferty SJ
2018-01-01
Sean J McCafferty,1–4 Eniko T Enikov,5 Jim Schwiegerling,2,3 Sean M Ashley1,3 1Intuor Technologies, 2Department of Ophthalmology, University of Arizona College of Medicine, 3University of Arizona College of Optical Science, 4Arizona Eye Consultants, 5Department of Mechanical and Aerospace, University of Arizona College of Engineering, Tucson, AZ, USA Purpose: The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. Methods: The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate the simulated-cornea tear film separation measurement differences between the GAT and CATS prisms. Results: The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg, p<0.001). Tear film adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001). Cadaver eye validation indicated the CATS prism's tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Conclusion: Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to artificial tears.
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
Liebovitch, Larry
1998-03-01
evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
International Nuclear Information System (INIS)
Doherty, W.
2015-01-01
A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
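The first of the three recommended steps, mass bias correction against a Tl internal standard, is commonly done with the exponential law; below is a sketch with illustrative numbers (the certified 205Tl/203Tl value and the isotope masses are standard reference data, while the "measured" ratios are hypothetical). The drift and proportional-error corrections discussed in the abstract would follow this step.

```python
import numpy as np

# Certified 205Tl/203Tl (NIST SRM 997) used as the internal standard,
# and the relevant atomic masses
R_true_Tl = 2.38714
m205, m203 = 204.97443, 202.97234
m208, m206 = 207.97665, 205.97446

# Step 1: mass bias correction with the exponential law.
# The fractionation exponent f is obtained from the Tl internal standard:
R_meas_Tl = 2.4250          # hypothetical measured 205Tl/203Tl
f = np.log(R_true_Tl / R_meas_Tl) / np.log(m205 / m203)

# Apply the same exponent to the analyte ratio
R_meas_Pb = 2.1000          # hypothetical measured 208Pb/206Pb
R_corr_Pb = R_meas_Pb * (m208 / m206) ** f
```

The abstract's key caution applies here: because of the horizontal-offset and curvature effects, a single exponent f derived from Tl does not remove all non-spectroscopic error, which is why drift and proportional-error corrections are recommended afterwards.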
Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.
2009-04-01
In the current study, half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is analyzed, sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half year most of the rainfall has a stratiform character. Though radar and rain gauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations the differences between the two measuring devices become even larger due to the occurrence of a bright band (the altitude where ice particles start to melt, intensifying the radar reflectivity measurement). Under these circumstances the radar overestimates the amount of precipitation, and because bright bands in the Ardennes occur within 1000 m of the surface, their detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, while trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and rain gauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km² Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar data.
Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor
Directory of Open Access Journals (Sweden)
Fang Tang
2014-01-01
Full Text Available Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 k μm²·cycles/sample.
Two-step single slope/SAR ADC with error correction for CMOS image sensor.
Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin
2014-01-01
Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single slope ADC generates 3 data bits and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 k μm²·cycles/sample.
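The redundant-bit correction described above can be illustrated with a small numerical model. This is not the paper's circuit; the step sizes, offset and error range below are assumptions chosen only to show how one bit of inter-stage redundancy lets the fine stage absorb coarse-stage decision errors:

```python
import random

COARSE_STEP = 128             # 2^7: coarse LSB weight (3 data bits + 1 redundant bit)
FINE_BITS = 8                 # fine SAR stage resolution
OFFSET = 2 ** FINE_BITS // 2  # centers the fine range on the coarse decision

def two_step_convert(x, coarse_error=0):
    """Toy 11-bit two-step conversion with one redundant bit.

    `coarse_error` models a wrong coarse decision (in input codes).
    The fine stage digitizes the residue over 2^8 codes, twice the
    coarse step, so coarse errors below half a step are absorbed.
    """
    c = (x + coarse_error + COARSE_STEP // 2) // COARSE_STEP  # noisy coarse decision
    residue = x - c * COARSE_STEP + OFFSET                    # lands in [0, 256)
    assert 0 <= residue < 2 ** FINE_BITS, "coarse error too large to correct"
    f = residue                                               # ideal 8-bit fine code
    return c * COARSE_STEP + f - OFFSET                       # digital error correction

random.seed(0)
for _ in range(1000):
    x = random.randrange(256, 1792)
    err = random.randrange(-60, 61)   # coarse errors below half a coarse step
    assert two_step_convert(x, err) == x
print("all coarse decision errors corrected")
```

Because the 8-bit fine range spans two coarse steps, a coarse decision that is off by up to half a step still leaves the residue inside the fine range, and the overlap cancels exactly in the recombined code.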
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Black Hole Entropy with and without Log Correction in Loop Quantum Gravity
International Nuclear Information System (INIS)
Mitra, P.
2014-01-01
Earlier calculations of black hole entropy in loop quantum gravity have given a term proportional to the area with a correction involving the logarithm of the area when the area eigenvalue is close to the classical area. However, the calculations yield an entropy proportional to the area eigenvalue with no such correction when the area eigenvalue is large compared to the classical area.
Phase correction and error estimation in InSAR time series analysis
Zhang, Y.; Fattahi, H.; Amelung, F.
2017-12-01
During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategy of SAR satellites, such as large spatial and temporal baselines and irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of the different components of the InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data, and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-SkyMed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
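The weighted least-squares inversion of a fully connected interferogram network can be sketched with a toy example. This is not PySAR's implementation; the acquisition dates, interferogram pairs and noise level below are invented for illustration:

```python
import numpy as np

# Toy network: 4 SAR acquisitions and 5 interferograms (date pairs).
# Each interferogram measures phi[j] - phi[i]; the first date is the reference.
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
true_phase = np.array([0.0, 1.0, 2.5, 4.0])        # hypothetical displacement phases

rng = np.random.default_rng(0)
ifgs = np.array([true_phase[j] - true_phase[i] for i, j in pairs])
ifgs += rng.normal(0, 0.01, len(pairs))            # interferogram noise

# Design matrix over the unknowns phi[1..3] (phi[0] = 0 by convention).
A = np.zeros((len(pairs), 3))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] = -1.0
    if j > 0:
        A[k, j - 1] = 1.0

W = np.diag(1.0 / np.full(len(pairs), 0.01) ** 2)   # inverse-variance weights
phi = np.linalg.solve(A.T @ W @ A, A.T @ W @ ifgs)  # weighted least squares
print(np.round(phi, 2))                             # close to [1.0, 2.5, 4.0]
```

With more interferograms than dates the system is over-determined, which is what makes the network inversion robust to noise in individual interferograms.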
Uysal, Ismail Enes
2015-10-26
Analysis of electromagnetic interactions on nanodevices can oftentimes be carried out accurately using “traditional” electromagnetic solvers. However, if a gap of sub-nanometer scale exists between any two surfaces of the device, quantum-mechanical effects including tunneling should be taken into account for an accurate characterization of the device's response. Since first-principle quantum simulators cannot be used efficiently to fully characterize a typical-size nanodevice, a quantum corrected electromagnetic model has been proposed as an efficient and accurate alternative (R. Esteban et al., Nat. Commun., 3(825), 2012). The quantum correction is achieved through an effective layered medium introduced into the gap between the surfaces. The dielectric constant of each layer is obtained using a first-principle quantum characterization of the gap with a different dimension.
Quantum computing with trapped ions
International Nuclear Information System (INIS)
Haeffner, H.; Roos, C.F.; Blatt, R.
2008-01-01
Quantum computers hold the promise of solving certain computational tasks much more efficiently than classical computers. We review recent experimental advances towards a quantum computer with trapped ions. In particular, various implementations of qubits, quantum gates and some key experiments are discussed. Furthermore, we review some implementations of quantum algorithms, such as deterministic teleportation of quantum information, and an error correction scheme.
Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc
International Nuclear Information System (INIS)
Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.
1983-11-01
Given the intensity of linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10³⁰ cm⁻² s⁻¹ requires focusing the interacting bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of beam spot size, both the second-order geometric and chromatic aberrations may be completely destructive. The concept of the second-order achromat proved to be extremely important in this respect, and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: a matching section (MS) designed for matching the phase space from the linac to the arcs, a reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second-order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of misalignment and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, focusing the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-07-24
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed with a large spatial density. A novel method, the meteorology wireless sensor network relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the collected AT and SR data for May 2014 and found a numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
Directory of Open Access Journals (Sweden)
Xingming Sun
2015-07-01
Full Text Available Air temperature (AT) is an extremely vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed with a large spatial density. A novel method, the meteorology wireless sensor network relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the collected AT and SR data for May 2014 and found a numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
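The correction idea (fit the ATE-versus-SR relation on a calibration month, then subtract the SR-predicted error from later readings) can be sketched as follows. The linear form and every number below are assumptions for illustration; the paper derives its own numerical correspondence from the May 2014 data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration month: sensed-AT error grows with solar
# radiation (solar heating of the cheap sensor); values are made up.
sr_cal = rng.uniform(0, 1000, 200)                        # W/m^2
ate_cal = 0.004 * sr_cal + 0.2 + rng.normal(0, 0.1, 200)  # °C error vs. reference

a, b = np.polyfit(sr_cal, ate_cal, 1)    # fit ATE = a*SR + b on calibration data

def correct_at(sensed_at, sr):
    """Subtract the SR-predicted error from a raw AT reading."""
    return sensed_at - (a * sr + b)

true_at = 25.0
sr_now = 800.0
sensed = true_at + 0.004 * sr_now + 0.2   # reading inflated by solar heating
print(round(correct_at(sensed, sr_now), 2))  # recovers roughly the true 25.0 °C
```

The same fitted relation is then applied to readings from other months, which is exactly the real-time correction step the abstract describes.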
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Directory of Open Access Journals (Sweden)
Tianzhou Chen
2013-09-01
Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
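The maximum-likelihood fusion step can be sketched in isolation: for independent Gaussian errors it reduces to an inverse-variance weighted mean of the readings. The readings and variances below are hypothetical, and the paper's full NEC pipeline (neighbor discovery, correlation coefficients, iteration) is not reproduced:

```python
import numpy as np

# Hypothetical readings of the same distance from a sensor and three
# neighbors, each with a known error variance (e.g. from calibration).
readings = np.array([2.04, 1.98, 2.10, 2.01])   # metres
variances = np.array([0.01, 0.02, 0.05, 0.02])

# Under independent Gaussian errors, the maximum-likelihood estimate is
# the inverse-variance weighted mean of all readings.
w = 1.0 / variances
fused = np.sum(w * readings) / np.sum(w)
fused_var = 1.0 / np.sum(w)   # variance of the fused estimate

print(round(fused, 3), round(fused_var, 4))   # fused ≈ 2.025 m, variance ≈ 0.0045
```

Note that the fused variance is smaller than any single sensor's variance, which is why borrowing information from neighbors improves each node's measurement.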
A correction for emittance-measurement errors caused by finite slit and collector widths
International Nuclear Information System (INIS)
Connolly, R.C.
1992-01-01
One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs
International Nuclear Information System (INIS)
Glasure, Yong U.; Lee, Aie-Rie
1998-01-01
This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and a unidirectional causal relationship from energy consumption to GDP for Singapore.
GQ corrections in the circuit theory of quantum transport
Campagnano, G.; Nazarov, Y.V.
2006-01-01
We develop a finite-element technique that allows one to evaluate correction of the order of GQ to various transport characteristics of arbitrary nanostructures. Common examples of such corrections are the weak-localization effect on conductance and universal conductance fluctuations. Our approach,
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
International Nuclear Information System (INIS)
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Correcting a fundamental error in greenhouse gas accounting related to bioenergy.
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy
2012-06-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Quantum corrections to Drell-Yan production of Z bosons
Energy Technology Data Exchange (ETDEWEB)
Shcherbakova, Elena S.
2011-08-15
In this thesis, we present higher-order corrections to inclusive Z-boson hadroproduction via the Drell-Yan mechanism, h₁ + h₂ → Z + X, at large transverse momentum (q_T). Specifically, we include the QED, QCD and electroweak corrections of orders O(α_s α), O(α_s² α) and O(α_s α²). We work in the framework of the Standard Model and adopt the MS scheme of renormalization and factorization. The cross section of Z-boson production has been precisely measured at various hadron-hadron colliders, including the Tevatron and the LHC. Our calculations will help to calibrate and monitor the luminosity and to estimate backgrounds of the hadron-hadron interactions more reliably. Besides the total cross section, we study the distributions in the transverse momentum and the rapidity (y) of the Z boson, appropriate for Tevatron and LHC experimental conditions. Investigating the relative sizes of the various types of corrections by means of the factor K = σ_tot / σ_Born, we find that the QCD corrections of order α_s² α are largest in general and that the electroweak corrections of order α_s α² play an important role at large values of q_T, while the QED corrections at the same order are small, of order 2% or below. We also compare our results with the existing literature. We correct a few misprints in the original calculation of the QCD corrections, and find the published electroweak correction to be incomplete. Our results for the QED corrections are new. (orig.)
Error-resistant distributed quantum computation in a trapped ion chain
International Nuclear Information System (INIS)
Braungardt, Sibylle; Sen, Aditi; Sen, Ujjwal; Lewenstein, Maciej
2007-01-01
We consider experimentally feasible chains of trapped ions with pseudospin 1/2 and find models that can potentially be used to implement error-resistant quantum computation. Similar in spirit to classical neural networks, the error resistance of the system is achieved by encoding the qubits distributed over the whole system. We therefore call our system a quantum neural network and present a quantum neural network model of quantum computation. Qubits are encoded in a few quasi-degenerate low-energy levels of the whole system, separated from the excited states by a large gap and from one another by large energy barriers. We investigate protocols for implementing a universal set of quantum logic gates in the system by adiabatic passage of a few low-lying energy levels of the whole system. Naturally appearing and potentially dangerous distributed noise in the system leaves the fidelity of the computation virtually unchanged, if it is not too strong. The computation is also naturally resilient to local perturbations of the spins.
Directory of Open Access Journals (Sweden)
Amir H Pakpour
2013-01-01
Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover, this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.
5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.
2010-01-01
... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...
Quantum corrections to thermodynamics of quasitopological black holes
Directory of Open Access Journals (Sweden)
Sudhaker Upadhyay
2017-12-01
Full Text Available Based on the modification to the area law due to thermal fluctuations at small horizon radius, we investigate the thermodynamics of charged quasitopological and charged rotating quasitopological black holes. In particular, we derive the leading-order corrections to the Gibbs free energy, charge and total mass densities. In order to analyze the behavior of thermal fluctuations on the thermodynamics of small black holes, we draw a comparative analysis between the first-order corrected and original thermodynamical quantities. We also examine the stability and bound points of such black holes under the effect of leading-order corrections.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Can I cancel my FERS election if my qualifying retirement coverage error was previously corrected and I now have an election opportunity under... ERRONEOUS RETIREMENT COVERAGE CORRECTIONS ACT Making an Election Fers Elections § 839.622 Can I cancel my...
Publisher Correction: Ultrafast quantum beats of anisotropic excitons in atomically thin ReS2.
Sim, Sangwan; Lee, Doeon; Trifonov, Artur V; Kim, Taeyoung; Cha, Soonyoung; Sung, Ji Ho; Cho, Sungjun; Shim, Wooyoung; Jo, Moon-Ho; Choi, Hyunyong
2018-03-22
In the originally published HTML and PDF versions of this article, Figs. 3g and 4d contained typesetting errors affecting the way the data points were displayed. This has now been corrected in the HTML and PDF versions.
MODEL PERMINTAAN UANG DI INDONESIA DENGAN PENDEKATAN VECTOR ERROR CORRECTION MODEL
Directory of Open Access Journals (Sweden)
imam mukhlis
2016-09-01
Full Text Available This research aims to estimate the demand for money model in Indonesia for 2005.2-2015.12. The variables used in this research are: demand for money, interest rate, inflation, and the exchange rate (IDR/US$). The ADF stationarity test is used to test for unit roots in the data. A cointegration test is applied to estimate the long-run relationship between variables. This research employs the Vector Error Correction Model (VECM) to estimate the money demand model in Indonesia. The results show that all the data are stationary at the difference level (1%). There are long-run relationships between the interest rate, inflation and the exchange rate and the demand for money in Indonesia. The VECM model could not explain the interaction between the explanatory variables and the dependent variable. In the short run, there is no relationship between the interest rate, inflation and the exchange rate and the demand for money in Indonesia for 2005.2-2015.12
A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA
International Nuclear Information System (INIS)
Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.
2013-01-01
This paper describes a conceptual design study for the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) to maximize the expected performance with reasonable cost in JT-60SA. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each rated for 30 kA-turns. As a result, a star-point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction, for decoupling from the poloidal field coils. In addition, a six-phase inverter capable of controlling each phase current was chosen as the PS topology to ensure higher operational flexibility at reasonable cost.
Correcting the error in neutron moisture probe measurements caused by a water density gradient
International Nuclear Information System (INIS)
Wilson, D.J.
1988-01-01
If a neutron probe lies in or near a water density gradient, the probe may register a water density different to that at the measuring point. The effect of a thin stratum of soil containing an excess or depletion of water at various distances from a probe in an otherwise homogeneous system has been calculated, producing an 'importance' curve. The effect of these strata can be integrated over the soil region in close proximity to the probe resulting in the net effect of the presence of a water density gradient. In practice, the probe is scanned through the point of interest and the count rate at that point is corrected for the influence of the water density on each side of it. An example shows that the technique can reduce an error of 10 per cent to about 2 per cent
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
DEFF Research Database (Denmark)
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions.
Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression
Slinko, Arkadii
2015-01-01
This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data which must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with knowledge of the basics of algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves that is sufficient to understand such real-life applications as cryptography, secret sharing, error correction, fingerprinting and compression of information. It is the first to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained, with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.
Directory of Open Access Journals (Sweden)
Akhsyim Afandi
2017-03-01
Full Text Available Whether monetary policy works through the bank lending channel requires that a monetary-induced change in bank loans originates from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors identified as bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.
Estimating oil product demand in Indonesia using a cointegrating error correction model
International Nuclear Information System (INIS)
Dahl, C.
2001-01-01
Indonesia's long oil production history and large population mean that Indonesian oil reserves, per capita, are the lowest in OPEC and that, eventually, Indonesia will become a net oil importer. Policy-makers want to forestall this day, since oil revenue comprised around a quarter of both the government budget and foreign exchange revenues for the fiscal years 1997/98. To help policy-makers determine how economic growth and oil-pricing policy affect the consumption of oil products, we estimate the demand for six oil products and total petroleum consumption, using an error correction-cointegration approach, and compare it with estimates on a lagged endogenous model using data for 1970-95. (author)
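The error correction-cointegration approach used above can be sketched on simulated data with a two-step, Engle-Granger-style estimation. This is an illustrative sketch, not the paper's model; the series, coefficients and lag structure are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Simulated log income and log oil-product demand sharing a long-run
# relation demand = 0.5*income + u, with u mean-reverting (cointegration).
income = np.cumsum(rng.normal(0.01, 0.05, n))      # I(1) driver series
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal(0, 0.02)    # stationary deviation
demand = 0.5 * income + u

# Step 1 (Engle-Granger): estimate the long-run (cointegrating) relation.
X = np.column_stack([np.ones(n), income])
beta = np.linalg.lstsq(X, demand, rcond=None)[0]
ect = demand - X @ beta                            # error-correction term

# Step 2: regress short-run changes on the lagged error-correction term.
d_demand = np.diff(demand)
d_income = np.diff(income)
Z = np.column_stack([np.ones(n - 1), d_income, ect[:-1]])
gamma = np.linalg.lstsq(Z, d_demand, rcond=None)[0]

print(round(beta[1], 2))    # long-run income elasticity, near the true 0.5
print(gamma[2] < 0)         # True: deviations from the long run are corrected
```

The negative coefficient on the lagged error-correction term is the defining feature of the approach: short-run changes pull the series back toward the estimated long-run demand relation.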
Directory of Open Access Journals (Sweden)
Sasanti Widyawati
2016-05-01
Full Text Available Bank loans play an important role in financing the national economy and are a driving force of economic growth. Therefore, credit growth must be balanced. Conditions show, however, that commercial bank credit growth has slowed. Using the Error Correction Model (ECM) of Domowitz-El Badawi, this study analyzes the impact of short-term and long-term independent variables that determine credit growth in the Indonesian financial sector. The results show that, in the short term, only non-performing loans have a significant negative effect on working capital loan growth. For the long term, working capital loan interest rates have a significant negative effect, third-party fund growth has a significant positive effect, and inflation has a significant negative effect.
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates its superior performance.
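The error-correction stage of such hybrid forecasters can be sketched without FEEMD, VMD or an ELM: forecast, model the residuals, and add the predicted residual back. A toy stand-in with a persistence base model and an AR(1) error model (all series simulated, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy series standing in for daily PM10: a seasonal cycle plus AR(1) noise.
n = 400
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.7 * noise[i - 1] + rng.normal()
series = 50 + 10 * np.sin(2 * np.pi * t / 30) + noise

# Stage 1: a deliberately crude base forecaster, persistence.
# Stage 2: fit an AR(1) to the base model's one-step errors on the training
# span, then predict the next error and add it back to the forecast.
e = np.diff(series)                          # persistence errors y[t] - y[t-1]
e_tr = e[:299]
phi = e_tr[1:] @ e_tr[:-1] / (e_tr[:-1] @ e_tr[:-1])

idx = np.arange(300, 400)                    # held-out evaluation span
base = series[idx - 1]                       # persistence forecast
corrected = base + phi * (series[idx - 1] - series[idx - 2])

rmse = lambda p: float(np.sqrt(np.mean((series[idx] - p) ** 2)))
print(rmse(base), rmse(corrected))           # the correction lowers the error
```

Because the base model's errors are autocorrelated (seasonality plus AR noise), the error-correction stage removes a predictable component, which is the same mechanism the paper exploits with far stronger component models.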
Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.
1996-01-01
Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual-feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual-feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short-lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative aftereffect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.
The importance of matched poloidal spectra to error field correction in DIII-D
Energy Technology Data Exchange (ETDEWEB)
Paz-Soldan, C., E-mail: paz-soldan@fusion.gat.com; Lanctot, M. J.; Buttery, R. J.; La Haye, R. J.; Strait, E. J. [General Atomics, P.O. Box 85608, San Diego, California 92121 (United States); Logan, N. C.; Park, J.-K.; Solomon, W. M. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Shiraki, D.; Hanson, J. M. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)
2014-07-15
Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable “dominant” mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios, the plasma is found to be, respectively, 7× and 20× less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.
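The geometry behind "nulling coupling to the dominant mode" is a projection. A hypothetical sketch with random vectors standing in for poloidal-harmonic spectra (illustrative only, not DIII-D data):

```python
import numpy as np

# Sketch of the EFC picture: represent the applied coil field and the
# plasma's dominant mode as vectors of poloidal harmonics; optimal
# correction nulls the dominant-mode component, leaving the orthogonal
# remainder to which the experiment finds the plasma far less sensitive.
rng = np.random.default_rng(7)
dominant = rng.normal(size=10)
dominant /= np.linalg.norm(dominant)      # unit dominant-mode spectrum

applied = rng.normal(size=10)             # error field from a coil set
coupling = applied @ dominant             # overlap with the dominant mode
orthogonal = applied - coupling * dominant

# The residual field is fully decoupled from the dominant mode.
print(bool(abs(float(orthogonal @ dominant)) < 1e-12))
```

Any EFC coil geometry that can cancel the `coupling` component recovers most of the benefit, which is the practical conclusion the abstract draws.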
The use of concept maps to detect and correct concept errors (mistakes)
Directory of Open Access Journals (Sweden)
Ladislada del Puy Molina Azcárate
2013-02-01
Full Text Available This work proposes to detect and correct concept errors (EECC) in order to obtain Meaningful Learning (AS). The behaviourist model does not respond to the demand of meaningful learning, which implies gathering thought, feeling and action to lead students to both commitment and responsibility. In order to respond to society's competition in knowledge and information, it is necessary to change the way of teaching and learning (from a behaviourist model to a constructivist model). In this context it is important not only to learn meaningfully but also to create knowledge, so as to develop discursive, creative and critical thought, and the EECC are an obstacle to achieving this. This study tries to get rid of EECC in order to obtain meaningful learning. For this, it is essential to elaborate a Teaching Module (MI). This Teaching Module implies the treatment of concept errors by a teacher able to change the dynamic of the group in the classroom. The MI was used in sixth grade of primary school and first grade of secondary school in some state-assisted schools in the north of Argentina (Tucumán and Jujuy). After evaluation, the results showed great and positive changes in the experimental groups in both attitude and academic results. Meaningful learning was shown through pupils' creativity, their expressions and also their ability to put what they learned into practice in everyday life.
Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)
Maulia, Eva; Miftahuddin; Sofyan, Hizir
2018-05-01
A country has some important parameters for achieving economic welfare, such as tax revenues and inflation. One of the largest revenues of the state budget in Indonesia comes from the tax sector. Besides, the rate of inflation occurring in a country can be used as one measure of the economic problems that the country is facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research because the data used are multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate from it. The results show that the best model for tax revenue and inflation rate data in Banda Aceh City is the VECM with 3rd optimal lag, VECM(3). Of the seven models formed, one is significant: the income tax revenue model. The predictions of tax revenue and the inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have a minimum error value compared to other models.
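Choosing the "optimal lag" is an information-criterion search. A simplified sketch of how an AIC scan recovers the lag order, using a plain VAR on simulated data rather than a VECM (the cointegration term is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(3) so that lag selection should recover p = 3.
n, p_true = 600, 3
A = [np.array([[0.4, 0.1], [0.0, 0.3]]),
     np.array([[0.2, 0.0], [0.1, 0.2]]),
     np.array([[0.2, 0.0], [0.0, 0.2]])]
y = np.zeros((n, 2))
for t in range(p_true, n):
    y[t] = sum(A[i] @ y[t - 1 - i] for i in range(p_true)) \
        + rng.normal(scale=0.5, size=2)

def var_aic(y, p):
    """Fit a VAR(p) by OLS and return Akaike's information criterion."""
    T = len(y) - p
    X = np.column_stack([np.ones(T)] +
                        [y[p - 1 - i : len(y) - 1 - i] for i in range(p)])
    B, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ B
    sigma = resid.T @ resid / T          # residual covariance
    return np.log(np.linalg.det(sigma)) + 2 * B.size / T

best_p = min(range(1, 7), key=lambda p: var_aic(y, p))
print(best_p)
```

The same scan over candidate lags, applied to a VECM likelihood, is what singles out VECM(3) in the paper.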
Development and characterisation of FPGA modems using forward error correction for FSOC
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8 rate low density parity check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acoustic-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5km.
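The laboratory channel model, log-normal fading with a controllable scintillation index, is easy to reproduce in Monte Carlo. A sketch for uncoded OOK (parameters are arbitrary, and the real modem's LDPC coding and interleaving are deliberately ignored):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo bit-error-ratio of uncoded OOK through log-normal fading.
# The decision threshold tracks the faded amplitude, i.e. perfect channel
# knowledge at the receiver is assumed, which is optimistic.
def ber_ook(snr_db, scint_index, n=200_000):
    snr = 10 ** (snr_db / 10)
    # Unit-mean log-normal irradiance with var(I) / mean(I)^2 = scint_index.
    sigma2 = np.log(1 + scint_index)
    I = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=n)
    bits = rng.integers(0, 2, n)
    rx = bits * I * np.sqrt(snr) + rng.normal(size=n)   # unit-variance noise
    detected = rx > 0.5 * I * np.sqrt(snr)
    return float(np.mean(detected != bits))

for si in (0.0, 0.2, 0.5):
    print(si, ber_ook(12, si))   # BER grows with the scintillation index
```

Deep fades dominate the average error rate, which is exactly why the modem pairs forward error correction with an interleaver long enough to break up fade-induced error bursts.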
An investigation of coupling of the internal kink mode to error field correction coils in tokamaks
International Nuclear Information System (INIS)
Lazarus, E.A.
2013-01-01
The coupling of the internal kink to an external m/n = 1/1 perturbation is studied for profiles that are known to result in a saturated internal kink in the limit of a cylindrical tokamak. It is found from three-dimensional equilibrium calculations that, for A ≈ 30 circular plasmas and A ≈ 3 elliptical shapes, this coupling of the boundary perturbation to the internal kink is strong; i.e., the amplitude of the m/n = 1/1 structure at q = 1 is large compared with the amplitude applied at the plasma boundary. Evidence suggests that this saturated internal kink, resulting from small field errors, is an explanation for the TEXTOR and JET measurements of q0 remaining well below unity throughout the sawtooth cycle, as well as the distinction between sawtooth effects on the q-profile observed in TEXTOR and DIII-D. It is proposed that this excitation, which could readily be applied with error field correction coils, be explored as a mechanism for controlling sawtooth amplitudes in high-performance tokamak discharges. This result is then combined with other recent tokamak results to propose an L-mode approach to fusion in tokamaks. (paper)
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Energy Technology Data Exchange (ETDEWEB)
Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
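The "error correction" in the mixture model amounts to separating intrinsic scatter from measurement variance. A one-component sketch of that subtraction on simulated colours (the full ECGMM applies the same idea per mixture component inside its EM updates):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated red-sequence colours: Gaussian intrinsic scatter convolved
# with per-galaxy measurement errors of known (reported) size.
n = 5000
intrinsic_sigma = 0.05
err = rng.uniform(0.02, 0.10, size=n)          # reported colour errors
colour = rng.normal(1.2, intrinsic_sigma, n) + rng.normal(0.0, err)

# The naive scatter is inflated by measurement error...
naive = float(colour.std())

# ...while an error-corrected estimate subtracts the mean error variance,
# recovering the intrinsic scatter of the population.
corrected = float(np.sqrt(colour.var() - np.mean(err ** 2)))

print(round(naive, 3), round(corrected, 3))
```

The corrected value lands near the true intrinsic scatter of 0.05, while the naive estimate is biased high, which is the effect the paper's method removes from the red-sequence width measurements.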
PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL
International Nuclear Information System (INIS)
Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin
2009-01-01
A no-go theorem for a two-dimensional self-correcting quantum memory based on stabilizer codes
International Nuclear Information System (INIS)
Bravyi, Sergey; Terhal, Barbara
2009-01-01
We study properties of stabilizer codes that permit a local description on a regular D-dimensional lattice. Specifically, we assume that the stabilizer group of a code (the gauge group for subsystem codes) can be generated by local Pauli operators such that the support of any generator is bounded by a hypercube of size O(1). Our first result concerns the optimal scaling of the distance d with the linear size of the lattice L. We prove an upper bound d = O(L^(D-1)) which is tight for D=1, 2. This bound applies to both subspace and subsystem stabilizer codes. Secondly, we analyze the suitability of stabilizer codes for building a self-correcting quantum memory. Any stabilizer code with geometrically local generators can be naturally transformed to a local Hamiltonian penalizing states that violate the stabilizer condition. A degenerate ground state of this Hamiltonian corresponds to the logical subspace of the code. We prove that for D=1, 2, different logical states can be mapped into each other by a sequence of single-qubit Pauli errors such that the energy of all intermediate states is upper bounded by a constant independent of the lattice size L. The same result holds if there are unused logical qubits that are treated as 'gauge qubits'. It demonstrates that a self-correcting quantum memory cannot be built using stabilizer codes in dimensions D=1, 2. This result is in sharp contrast with the existence of a classical self-correcting memory in the form of a two-dimensional (2D) ferromagnet. Our results leave open the possibility for a self-correcting quantum memory based on 2D subsystem codes or on 3D subspace or subsystem codes.
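The constant energy barrier at the heart of the no-go result can be seen in the 1D case in a few lines: walk a repetition-code (Ising) chain from one logical state to the other and watch the excess energy stay at a constant, L-independent value:

```python
import numpy as np

# 1D repetition code: stabilizers are Z_i Z_{i+1}, and the Hamiltonian
# H = -sum_i Z_i Z_{i+1} penalises violated checks. Walk from logical 0
# (all spins +1) to logical 1 (all spins -1) by flipping one spin at a
# time and track the energy above the ground state at every step.
L = 20
spins = np.ones(L, dtype=int)

def energy(s):
    return -int(np.sum(s[:-1] * s[1:]))

e0 = energy(spins)
barrier = 0
for i in range(L):
    spins[i] = -1                      # flip spins left to right
    barrier = max(barrier, energy(spins) - e0)

print(barrier)   # 2, independent of L: a single domain wall costs O(1)
```

The walk only ever carries one domain wall, so the barrier is 2 however large L is; this is the mechanism (generalised to D = 2) that rules out self-correction, and it is exactly what a 2D ferromagnet avoids, since there the barrier grows with L.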
Tradeoff between energy and error in the discrimination of quantum-optical devices
Energy Technology Data Exchange (ETDEWEB)
Bisio, Alessandro; Dall'Arno, Michele; D'Ariano, Giacomo Mauro [Quit group, Dipartimento di Fisica "A. Volta", via Bassi 6, I-27100 Pavia (Italy) and Istituto Nazionale di Fisica Nucleare, Gruppo IV, via Bassi 6, I-27100 Pavia (Italy)]
2011-07-15
We address the problem of energy-error tradeoff in the discrimination between two linear passive quantum optical devices with a single use. We provide an analytical derivation of the optimal strategy for beamsplitters and an iterative algorithm converging to the optimum in the general case. We then compare the optimal strategy with a simpler strategy using coherent input states and homodyne detection. It turns out that the former requires much less energy in order to achieve the same performances.
Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data
Directory of Open Access Journals (Sweden)
Jinhua Han
2017-01-01
Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, for collecting the road profiles experienced by a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitude signals are separated from the low-frequency road profile by the method of sliding block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road profile measurement at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
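The frequency-division step can be sketched with a zero-phase FFT mask; the sample spacing, cutoff and signal shapes below are arbitrary stand-ins, not the paper's instrumentation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Frequency-division sketch: attitude-induced drift lives in the low band,
# road texture in the high band, so split the measured profile in the
# frequency domain and treat the two parts separately.
n, dx = 1024, 0.1                      # samples, spacing in metres (assumed)
x = np.arange(n) * dx
f_low = 5 / (n * dx)                   # exactly 5 cycles over the record
low = 0.5 * np.sin(2 * np.pi * f_low * x)     # slow, attitude-like part
high = 0.02 * rng.normal(size=n)              # road roughness
profile = low + high

spec = np.fft.rfft(profile)
freq = np.fft.rfftfreq(n, d=dx)
cutoff = 0.5                           # cycles per metre (assumed)
low_part = np.fft.irfft(np.where(freq < cutoff, spec, 0), n)
high_part = profile - low_part

err = float(np.sqrt(np.mean((low_part - low) ** 2)))
print(err)                             # small: the slow component is recovered
```

In the paper the low band is then matched against the gyro attitude model while the high band keeps the road texture; the FFT mask here is only the simplest possible splitter.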
International Nuclear Information System (INIS)
Herzog, Ulrike; Bergou, Janos A.
2004-01-01
We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is ambiguous discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of getting an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum-error probability achievable in ambiguous discrimination, and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case where the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state, or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability.
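Both figures of merit have closed forms for two pure states with overlap s and equal priors, which makes the "at least twice as large" bound easy to check numerically:

```python
import numpy as np

# Two pure states with overlap s = |<a|b>| and equal prior probabilities.
# Minimum-error (Helstrom) discrimination: P_err = (1 - sqrt(1 - s^2)) / 2.
# Optimal unambiguous discrimination: failure probability P_fail = s.
s = np.linspace(0.0, 1.0, 101)
p_err = 0.5 * (1.0 - np.sqrt(1.0 - s ** 2))
p_fail = s

# The abstract's bound: the failure probability is at least twice the
# minimum-error probability, for every overlap.
print(bool(np.all(p_fail >= 2 * p_err)))
```

Equality holds only at s = 0 and s = 1; in between, giving up conclusiveness costs strictly more than twice the minimum error.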
Initial Results of Using Daily CT Localization to Correct Portal Error in Prostate Cancer
International Nuclear Information System (INIS)
Lattanzi, Joseph; McNeely, Shawn; Barnes, Scott; Das, Indra; Schultheiss, Timothy E; Hanks, Gerald E.
1997-01-01
Purpose: To evaluate the use of daily CT simulation in prostate cancer to correct errors in portal placement and organ motion. Improved localization with this technique should allow the reduction of target margins and facilitate dose escalation in high-risk patients while minimizing the risk of normal tissue morbidity. Methods and Materials: Five patients underwent standard CT simulation with the alpha cradle cast, IV contrast, and urethrogram. All were initially treated to 46 Gy in a four-field conformal technique which included the prostate, seminal vesicles and pelvic lymph nodes (GTV1). The prostate or prostate and seminal vesicles (GTV2) then received 56 Gy with a 1.0 cm margin to the PTV. At 50 Gy a second CT simulation was performed with IV contrast, urethrogram and the alpha cradle secured to a rigid sliding board. The prostate was contoured, a new isocenter generated, and surface markers placed. Prostate-only treatment portals for the final conedown (GTV3) were created with 0.25 cm isodose margins to the PTV. The final six fractions in 2 patients with favorable disease and eight fractions in 3 patients with unfavorable disease were delivered using the daily CT technique. On each treatment day the patient was placed in his cast on the sliding board and a CT scan performed. The daily isocenter was calculated in the A/P and lateral dimension and compared to the 50 Gy CT simulation isocenter. Couch and surface marker shifts were calculated to produce perfect portal alignment. To maintain positioning, the patient was transferred to a gurney while on the sliding board in his cast, transported to the treatment room and then transferred to the treatment couch. The patient was then treated to the corrected isocenter. Portal films and real time images were obtained for each portal. Results: Utilizing CT-CT image registration (fusion) of the daily and 50 Gy baseline CT scans, the isocenter changes were quantified to reflect the contribution of positional
Quantum Fourier Transform Over Galois Rings
Zhang, Yong
2009-01-01
Galois rings are regarded as "building blocks" of a finite commutative ring with identity. Many papers on classical error correction codes over Galois rings have been published. As an important warm-up before exploring quantum algorithms and quantum error correction codes over Galois rings, we study the quantum Fourier transform (QFT) over Galois rings and prove it can be efficiently performed on a quantum computer. The properties of the QFT over Galois rings lead to the quantum algorithms...
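Over Z_N the QFT is simply the unitary DFT matrix; the Galois-ring version replaces the complex exponential with an additive character of the ring, but the unitarity requirement that makes it a valid quantum gate looks the same. A small numerical sketch:

```python
import numpy as np

# The QFT on Z_N as an explicit matrix: F[j, k] = omega^(j*k) / sqrt(N),
# with omega a primitive N-th root of unity. Over a Galois ring, omega^(j*k)
# is replaced by an additive character chi(j*k) of the ring; the check below
# (F is unitary, hence implementable as a quantum gate) carries over.
N = 8
omega = np.exp(2j * np.pi / N)
F = np.array([[omega ** (j * k) for k in range(N)]
              for j in range(N)]) / np.sqrt(N)

# Unitarity: F @ F^dagger equals the identity.
print(bool(np.allclose(F @ F.conj().T, np.eye(N))))
```

Efficiency on a quantum computer comes from factoring F into O(n^2) two-qubit gates for N = 2^n, which is the circuit-level content of the result the abstract states.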
Six-Correction Logic (SCL) Gates in Quantum-dot Cellular Automata (QCA)
Directory of Open Access Journals (Sweden)
Md. Anisur Rahman
2015-11-01
Full Text Available Quantum Dot Cellular Automata (QCA) is a promising nanotechnology in quantum electronics owing to its ultra-low power consumption, faster speed and small size. It has significant advantages over Complementary Metal-Oxide-Semiconductor (CMOS) technology. This paper presents a novel QCA representation of the Six-Correction Logic (SCL) gate based on the QCA logic gates Maj3, Maj-AND and Maj-OR. In order to design and verify the functionality of the proposed layout, QCADesigner, a familiar QCA simulator, has been employed. The simulation results confirm the correctness of the claims and the design's usefulness in digital circuits.
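The majority-gate algebra behind Maj-AND and Maj-OR is compact enough to state in code. A logic-level sketch only (no cell layout or QCADesigner involved):

```python
def maj3(a, b, c):
    """Three-input majority vote, the primitive QCA logic gate."""
    return (a & b) | (b & c) | (a & c)

# Fixing one input of the majority gate to 0 or 1 specialises it to AND
# or OR, which is how QCA layouts derive Maj-AND and Maj-OR cells.
AND = lambda a, b: maj3(a, b, 0)
OR = lambda a, b: maj3(a, b, 1)

print([AND(1, 1), AND(1, 0), OR(0, 0), OR(1, 0)])  # [1, 0, 0, 1]
```

Together with an inverter, the majority gate is functionally complete, which is why these three gates suffice as building blocks for circuits like the SCL gate.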
Optical correction of refractive error for preventing and treating eye symptoms in computer users.
Heus, Pauline; Verbeek, Jos H; Tikka, Christina
2018-04-10
Computer users frequently complain about problems with seeing and functioning of the eyes. Asthenopia is a term generally used to describe symptoms related to (prolonged) use of the eyes, like ocular fatigue, headache, pain or aching around the eyes, and burning and itchiness of the eyelids. The prevalence of asthenopia during or after work on a computer ranges from 46.3% to 68.5%. Uncorrected or under-corrected refractive error can contribute to the development of asthenopia. A refractive error is an error in the focusing of light by the eye and can lead to reduced visual acuity. There are various possibilities for optical correction of refractive errors, including eyeglasses, contact lenses and refractive surgery. To examine the evidence on the effectiveness, safety and applicability of optical correction of refractive error for reducing and preventing eye symptoms in computer users, we searched the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; Embase; Web of Science; and OSH Update, all to 20 December 2017. Additionally, we searched trial registries and checked references of included studies. We included randomised controlled trials (RCTs) and quasi-randomised trials of interventions evaluating optical correction for computer workers with refractive error for preventing or treating asthenopia and their effect on health-related quality of life. Two authors independently assessed study eligibility and risk of bias, and extracted data. Where appropriate, we combined studies in a meta-analysis. We included eight studies with 381 participants. Three were parallel-group RCTs, three were cross-over RCTs and two were quasi-randomised cross-over trials. All studies evaluated eyeglasses; there were no studies that evaluated contact lenses or surgery. Seven studies evaluated computer glasses with at least one focal area for the distance of the computer screen, with or without additional focal areas, in presbyopic persons. Six studies compared computer
Quantum computers and quantum computations
International Nuclear Information System (INIS)
Valiev, Kamil' A
2005-01-01
This review outlines the principles of operation of quantum computers and their elements. The theory of ideal computers that do not interact with the environment and are immune to quantum decohering processes is presented. Decohering processes in quantum computers are investigated. The review considers methods for correcting quantum computing errors arising from the decoherence of the state of the quantum computer, as well as possible methods for the suppression of the decohering processes. A brief enumeration of proposed quantum computer realizations concludes the review. (reviews of topical problems)
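The error-correction methods the review surveys start from the three-qubit bit-flip code, which can be walked through directly on state vectors. A sketch (the amplitudes and the error location are arbitrary choices for illustration):

```python
import numpy as np

# Three-qubit bit-flip code, the canonical introductory example of active
# error correction: encode a|0> + b|1> as a|000> + b|111>, let one X error
# occur, then locate it from the two parity checks Z0Z1 and Z1Z2.
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b            # logical encoding

def apply_x(psi, qubit):
    """Flip the given qubit (0 = leftmost bit) of a 3-qubit state vector."""
    out = np.empty_like(psi)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = psi[i]
    return out

def syndrome(psi):
    """Parities (Z0Z1, Z1Z2); identical on both basis states with support."""
    i = int(np.flatnonzero(psi)[0])
    bits = [(i >> (2 - q)) & 1 for q in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

corrupted = apply_x(state, 1)                # X error on the middle qubit
s = syndrome(corrupted)
lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
recovered = corrupted if lookup[s] is None else apply_x(corrupted, lookup[s])

print(s, bool(np.allclose(recovered, state)))   # (1, 1) True
```

The key point the review makes is visible here: the syndrome identifies the error without measuring, and hence without disturbing, the encoded amplitudes a and b.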
Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo.
Krogel, Jaron T; Kent, P R C
2017-06-28
Growth in computational resources has led to the application of real space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed node energy and estimate the localization error in both the locality approximation and the T-moves schemes for the Ce atom in charge states 3+ and 4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.
International Nuclear Information System (INIS)
Yamanashi, Yuki; Masubuchi, Kota; Yoshikawa, Nobuyuki
2016-01-01
The relationship between the timing margin and the error rate of large-scale single flux quantum logic circuits is quantitatively investigated to establish a timing design guideline. We observed that fluctuation in the set-up/hold time of single flux quantum logic gates caused by thermal noise is the most probable origin of logical errors in large-scale single flux quantum circuits. The appropriate timing margin for stable operation of a large-scale logic circuit is discussed by taking into account the fluctuation of set-up/hold time and the timing jitter in single flux quantum circuits. As a case study, the dependence of the error rate of a 1-million-bit single flux quantum shift register on the timing margin is statistically analyzed. The result indicates that adjustment of the timing margin and the bias voltage is important for stable operation of a large-scale SFQ logic circuit.
Energy Technology Data Exchange (ETDEWEB)
Yamanashi, Yuki, E-mail: yamanasi@ynu.ac.jp [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan); Masubuchi, Kota; Yoshikawa, Nobuyuki [Department of Electrical and Computer Engineering, Yokohama National University, Tokiwadai 79-5, Hodogaya-ku, Yokohama 240-8501 (Japan)
2016-11-15
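Under a Gaussian jitter model, the link between timing margin and error rate is the Q-function, which already shows why a modest margin can suffice even for a million-bit register. A sketch (the unit jitter scale is an assumption for illustration, not a measured value):

```python
from math import erfc, sqrt

# Gaussian timing model: a gate errs when the thermal-noise fluctuation of
# its set-up/hold time exceeds the timing margin, so the per-gate error
# rate is roughly Q(margin / sigma). The margin needed for a given error
# rate therefore grows only like sqrt(log(1 / BER)).
def ber(margin, sigma):
    """Per-operation error rate for a given margin and jitter std."""
    return 0.5 * erfc(margin / (sigma * sqrt(2)))   # the Gaussian Q-function

sigma = 1.0   # set-up/hold time jitter, arbitrary time units (assumed)
for m in (2, 4, 6):
    print(m, ber(m, sigma))
```

Because the error rate falls super-exponentially in the margin, a statistical analysis like the paper's can trade a small extra margin (and a tuned bias voltage) for many orders of magnitude in circuit-level reliability.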
Quantum measurement corrections to CIDNP in photosynthetic reaction centers
International Nuclear Information System (INIS)
Kominis, Iannis K
2013-01-01
Chemically induced dynamic nuclear polarization is a signature of spin order appearing in many photosynthetic reaction centers. Such polarization, significantly enhanced above thermal equilibrium, is known to result from the nuclear spin sorting inherent in the radical pair mechanism underlying long-lived charge-separated states in photosynthetic reaction centers. We will show here that the recently understood fundamental quantum dynamics of radical-ion-pair reactions open up a new and completely unexpected pathway toward obtaining chemically induced dynamic nuclear polarization signals. The fundamental decoherence mechanism inherent in the recombination process of radical pairs is shown to produce nuclear spin polarizations of the order of 10^4 times (or more) higher than the thermal equilibrium value at the Earth's magnetic field relevant to natural photosynthesis. This opens up the possibility of a fundamentally new exploration of the biological significance of high nuclear polarizations in photosynthesis. (paper)
Meson exchange current corrections to magnetic moments in quantum hadro-dynamics
Energy Technology Data Exchange (ETDEWEB)
Morse, T M; Price, C E; Shepard, J R [Colorado Univ., Boulder (USA). Dept. of Physics
1990-11-15
We have calculated pion exchange current corrections to the magnetic moments of closed shell ±1 particle nuclei near A=16 and 40 within the framework of quantum hadro-dynamics (QHD). We find that the correction is significant and that, in general, the agreement of the QHD isovector moments with experiment is worsened. Comparisons to previous non-relativistic calculations are also made. (orig.)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
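The bias-estimation step described above — time-averaging the analysis increments over the 6-hr window and adding the result as a forcing term in the model tendency — can be sketched as follows. All arrays, values, and the simple stepping function here are invented for illustration and are not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cycles, n_grid = 200, 50
# Invented "true" systematic model error per 6-hr window
true_bias = 0.3 * np.sin(np.linspace(0, 2 * np.pi, n_grid))

# Simulated analysis increments: systematic part plus random analysis noise
increments = true_bias + 0.5 * rng.standard_normal((n_cycles, n_grid))

# Bias estimate: time mean of increments, divided by the 6-hr window,
# assuming (as in the text) that initial model errors grow linearly
bias_per_hour = increments.mean(axis=0) / 6.0

def step_with_correction(state, tendency, dt_hours):
    """One model step with the estimated bias added as a forcing term."""
    return state + dt_hours * (tendency + bias_per_hour)
```

With enough analysis cycles, the random part of the increments averages out and the estimate converges to the systematic component — the same reason the seasonal-mean biases in the abstract are robust.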
Intelligent error correction method applied on an active pixel sensor based star tracker
Schmidt, Uwe
2005-10-01
Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers are today based on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system ASTRO1 to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker, "ASTRO APS", as successor of the CCD based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-windowing read-out, and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts automatically to, e.g., increasing DSNU and newly appearing white spots without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element for the design of star trackers for long term satellite applications like
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with few correctable bits is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.
Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics
International Nuclear Information System (INIS)
Moss, A.R.L.
2000-01-01
Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics
Energy Technology Data Exchange (ETDEWEB)
Moss, A.R.L
2000-07-01
Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
Quantum corrections to Bekenstein–Hawking black hole entropy and gravity partition functions
International Nuclear Information System (INIS)
Bytsenko, A.A.; Tureanu, A.
2013-01-01
Algebraic aspects of the computation of partition functions for quantum gravity and black holes in AdS_3 are discussed. We compute the sub-leading quantum corrections to the Bekenstein–Hawking entropy. It is shown that the quantum corrections to the classical result can be included systematically by making use of the comparison with conformal field theory partition functions, via the AdS_3/CFT_2 correspondence. This leads to a better understanding of the role of modular and spectral functions, from the point of view of the representation theory of infinite-dimensional Lie algebras. Besides, the sum of known quantum contributions to the partition function can be presented in a closed form, involving the Patterson–Selberg spectral function. These contributions can be reproduced in a holomorphically factorized theory whose partition functions are associated with the formal characters of the Virasoro modules. We propose a spectral function formulation for quantum corrections to the elliptic genus from supergravity states.
Directory of Open Access Journals (Sweden)
Dan Tulpan
2013-01-01
Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
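As an illustration of assigning error-correcting code words to symbols, here is a minimal binary Hamming(7,4) encoder/decoder. HyDEn's actual codes are custom-built quaternary (DNA-alphabet) codes, so this binary sketch is only meant to convey the underlying single-error detection-and-correction idea, not the paper's construction:

```python
import numpy as np

# Systematic generator matrix G = [I | P] and parity-check matrix H = [P^T | I]
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """Map a 4-bit message to its 7-bit Hamming code word."""
    return np.dot(bits4, G) % 2

def correct(word7):
    """Correct up to one flipped bit using the syndrome."""
    syndrome = np.dot(H, word7) % 2
    if syndrome.any():
        # the syndrome equals the column of H at the error position
        pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
        word7 = word7.copy()
        word7[pos] ^= 1
    return word7
```

Every single-bit error produces a distinct nonzero syndrome (a distinct column of H), which is what lets the decoder both detect and correct it.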
DEFF Research Database (Denmark)
Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane
2010-01-01
A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log^2 n) PIR (LFCPIR) to O(log n). However, in this paper we point out a fatal overflow error contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as that of the previous LFCPIR.
DEFF Research Database (Denmark)
Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy
2004-01-01
An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (μ = ±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.
Improving Performance in Quantum Mechanics with Explicit Incentives to Correct Mistakes
Brown, Benjamin R.; Mason, Andrew; Singh, Chandralekha
2016-01-01
An earlier investigation found that the performance of advanced students in a quantum mechanics course did not automatically improve from midterm to final exam on identical problems even when they were provided the correct solutions and their own graded exams. Here, we describe a study, which extended over four years, in which upper-level…
Quantum Scalar Corrections to the Gravitational Potentials on de Sitter Background
Park, Sohyun; Prokopec, Tomislav; Woodard, R. P.
We employ the graviton self-energy induced by a massless, minimally coupled (MMC) scalar on de Sitter background to compute the quantum corrections to the gravitational potentials of a static point particle with a mass $M$. The Schwinger-Keldysh formalism is used to derive real and causal effective
Wormholes in higher dimensions with non-linear curvature terms from quantum gravity corrections
Energy Technology Data Exchange (ETDEWEB)
El-Nabulsi, Ahmad Rami [Neijiang Normal University, Neijiang, Sichuan (China)
2011-11-15
In this work, we discuss a 7-dimensional universe in the presence of a static traversable wormhole and a decaying cosmological constant and dominated by higher-order curvature effects expected from quantum gravity corrections. We confirm the existence of wormhole solutions within Lovelock gravity. Many interesting and attractive features are discussed in some detail.
Diffusion in the kicked quantum rotator by random corrections to a linear and sine field
International Nuclear Information System (INIS)
Hilke, M.; Flores, J.C.
1992-01-01
We discuss the diffusion in momentum space, of the kicked quantum rotator, by introducing random corrections to a linear and sine external field. For the linear field we obtain a linear diffusion behavior identical to the case with zero average in the external field. But for the sine field, accelerator modes with quadratic diffusion are found for particular values of the kicking period. (orig.)
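The setup can be sketched numerically with a standard split-step (kick plus free rotation) evolution of the quantum kicked rotator, with a small random correction added to the kick field as in the abstract. All parameter values (K, effective ħ, noise level, grid size) are arbitrary examples, not taken from the paper:

```python
import numpy as np

N, K, hbar, kicks = 512, 5.0, 1.0, 200
rng = np.random.default_rng(3)

theta = 2 * np.pi * np.arange(N) / N          # angle grid
p = hbar * (np.arange(N) - N // 2)            # centered momentum grid

# Start as a momentum eigenstate at p = 0, then go to the angle representation
phi0 = np.zeros(N, complex)
phi0[N // 2] = 1.0
psi = np.fft.ifft(np.fft.ifftshift(phi0))

p2 = []                                       # <p^2> after each kick
for _ in range(kicks):
    eps = 0.1 * rng.standard_normal()         # random correction to the kick
    psi *= np.exp(-1j * (K + eps) * np.cos(theta) / hbar)   # kick (angle rep)
    phi = np.fft.fftshift(np.fft.fft(psi))
    phi *= np.exp(-1j * p**2 / (2 * hbar))    # free rotation (momentum rep)
    prob = np.abs(phi)**2
    p2.append(np.sum(prob * p**2) / np.sum(prob))
    psi = np.fft.ifft(np.fft.ifftshift(phi))
```

Tracking `p2` over many kicks shows the growth of the momentum-space spread; with noise in the kick field the dynamical localization of the clean rotator is degraded and diffusive growth persists.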
International Nuclear Information System (INIS)
Boyanovsky, D.; Vega, H.J. de; Sanchez, N.G.
2005-01-01
We obtain the effective inflaton potential during slow-roll inflation by including the one-loop quantum corrections to the energy momentum tensor from scalar curvature and tensor perturbations as well as from light scalars and Dirac fermions coupled to the inflaton. During slow-roll inflation there is an unambiguous separation between super- and subhorizon contributions to the energy momentum tensor. The superhorizon part is determined by the curvature perturbations and scalar field fluctuations: both feature infrared enhancements as the inverse of a combination of slow-roll parameters which measure the departure from scale invariance in each case. Fermions and gravitons do not exhibit infrared divergences. The subhorizon part is completely specified by the trace anomaly of the fields with different spins and is solely determined by the space-time geometry. The one-loop corrections to the amplitude of curvature and tensor perturbations are obtained to leading order in slow roll and in the (H/M_Pl)^2 expansion. A complete assessment of the backreaction problem up to one loop including bosons and fermions is provided. The result validates the effective field theory description of inflation and confirms the robustness of the inflationary paradigm to quantum fluctuations. Quantum corrections to the power spectra are expressed in terms of the CMB observables: n_s, r, and dn_s/d ln k. Trace anomalies (especially the graviton part) dominate these quantum corrections in a definite direction: they enhance the scalar curvature fluctuations and reduce the tensor fluctuations.
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
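The core issue — decoding in the presence of the "do not care" (zero) symbol — can be sketched as follows. The 3-class coding matrix and the normalized-agreement decoding rule below are invented for illustration; they are not the paper's proposed measures, but they show how positions marked 0 must be excluded so that code words with many zeros are not unfairly scored:

```python
import numpy as np

# rows = classes, columns = binary classifiers;
# +1/-1 = class label seen by that classifier, 0 = "do not care"
M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])

def decode(outputs):
    """Pick the class whose code word best agrees with the classifier
    outputs, counting only the non-zero (cared-about) positions."""
    best, best_score = None, -np.inf
    for c, code in enumerate(M):
        mask = code != 0
        # agreement normalized by the number of cared-about positions,
        # so classes with different numbers of zeros are comparable
        score = np.sum(outputs[mask] * code[mask]) / mask.sum()
        if score > best_score:
            best, best_score = c, score
    return best
```

A plain Hamming distance over all positions would implicitly penalize (or reward) the zero entries, which is exactly the bias the paper's taxonomy identifies.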
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.
2017-11-27
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
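The compensation step — fitting the reference sample's artificial displacements with a parametric polynomial model and subtracting it from the test sample's field — can be sketched in one dimension as follows. The linear drift model, noise level, and all values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.0, 10.0, 40)                # positions along the sample

# Invented artificial displacement from CT self-heating (linear drift)
artificial = 0.02 * z + 0.05
# What DVC would measure on the stationary reference sample
measured_ref = artificial + 0.002 * rng.standard_normal(z.size)

# Fit a parametric polynomial model to the reference-sample displacements
coeffs = np.polyfit(z, measured_ref, deg=1)

# Remove the modeled artificial deformation from the test sample's field
true_disp = 0.1 * np.sin(z)                   # the "real" deformation
measured_test = true_disp + artificial
corrected = measured_test - np.polyval(coeffs, z)
```

Because the reference sample is stationary, anything DVC measures on it is by construction artifact, which is what justifies subtracting the fitted model from the test field.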
Directory of Open Access Journals (Sweden)
Trofimov Ivan D.
2017-01-01
Full Text Available The paper re-examines the “stylized facts” of balanced growth in developed economies, looking specifically at the capital productivity variable. The economic data are obtained from the European Commission AMECO database, spanning the 1961-2014 period. For a sample of 22 OECD economies, the paper applies univariate LM unit root tests with one or two structural breaks, and estimates error-correction and linear trend models with breaks. It is shown that diverse statistical patterns were present across economies and overall mixed evidence is provided as to the stability of capital productivity and balanced growth in general. Specifically, both upward and downward trends in capital productivity were present, while in several economies mean reversion and random walk patterns were observed. The data and results were largely in line with major theoretical explanations pertaining to capital productivity. With regard to determinants of the capital productivity movements, the structure of capital stock and the prices of capital goods were likely the most salient.
Oil price fluctuations and employment in Kern County: A Vector Error Correction approach
International Nuclear Information System (INIS)
Michieka, Nyakundi M.; Gearhart, Richard
2015-01-01
Kern County is one of the country's largest oil producing regions, in which the oil industry employs a significant fraction of the labor force in the county. In this study, the short- and long-run effects of oil price fluctuations on employment in Kern County are investigated using a Vector Error Correction model (VECM). Empirical results over the period 1990:01 to 2015:03 suggest long-run causality running from both WTI and Brent oil prices to employment. No causality is detected in the short run. Kern County should formulate appropriate policies, which take into account the fact that changes in oil prices have long-term effects on employment rather than short-term ones. - Highlights: • Kern County is California's largest oil producing region. • Historical data have shown increased employment during periods of high oil prices. • We study the short- and long-run effects of oil prices on employment in Kern County. • Results suggest long-run causality running from WTI and Brent to employment. • No causality is detected in the short run.
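The error-correction idea behind the study's VECM can be illustrated with a simpler single-equation, two-step (Engle-Granger style) sketch on simulated data: estimate the long-run relation by OLS, then let the lagged disequilibrium (error-correction) term drive the short-run dynamics. The series below are simulated random walks, not the study's WTI/Brent or employment data, and a full VECM would estimate the system jointly:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
oil = np.cumsum(rng.standard_normal(T))            # random-walk oil price
emp = 2.0 + 0.5 * oil + rng.standard_normal(T)     # cointegrated employment

# Step 1: long-run relation emp = a + b*oil; residual = disequilibrium
X = np.column_stack([np.ones(T), oil])
a, b = np.linalg.lstsq(X, emp, rcond=None)[0]
ect = emp - (a + b * oil)                          # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term
d_emp, d_oil = np.diff(emp), np.diff(oil)
Z = np.column_stack([np.ones(T - 1), d_oil, ect[:-1]])
gamma = np.linalg.lstsq(Z, d_emp, rcond=None)[0]
adjustment_speed = gamma[2]                        # negative: pulls back to equilibrium
```

A significantly negative adjustment speed is what "long-run causality" rests on: deviations from the long-run price-employment relation are gradually corrected.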
Assessment of cassava supply response in Nigeria using vector error correction model (VECM
Directory of Open Access Journals (Sweden)
Obayelu Oluwakemi Adeola
2016-12-01
Full Text Available The response of agricultural commodities to changes in price is an important factor in the success of any reform programme in the agricultural sector of Nigeria. The producers of traditional agricultural commodities, such as cassava, face the world market directly. Consequently, the producer price of cassava has become unstable, which is a disincentive for both its production and trade. This study investigated cassava supply response to changes in price. Data collected from FAOSTAT from 1966 to 2010 were analysed using the Vector Error Correction Model (VECM) approach. The results of the VECM for the estimation of short-run adjustment of the variables toward their long-run relationship showed a linear deterministic trend in the data and that area cultivated and own prices jointly explained 74% and 63% of the variation in Nigeria's cassava output in the short run and long run respectively. Cassava prices (P<0.001) and land cultivated (P<0.1) had a positive influence on cassava supply in the short run. The short-run price elasticity was 0.38, indicating that price policies were effective in the short-run promotion of cassava production in Nigeria. However, in the long run cassava supply was not significantly responsive to price incentives. This suggests that price policies are not effective in the long-run promotion of cassava production in the country owing to instability in governance and government policies.
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.; Pan, B.; Lubineau, Gilles
2017-01-01
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
Topological order and memory time in marginally-self-correcting quantum memory
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables a formal reduction to simpler classical memory problems. We then show that the memory time of the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.