Error detection in spoken human-machine interaction
Krahmer, E.J.; Swerts, M.G.J.; Theune, M.; Weegels, M.F.
2001-01-01
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
Error detection in spoken human-machine interaction
Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.
Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,
"Context and Spoken Word Recognition in a Novel Lexicon": Correction
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
2009-01-01
Reports an error in "Context and spoken word recognition in a novel lexicon" by Kathleen Pirog Revill, Michael K. Tanenhaus and Richard N. Aslin ("Journal of Experimental Psychology: Learning, Memory, and Cognition," 2008[Sep], Vol 34[5], 1207-1223). Figure 9 was inadvertently duplicated as Figure 10. Figure 9 in the original article was correct.…
Indian Academy of Sciences (India)
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
Correction of refractive errors
Directory of Open Access Journals (Sweden)
Vladimir Pfeifer
2005-10-01
Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who have the need to be less dependent of spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients.Conclusions: The introduction of excimer laser in refractive surgery has given the new opportunities of remodelling the cornea. The laser energy can be delivered on the stromal surface like in PRK or deeper on the corneal stroma by means of lamellar surgery. In LASIK flap is created with microkeratome in LASEK with ethanol and in epi-LASIK the ultra thin flap is created mechanically.
Thermodynamics of Error Correction
Directory of Open Access Journals (Sweden)
Pablo Sartori
2015-12-01
Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than {ε }-({dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Video Error Correction Using Steganography
Directory of Open Access Journals (Sweden)
Robie David L
2002-01-01
Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Error Correcting Codes -34 ...
Indian Academy of Sciences (India)
information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by train-.
Indian Academy of Sciences (India)
successful consumer products of all time - the Compact Disc. (CD) digital audio .... We can make ... only 2 t additional parity check symbols are required, to be able to correct t .... display information (contah'ling music related data and a table.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences?similar to algebraic coding,?and also briefly discuss the main results following the?other approach,?that uses the theory of rank metric codes for network error correction of representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an
Error correcting coding for OTN
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...... of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems....
Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
Correcting quantum errors with entanglement.
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-10-20
We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.
DEFF Research Database (Denmark)
Martinez Peñas, Umberto; Pellikaan, Ruud
2017-01-01
Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
Corrective Feedback, Spoken Accuracy and Fluency, and the Trade-Off Hypothesis
Chehr Azad, Mohammad Hassan; Farrokhi, Farahman; Zohrabi, Mohammad
2018-01-01
The current study was an attempt to investigate the effects of different corrective feedback (CF) conditions on Iranian EFL learners' spoken accuracy and fluency (AF) and the trade-off between them. Consequently, four pre-intermediate intact classes were randomly selected as the control, delayed explicit metalinguistic CF, extensive recast, and…
Error forecasting schemes of error correction at receiver
International Nuclear Information System (INIS)
Bhunia, C.T.
2007-08-01
To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
Iterative optimization of quantum error correcting codes
International Nuclear Information System (INIS)
Reimpell, M.; Werner, R.F.
2005-01-01
We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step
Opportunistic Error Correction for WLAN Applications
Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.
2008-01-01
The current error correction layer of IEEE 802.11a WLAN is designed for worst case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution adaptive ADC. The key part in the new proposed system is that only
Jokinen, Kristiina
2009-01-01
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides
Robot learning and error correction
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.
Volterra Filtering for ADC Error Correction
Directory of Open Access Journals (Sweden)
J. Saliga
2001-09-01
Full Text Available Dynamic non-linearity of analog-to-digital converters (ADCcontributes significantly to the distortion of digitized signals. Thispaper introduces a new effective method for compensation such adistortion based on application of Volterra filtering. Considering ana-priori error model of ADC allows finding an efficient inverseVolterra model for error correction. Efficiency of proposed method isdemonstrated on experimental results.
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems which are designed for this purpose are suffering from harming interaction with their surrounding environment or inaccuracy in control forces. Engineering different methods to combat errors in quantum devices are highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is the one that incorporates experimental challenges. This thesis is presented in two sections of open quantum system and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of Markovian regime, A new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section of quantum error correction is presented in three chapters of 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
Statistical mechanics of error-correcting codes
Kabashima, Y.; Saad, D.
1999-01-01
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
ecco: An error correcting comparator theory.
Ghirlanda, Stefano
2018-03-08
Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
Black Holes, Holography, and Quantum Error Correction
CERN. Geneva
2017-01-01
How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions? How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator? Why do such things happen only in gravitational theories? In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence. No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.
Triple-Error-Correcting Codec ASIC
Jones, Robert E.; Segallis, Greg P.; Boyd, Robert
1994-01-01
Coder/decoder constructed on single integrated-circuit chip. Handles data in variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as; personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.
Tensor Networks and Quantum Error Correction
Ferris, Andrew J.; Poulin, David
2014-07-01
We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.
Experimental quantum error correction with high fidelity
International Nuclear Information System (INIS)
Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond
2011-01-01
More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ∼ε 2 . In the current work we reproduce a similar experiment using control techniques that have been since developed, such as the pulses generated by gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
Spoken Word Recognition Errors in Speech Audiometry: A Measure of Hearing Performance?
Directory of Open Access Journals (Sweden)
Martine Coene
2015-01-01
Full Text Available This report provides a detailed analysis of incorrect responses from an open-set spoken word-repetition task which is part of a Dutch speech audiometric test battery. Single-consonant confusions were analyzed from 230 normal hearing participants in terms of the probability of choice of a particular response on the basis of acoustic-phonetic, lexical, and frequency variables. The results indicate that consonant confusions are better predicted by lexical knowledge than by acoustic properties of the stimulus word. A detailed analysis of the transmission of phonetic features indicates that “voicing” is best preserved whereas “manner of articulation” yields most perception errors. As consonant confusion matrices are often used to determine the degree and type of a patient’s hearing impairment, to predict a patient’s gain in hearing performance with hearing devices and to optimize the device settings in view of maximum output, the observed findings are highly relevant for the audiological practice. Based on our findings, speech audiometric outcomes provide a combined auditory-linguistic profile of the patient. The use of confusion matrices might therefore not be the method best suited to measure hearing performance. Ideally, they should be complemented by other listening task types that are known to have less linguistic bias, such as phonemic discrimination.
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. Any adaptation of the quantum error correction code or its implementation circuit is not required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. surface code. A Gaussian processes algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
New decoding methods of interleaved burst error-correcting codes
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Error-correction coding for digital communications
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.
Polynomial theory of error correcting codes
Cancellieri, Giovanni
2015-01-01
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
Joint Schemes for Physical Layer Security and Error Correction
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Unitary Application of the Quantum Error Correction Codes
International Nuclear Information System (INIS)
You Bo; Xu Ke; Wu Xiaohua
2012-01-01
For applying the perfect code to transmit quantum information over a noise channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct the arbitrary single-qubit errors.
Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation
International Nuclear Information System (INIS)
Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.
2003-01-01
The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and an according complimentary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
Energy efficiency of error correction on wireless systems
Havinga, Paul J.M.
1999-01-01
Since high error rates are inevitable to the wireless environment, energy-efficient error-control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.
Quantum error correction with spins in diamond
Cramer, J.
2016-01-01
Digital information based on the laws of quantum mechanics promisses powerful new ways of computation and communication. However, quantum information is very fragile; inevitable errors continuously build up and eventually all information is lost. Therefore, realistic large-scale quantum information
Operator quantum error-correcting subsystems for self-correcting quantum memories
International Nuclear Information System (INIS)
Bacon, Dave
2006-01-01
The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures
Error correcting circuit design with carbon nanotube field effect transistors
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error correcting circuit based on (7, 4) Hamming code is designed and implemented with carbon nanotube field effect transistors, and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit application is proposed, and its error correction capability is analyzed. Performance of circuits implemented with CNTFETs and traditional MOSFETs respectively is also compared, and the former shows a 34.4% decrement of layout area and a 56.9% decrement of power consumption.
VLSI architectures for modern error-correcting codes
Zhang, Xinmiao
2015-01-01
Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI
Errors and Correction of Precipitation Measurements in China
Institute of Scientific and Technical Information of China (English)
REN Zhihua; LI Mingqin
2007-01-01
In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements are studied in this paper.A correction method, especially for wind-induced errors, is developed. The results prove that a correlation of power function exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
Continuous quantum error correction for non-Markovian decoherence
International Nuclear Information System (INIS)
Oreshkov, Ognyan; Brun, Todd A.
2007-01-01
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics
Error Correction for Non-Abelian Topological Quantum Computation
Directory of Open Access Journals (Sweden)
James R. Wootton
2014-03-01
Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_{3}. This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.
Correcting a Persistent Manhattan Project Statistical Error
Reed, Cameron
2011-04-01
In his 1987 autobiography, Major-General Kenneth Nichols, who served as the Manhattan Project's ``District Engineer'' under General Leslie Groves, related that when the Clinton Engineer Works at Oak Ridge, TN, was completed it was consuming nearly one-seventh (~ 14%) of the electric power being generated in the United States. This statement has been reiterated in several editions of a Department of Energy publication on the Manhattan Project. This remarkable claim has been checked against power generation and consumption figures available in Manhattan Engineer District documents, Tennessee Valley Authority records, and historical editions of the Statistical Abstract of the United States. The correct figure is closer to 0.9% of national generation. A speculation will be made as to the origin of Nichols' erroneous one-seventh figure.
Analysis of error-correction constraints in an optical disk
Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David
1996-07-01
The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
Quantum error-correcting code for ternary logic
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2 ×2 ) -dimensional and (3 ×3 ) -dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 ×2 ) -dimensional as well as (3 ×3 ) -dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.
Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian
2016-04-01
While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.
Repeat-aware modeling and correction of short read errors.
Yang, Xiao; Aluru, Srinivas; Dorman, Karin S
2011-02-15
High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
Entanglement renormalization, quantum error correction, and bulk causality
Energy Technology Data Exchange (ETDEWEB)
Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2017-04-07
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Directory of Open Access Journals (Sweden)
Chitra Jayathilake
2013-01-01
Full Text Available Error correction in ESL (English as a Second Language classes has been a focal phenomenon in SLA (Second Language Acquisition research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in Universities.
Autonomous Quantum Error Correction with Application to Quantum Metrology
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin- or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a build-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
On the Design of Error-Correcting Ciphers
Directory of Open Access Journals (Sweden)
Mathur Chetan Nanjunda
2006-01-01
Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example, 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a AES (Rijndael followed by Reed-Solomon codes, (b Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1 design of a new joint error-correction-encryption system, (2 design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes for use in the HD-cipher, (3 mathematical properties of these codes, (4 methods for construction of the codes, (5 bounds on the error-correcting capacity of the HD-cipher, (6 mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7 experimental comparison
An investigation of error correcting techniques for OMV and AXAF
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concantinated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and number of un-correctable errors were calculated for each data set before testing.
Error correction and degeneracy in surface codes suffering loss
International Nuclear Information System (INIS)
Stace, Thomas M.; Barrett, Sean D.
2010-01-01
Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.
Software for Correcting the Dynamic Error of Force Transducers
Directory of Open Access Journals (Sweden)
Naoki Miyashita
2014-07-01
Full Text Available Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model are evaluated using the Levitation Mass Method (LMM, in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all the three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
Correcting for particle counting bias error in turbulent flow
Edwards, R. V.; Baratuci, W.
1985-01-01
An ideal seeding device is proposed generating particles that exactly follow the flow out are still a major source of error, i.e., with a particle counting bias wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is not universal agreement as to the acceptability of any one method. In particular it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
NP-hardness of decoding quantum error-correction codes
International Nuclear Information System (INIS)
Hsieh, Min-Hsiu; Le Gall, Francois
2011-01-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that general quantum decoding problem is NP-hard regardless of the quantum codes being degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the fi…nite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10 -5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10 -6 in a search for an electric dipole moment using a storage ring.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Energy Technology Data Exchange (ETDEWEB)
Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physica Department of ' Tor Vergata' University, Rome (Italy); INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. ' Roma tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10{sup -5} for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10{sup -6} in a search for an electric dipole moment using a storage ring.
Quantum Error Correction and Fault Tolerant Quantum Computing
Gaitan, Frank
2008-01-01
It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impo
Scalable error correction in distributed ion trap computers
International Nuclear Information System (INIS)
Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.
2006-01-01
A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment
Quantum Information Processing and Quantum Error Correction An Engineering Approach
Djordjevic, Ivan
2012-01-01
Quantum Information Processing and Quantum Error Correction is a self-contained, tutorial-based introduction to quantum information, quantum computation, and quantum error-correction. Assuming no knowledge of quantum mechanics and written at an intuitive level suitable for the engineer, the book gives all the essential principles needed to design and implement quantum electronic and photonic circuits. Numerous examples from a wide area of application are given to show how the principles can be implemented in practice. This book is ideal for the electronics, photonics and computer engineer
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics
International Nuclear Information System (INIS)
Sarovar, Mohan; Young, Kevin C
2013-01-01
While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)
Passive quantum error correction of linear optics networks through error averaging
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Method for decoupling error correction from privacy amplification
Energy Technology Data Exchange (ETDEWEB)
Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King' s College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)
2003-04-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
Method for decoupling error correction from privacy amplification
International Nuclear Information System (INIS)
Lo, Hoi-Kwong
2003-01-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
Energy Efficient Error-Correcting Coding for Wireless Systems
Shao, X.
2010-01-01
The wireless channel is a hostile environment. The transmitted signal does not only suffers multi-path fading but also noise and interference from other users of the wireless channel. That causes unreliable communications. To achieve high-quality communications, error correcting coding is required
Quantum algorithms and quantum maps - implementation and error correction
International Nuclear Information System (INIS)
Alber, G.; Shepelyansky, D.
2005-01-01
Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)
Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2013-01-01
In this paper we propose a novel scheme for on the fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...
Direct cointegration testing in error-correction models
F.R. Kleibergen (Frank); H.K. van Dijk (Herman)
1994-01-01
textabstractAbstract An error correction model is specified having only exact identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The
Controlling qubit drift by recycling error correction syndromes
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction
Directory of Open Access Journals (Sweden)
Chi-Huang Shih
2012-01-01
Full Text Available With the wide deployment of Internet Protocol (IP infrastructure and rapid development of digital technologies, Internet Protocol Television (IPTV has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure the secure and reliable transmission for IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC including the FEC-encoded redundancies and the limitation of error correction capacity to protect the multimedia packets against the malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme obtains similar performance compared with the joint encryption and FEC scheme.
Entanglement and Quantum Error Correction with Superconducting Qubits
Reed, Matthew
2015-03-01
Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These ``transmon'' qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give an general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is that of the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
Neural network error correction for solving coupled ordinary differential equations
Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.
1992-01-01
A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to correctly learn the error generated by, for example, Runge-Kutta on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model and the calculations are discussed.
Atmospheric Error Correction of the Laser Beam Ranging
Directory of Open Access Journals (Sweden)
J. Saydi
2014-01-01
Full Text Available Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase the laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. Through the present research, the atmospheric effects on the laser beam were investigated by using the principles of laser ranging. Atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths through the weather conditions of Tehran, Isfahan, and Bushehr in Iran since March 2012 to March 2013. Through the present research the atmospheric correction was computed for meteorological data in base of monthly mean. Of course, the meteorological data were received from meteorological stations in Tehran, Isfahan, and Bushehr. Atmospheric correction was calculated for 11, 100, and 200 kilometers laser beam propagations under 30°, 60°, and 90° rising angles for each propagation. The results of the study showed that in the same months and beam emission angles, the atmospheric correction was most accurate for 10.6 micron wavelength. The laser ranging error was decreased by increasing the laser emission angle. The atmospheric correction with two Marini-Murray and Mendes-Pavlis models for 0.532 nm was compared.
Neural network decoder for quantum error correcting codes
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
Testing and inference in nonlinear cointegrating vector error correction models
DEFF Research Database (Denmark)
Kristensen, D.; Rahbek, A.
2013-01-01
We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under...... the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new(uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...... asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests motivating the introduction of bootstrap...
Error-finding and error-correcting methods for the start-up of the SLC
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.
1987-02-01
During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelertor physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicist's time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper
Error-correcting pairs for a public-key cryptosystem
International Nuclear Information System (INIS)
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-01-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography are the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece where he designed one of the most efficient Public-Key encryption schemes with exceptionally strong security guarantees and other desirable properties that still resist to attacks based on Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size. Some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high performance t -bounded decoding algorithms which is achieved in the case the code has a t -error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS codes, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these Public-Key Cryptosystems is not only based on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve efficiently an error-correcting pair. In this paper, the class of codes with a t -ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t -error correcting pair. (paper)
Forecasting the price of gold: An error correction approach
Directory of Open Access Journals (Sweden)
Kausik Gangopadhyay
2016-03-01
Full Text Available Gold prices in the Indian market may be influenced by a multitude of factors such as the value of gold in investment decisions, as an inflation hedge, and in consumption motives. We develop a model to explain and forecast gold prices in India, using a vector error correction model. We identify investment decision and inflation hedge as prime movers of the data. We also present out-of-sample forecasts of our model and the related properties.
Design of nanophotonic circuits for autonomous subsystem quantum error correction
Energy Technology Data Exchange (ETDEWEB)
Kerckhoff, J; Pavlichin, D S; Chalabi, H; Mabuchi, H, E-mail: jkerc@stanford.edu [Edward L Ginzton Laboratory, Stanford University, Stanford, CA 94305 (United States)
2011-05-15
We reapply our approach to designing nanophotonic quantum memories in order to formulate an optical network that autonomously protects a single logical qubit against arbitrary single-qubit errors. Emulating the nine-qubit Bacon-Shor subsystem code, the network replaces the traditionally discrete syndrome measurement and correction steps by continuous, time-independent optical interactions and coherent feedback of unitarily processed optical fields.
Equation-Method for correcting clipping errors in OFDM signals.
Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry
2016-01-01
Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can, correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963 equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM, including self-organization, self-adaptive and self-learning, the dynamic information contained in the historical data can be identified and extracted by computer automatically. Thereby, a new approach is proposed to estimate model errors based on EM in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of the statistics and dynamics to certain extent.
Coordinated joint motion control system with position error correction
Danko, George L.
2016-04-05
Disclosed are an articulated hydraulic machine supporting, control system and control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.
Measurement Error Correction for Predicted Spatiotemporal Air Pollution Exposures.
Keller, Joshua P; Chang, Howard H; Strickland, Matthew J; Szpiro, Adam A
2017-05-01
Air pollution cohort studies are frequently analyzed in two stages, first modeling exposure then using predicted exposures to estimate health effects in a second regression model. The difference between predicted and unobserved true exposures introduces a form of measurement error in the second stage health model. Recent methods for spatial data correct for measurement error with a bootstrap and by requiring the study design ensure spatial compatibility, that is, monitor and subject locations are drawn from the same spatial distribution. These methods have not previously been applied to spatiotemporal exposure data. We analyzed the association between fine particulate matter (PM2.5) and birth weight in the US state of Georgia using records with estimated date of conception during 2002-2005 (n = 403,881). We predicted trimester-specific PM2.5 exposure using a complex spatiotemporal exposure model. To improve spatial compatibility, we restricted to mothers residing in counties with a PM2.5 monitor (n = 180,440). We accounted for additional measurement error via a nonparametric bootstrap. Third trimester PM2.5 exposure was associated with lower birth weight in the uncorrected (-2.4 g per 1 μg/m difference in exposure; 95% confidence interval [CI]: -3.9, -0.8) and bootstrap-corrected (-2.5 g, 95% CI: -4.2, -0.8) analyses. Results for the unrestricted analysis were attenuated (-0.66 g, 95% CI: -1.7, 0.35). This study presents a novel application of measurement error correction for spatiotemporal air pollution exposures. Our results demonstrate the importance of spatial compatibility between monitor and subject locations and provide evidence of the association between air pollution exposure and birth weight.
Topological quantum error correction in the Kitaev honeycomb model
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
[Incidence of refractive errors with corrective aids subsequent selection].
Benes, P; Synek, S; Petrová, S; Sokolová, Sidlová J; Forýtková, L; Holoubková, Z
2012-02-01
This study follows the occurrence of refractive errors in population and the possible selection of the appropriate type of corrective aids. Objective measurement and subsequent determination of the subjective refraction of the eye is on essential act in opotmetric practice. The file represented by 615 patients (1230 eyes) is divided according to the refractive error of myopia, hyperopia and as a control group are listed emetropic clients. The results of objective and subjective values of refraction are compared and statistically processed. The study included 615 respondents. To determine the objective refraction the autorefraktokeratometer with Placido disc was used and the values of spherical and astigmatic correction components, including the axis were recorded. These measurements were subsequently verified and tested subjectively using the trial lenses and the projection optotype to the normal investigative distance of 5 meters. After this the appropriate corrective aids were then recommended. Group I consists of 123 men and 195 women with myopia (n = 635) of clients with an average age 39 +/- 18,9 years. Objective refraction - sphere: -2,57 +/- 2,46 D, cylinder: -1,1 +/- 1,01 D, axis of: 100 degrees +/- 53,16 degrees. Subjective results are as follows--the value of sphere: -2,28 +/- 2,33 D, cylinder -0,63 +/- 0,80 D, axis of: 99,8 degrees +/- 56,64 degrees. Group II is represented hyperopic clients and consists of 67 men and 107 women (n = 348). The average age is 58,84 +/- 16,73 years. Objective refraction has values - sphere: +2,81 +/- 2,21 D, cylinder: -1,0 +/- 0,94 D; axis 95 degree +/- 45,4 degrees. Subsequent determination of subjective refraction has the following results - sphere: +2,28 +/- 2,06 D; cylinder: -0,49 +/- 0,85 D, axis of: 95,9 degrees +/- 46,4 degrees. Group III consists from emetropes whose final minimum viasual acuity was Vmin = 1,0 (5/5) or better. Overall, this control group is represented 52 males and 71 females (n = 247). The average
Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints
Directory of Open Access Journals (Sweden)
Fanqi Meng
2017-12-01
Full Text Available In order to reduce overestimations of worst-case execution time (WCET, in this article, we firstly report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach which has three basic steps. The first step is to locate the worst-case execution path (WCEP in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops from the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of corrected WCET is no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
Distance error correction for time-of-flight cameras
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
A new controller for the JET error field correction coils
International Nuclear Information System (INIS)
Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.
2005-01-01
This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller increasing the flexibility and the optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller will be highlighted, such as its very useful trigger logic interface, which allows in principle exploring various error field experiment scenarios
Cause of depth error of borehole logging and its correction
International Nuclear Information System (INIS)
Iida, Yoshimasa; Ikeda, Koki; Tsuruta, Tadahiko; Ito, Hiroaki; Goto, Junichi.
1996-01-01
Data by borehole logging can be used for detailed analysis of geological structures. Depths measured by portable borehole loggers commonly shift a few meters on the level of 400 to 500 meters deep. Therefore, the cause of depth error has to be recognized to make proper corrections for detailed structural analysis. Correlation between depths of drill core and in-rod radiometric logging has been performed in detail on exploration drill holes in the Athabasca basin, Canada. As a result, a common tendency of logging depth shift has been recognized, and an empirical formula (quadratic equation) for this has been obtained. The physical meaning of the formula and the cause of the depth error has been considered. (author)
Random access to mobile networks with advanced error correction
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme that shows a quite low implementation complexity. For the example of an interleaved Rayleigh channel and soft decision utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots being equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and "the nature of language learning" (p. 328, emphasis…
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \
Quantum secret sharing based on quantum error-correcting codes
International Nuclear Information System (INIS)
Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu
2011-01-01
Quantum secret sharing(QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relation- ships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long- run cointegration parameters, and the short-run parameters. Asymp- totic theory is provided for these and it is discussed to what extend asymptotic normality and mixed normaity can be found. A simulation study...
PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM
Directory of Open Access Journals (Sweden)
David Kaluge
2017-03-01
Full Text Available This research was to find the effect of profitability, rate of interest, GDP, and foreign exchange rate on stockprices. Approach used was error correction model. Profitability was indicated by variables EPS, and ROIwhile the SBI (1 month was used for representing interest rate. This research found that all variablessimultaneously affected the stock prices significantly. Partially, EPS, PER, and Foreign Exchange rate significantlyaffected the prices both in short run and long run. Interestingly that SBI and GDP did not affect theprices at all. The variable of ROI had only long run impact on the prices.
ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS
Directory of Open Access Journals (Sweden)
K. Jacobsen
2016-06-01
Full Text Available The geometry of digital height models (DHM determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC. Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3 has a limited calibration accuracy and just an attitude recording of 4 Hz which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available as also detailed satellite orientation information. There is a tendency of systematic deformation at a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the not satisfying leveling of height models, but also low frequency height deformations can be seen. A tilt of the DHM by theory can be eliminated by ground control points (GCP, but often the GCP accuracy and distribution is not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to not optimal DHM leveling. Supported by reference height models better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM digital surface model (DSM or the new AW3D30 DSM, based on ALOS
Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage
Directory of Open Access Journals (Sweden)
Juha Partala
2017-01-01
Full Text Available A distributed storage system (DSS is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix G^ into two parts G1^ and G2^, it is infeasible to distinguish G2^ from random and to find a statistical connection between G1^ and G2^. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.
Topics in quantum cryptography, quantum error correction, and channel simulation
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
Error field and its correction strategy in tokamaks
International Nuclear Information System (INIS)
In, Yongkyoon
2014-01-01
While error field correction (EFC) is to minimize the unwanted kink-resonant non-axisymmetric components, resonant magnetic perturbation (RMP) application is to maximize the benefits of pitch-resonant non-axisymmetric components. As the plasma response against non-axisymmetric field increases with beta increase, feedback-controlled EFC is a more promising EFC strategy in reactor-relevant high-beta regimes. Nonetheless, various physical aspects and uncertainties associated with EFC should be taken into account and clarified in the terms of multiple low-n EFC and multiple MHD modes, in addition to the compatibility issue with RMP application. Such a multi-faceted view of EFC strategy is briefly discussed. (author)
Editing disulphide bonds: error correction using redox currencies.
Ito, Koreaki
2010-01-01
The disulphide bond-introducing enzyme of bacteria, DsbA, sometimes oxidizes non-native cysteine pairs. DsbC should rearrange the resulting incorrect disulphide bonds into those with correct connectivity. DsbA and DsbC receive oxidizing and reducing equivalents, respectively, from respective redox components (quinones and NADPH) of the cell. Two mechanisms of disulphide bond rearrangement have been proposed. In the redox-neutral 'shuffling' mechanism, the nucleophilic cysteine in the DsbC active site forms a mixed disulphide with a substrate and induces disulphide shuffling within the substrate part of the enzyme-substrate complex, followed by resolution into a reduced enzyme and a disulphide-rearranged substrate. In the 'reduction-oxidation' mechanism, DsbC reduces those substrates with wrong disulphides so that DsbA can oxidize them again. In this issue of Molecular Microbiology, Berkmen and his collaborators show that a disulphide reductase, TrxP, from an anaerobic bacterium can substitute for DsbC in Escherichia coli. They propose that the reduction-oxidation mechanism of disulphide rearrangement can indeed operate in vivo. An implication of this work is that correcting errors in disulphide bonds can be coupled to cellular metabolism and is conceptually similar to the proofreading processes observed with numerous synthesis and maturation reactions of biological macromolecules.
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUANGE LEARNING
Directory of Open Access Journals (Sweden)
Ketut Santi Indriani
2015-05-01
Full Text Available The process of second language (L2 learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without giving cues is one of the examples. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i what speech errors the L2 speakers are able to identify, (ii of the errors identified, what speech errors the L2 speakers are able to self correct and (iii whether the self-correction of speech error are able to immediately improve the L2 learning. Based on the data analysis, it was found that the majority identified errors are related to noun (plurality, subject-verb agreement, grammatical structure and pronunciation.. B2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% errors could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric errors correction method for CNC- controlled multi-axis systems (machine-tools, CMMs etc.). The Kalman’s concept of “Control and Observation” is used. A versatile multi-function laser interferometer is used as Observer in order to measure machine’s error functions. A systematic error map of machine’s workspace is produced based on error functions measurements. The error map results into error correction strategy. The article proposes a new method of error correction strategy forming. The method is based on error distribution within machine’s workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within maximal workspace zone. The results are confirmed by error correction of precision CNC machine-tools.
Directory of Open Access Journals (Sweden)
Nazelie Kassabian
2014-06-01
Full Text Available Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field, towards the use and exploitation of Global Navigation Satellite System (GNSS signals and GNSS augmentation systems in view of lower railway track equipments and maintenance costs, that is a priority to sustain the investments for modernizing the local and regional lines most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs. This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements are simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
Systematic Error of Acoustic Particle Image Velocimetry and Its Correction
Directory of Open Access Journals (Sweden)
Mickiewicz Witold
2014-08-01
Full Text Available Particle Image Velocimetry is getting more and more often the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with low sampling rate (about 15Hz can be applied to visualize the acoustic field using the acquisition synchronized to the excitation signal. Such phase-locked PIV technique is described and used in experiments presented in the paper. The main goal of research was to propose a model of PIV systematic error due to non-zero time interval between acquisitions of two images of the examined sound field seeded with tracer particles, what affects the measurement of complex acoustic signals. Usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in observation of sound fields excited with multi-tonal or band-limited noise signals.
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
Energy efficiency of error correcting mechanisms for wireless communications
Havinga, Paul J.M.
We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable to the wireless environment, energy efficient error control is an important issue for mobile computing systems. Although good designed retransmission schemes can be optimal
Performance Errors in Weight Training and Their Correction.
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…
First order error corrections in common introductory physics experiments
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students and not much thought is paid to the source of these errors. However, paying attention to these factors that give rise to errors help students make better physics models and understand physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiment and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and refined physical models and understand physics concepts in greater detail. We thank Clarion University undergraduate student grant for financial support involving this project.
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information, typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-bases errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?
Directory of Open Access Journals (Sweden)
Magdenko A. Y.
2014-07-01
Full Text Available The article is devoted to the institute of cadastral error. Some questions and problems of cadastral error corrections are considered. The material is based on current legislation and judicial practice.
Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE
Directory of Open Access Journals (Sweden)
Patrick SAINT-DIZIER
2015-12-01
Full Text Available In this paper, we investigate the notion of error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity in the detection and the correction of errors related to Constrained Natural Language (CNL principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.
Reed-Solomon error-correction as a software patch mechanism.
Energy Technology Data Exchange (ETDEWEB)
Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2013-11-01
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
Directory of Open Access Journals (Sweden)
Zbigniew Staroszczyk
2014-12-01
Full Text Available [b]Abstract[/b]. In the paper, the calibrating method for error correction in transfer function determination with the use of DSP has been proposed. The correction limits/eliminates influence of transfer function input/output signal conditioners on the estimated transfer functions in the investigated object. The method exploits frequency domain conditioning paths descriptor found during training observation made on the known reference object.[b]Keywords[/b]: transfer function, band extension, error correction, phase errors
Extending Lifetime of Wireless Sensor Networks using Forward Error Correction
DEFF Research Database (Denmark)
Donapudi, S U; Obel, C O; Madsen, Jan
2006-01-01
Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...
Detecting and correcting partial errors: Evidence for efficient control without conscious access.
Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B
2014-09-01
Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Transfer Error and Correction Approach in Mobile Network
Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou
With the development of information technology and social progress, human demand for information has become increasingly diverse, wherever and whenever people want to be able to easily, quickly and flexibly via voice, data, images and video and other means to communicate. Visual information to the people direct and vivid image, image / video transmission also been widespread attention. Although the third generation mobile communication systems and the emergence and rapid development of IP networks, making video communications is becoming the main business of the wireless communications, however, the actual wireless and IP channel will lead to error generation, such as: wireless channel multi- fading channels generated error and blocking IP packet loss and so on. Due to channel bandwidth limitations, the video communication compression coding of data is often beyond the data, and compress data after the error is very sensitive to error conditions caused a serious decline in image quality.
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
Opportunistic error correction for mimo-ofdm: from theory to practice
Shao, X.; Slump, Cornelis H.
Opportunistic error correction based on fountain codes is especially designed for the MIMOOFDM system. The key point of this new method is the tradeoff between the code rate of error correcting codes and the number of sub-carriers in the channel vector to be discarded. By transmitting one
An upper bound on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2000-01-01
The number of errors that a convolutional codes can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.......The number of errors that a convolutional codes can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length....
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2 )-interaction based quantum computation in multimode Fock bases: the χ(2 ) parity-check code, the χ(2 ) embedded error-correcting code, and the χ(2 ) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2 ) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2 ) binomial code is of special interest because, with m ≤N identified from channel monitoring, it can correct m -photon-loss errors, or m -photon-gain errors, or (m -1 )th -order dephasing errors using logical qudits that are encoded in O (N ) photons. In comparison, other bosonic QEC codes require O (N2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2 ) parity-check code and the χ(2 ) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2 ) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Quantum mean-field decoding algorithm for error-correcting codes
International Nuclear Information System (INIS)
Inoue, Jun-ichi; Saika, Yohei; Okada, Masato
2009-01-01
We numerically examine a quantum version of TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check the usefulness to retrieve the original bit-sequence (message) with a finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).
Pencil kernel correction and residual error estimation for quality-index-based dose calculations
International Nuclear Information System (INIS)
Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael
2006-01-01
Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
Upper bounds on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2004-01-01
We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error...
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Spatially coupled low-density parity-check error correction for holographic data storage
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB and over 10-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that 8 × 10-2 can be corrected, furthermore it works effectively and shows good error correctability.
New laws of practice for learning and error correction
International Nuclear Information System (INIS)
Duffey, R.B.
2008-01-01
Relevant to design, operation and safety is the determination of risk and error rates. We provide the detailed comparison of our new learning and statistical theories for system outcome data with the traditional analysis of the learning curves obtained from tests with individual human subjects. The results provide a consistent predictive basis for the learning trends emerging all the way from timescales of many years in large technological system outcomes to actions that occur in about a tenth of a second for individual human decisions. Hence, we demonstrate both the common influence of the human element and the importance of statistical reasoning and analysis. (author)
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
Remote one-qubit information concentration and decoding of operator quantum error-correction codes
International Nuclear Information System (INIS)
Hsu Liyi
2007-01-01
We propose the general scheme of remote one-qubit information concentration. To achieve the task, the Bell-correlated mixed states are exploited. In addition, the nonremote one-qubit information concentration is equivalent to the decoding of the quantum error-correction code. Here we propose how to decode the stabilizer codes. In particular, the proposed scheme can be used for the operator quantum error-correction codes. The encoded state can be recreated on the errorless qubit, regardless how many bit-flip errors and phase-flip errors have occurred
DEFF Research Database (Denmark)
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo
2016-01-01
radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis......A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo...
Scanner qualification with IntenCD based reticle error correction
Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan
2010-03-01
Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance matrices during factory acceptance test (FAT) and later on during site acceptance test (SAT) is crucial for minimizing the cycle time for pre and post production-start activities. If done effectively, the matrices of base line performance established during the SAT are used as a reference for scanner performance and fleet matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved with those cycles. Discrete mask measurement techniques are currently in use to create across-mask CDU maps. By subtracting these maps from their final wafer measurement CDU map counterparts, it is possible to assess the real scanner induced printed errors within certain limitations. The current discrete measurement methods are time consuming and some techniques also overlook mask based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. Applied Materials Aera2TM mask inspection tool with IntenCDTM technology can scan the mask at high speed, offer full mask coverage and accurate assessment of all masks induced source of errors simultaneously, making it beneficial for scanner qualifications and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map the mask induced CD non uniformity. We will present the results of six scanners in production and discuss the benefits of the new method.
Method and apparatus for optical phase error correction
DeRose, Christopher; Bender, Daniel A.
2014-09-02
The phase value of a phase-sensitive optical device, which includes an optical transport region, is modified by laser processing. At least a portion of the optical transport region is exposed to a laser beam such that the phase value is changed from a first phase value to a second phase value, where the second phase value is different from the first phase value. The portion of the optical transport region that is exposed to the laser beam can be a surface of the optical transport region or a portion of the volume of the optical transport region. In an embodiment of the invention, the phase value of the optical device is corrected by laser processing. At least a portion of the optical transport region is exposed to a laser beam until the phase value of the optical device is within a specified tolerance of a target phase value.
Backtracking dynamics of RNA polymerase: pausing and error correction
Sahoo, Mamata; Klumpp, Stefan
2013-09-01
Transcription by RNA polymerases is frequently interrupted by pauses. One mechanism of such pauses is backtracking, where the RNA polymerase translocates backward with respect to both the DNA template and the RNA transcript, without shortening the transcript. Backtracked RNA polymerases move in a diffusive fashion and can return to active transcription either by diffusive return to the position where backtracking was initiated or by cleaving the transcript. The latter process also provides a mechanism for proofreading. Here we present some exact results for a kinetic model of backtracking and analyse its impact on the speed and the accuracy of transcription. We show that proofreading through backtracking is different from the classical (Hopfield-Ninio) scheme of kinetic proofreading. Our analysis also suggests that, in addition to contributing to the accuracy of transcription, backtracking may have a second effect: it attenuates the slow down of transcription that arises as a side effect of discriminating between correct and incorrect nucleotides based on the stepping rates.
High-speed parallel forward error correction for optical transport networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2010-01-01
This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology....
Highly accurate fluorogenic DNA sequencing with information theory-based error correction.
Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi
2017-12-01
Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.
Machine-learning-assisted correction of correlated qubit errors in a topological code
Directory of Open Access Journals (Sweden)
Paul Baireuther
2018-01-01
Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X and phase-flip (Z errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.
Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring
Energy Technology Data Exchange (ETDEWEB)
Bunch, S.C.; Holmes, J.
2004-01-01
We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10-4 in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
International Nuclear Information System (INIS)
Horoshko, D B
2007-01-01
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (fifth seminar in memory of d.n. klyshko)
Investigation of Ionospheric Spatial Gradients for Gagan Error Correction
Chandra, K. Ravi
In India, Indian Space Research Organization (ISRO) has established with an objective to develop space technology and its application to various national tasks. The national tasks include, establishment of major space systems such as Indian National Satellites (INSAT) for communication, television broadcasting and meteorological services, Indian Remote Sensing Satellites (IRS), etc. Apart from these, to cater to the needs of civil aviation applications, GPS Aided Geo Augmented Navigation (GAGAN) system is being jointly implemented along with Airports Authority of India (AAI) over the Indian region. The most predominant parameter affecting the navigation accuracy of GAGAN is ionospheric delay which is a function of total number of electrons present in one square meter cylindrical cross-sectional area in the line of site direction between the satellite and the user on the earth, i.e. Total Electron Content (TEC). In the equatorial and low latitude regions such as India, TEC is often quite high with large spatial gradients. Carrier phase data from the GAGAN network of Indian TEC Stations is used for estimating and identifying ionospheric spatial gradients inmultiple viewing directions. In this paper amongst the satellite signals arriving in multipledirections,Vertical ionospheric gradients (σVIG) are calculated, inturn spatial ionospheric gradients are identified. In addition, estimated temporal gradients, i.e. rate of TEC Index is also compared. These aspects which contribute to errors can be treated for improved GAGAN system performance.
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses.The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
Gold price effect on stock market: A Markov switching vector error correction approach
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a popular precious metal where the demand is driven not only for practical use but also as a popular investments commodity. While stock market represents a country growth, thus gold price effect on stock market behavior as interest in the study. Markov Switching Vector Error Correction Models are applied to analysis the relationship between gold price and stock market changes since real financial data always exhibit regime switching, jumps or missing data through time. Besides, there are numerous specifications of Markov Switching Vector Error Correction Models and this paper will compare the intercept adjusted Markov Switching Vector Error Correction Model and intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine the best model representation in capturing the transition of the time series. Results have shown that gold price has a positive relationship with Malaysia, Thailand and Indonesia stock market and a two regime intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model is able to provide the more significance and reliable result compare to intercept adjusted Markov Switching Vector Error Correction Models.
DeCesare, A; Secanell, M; Lagravère, M O; Carey, J
2013-01-01
The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
Backtracking dynamics of RNA polymerase: pausing and error correction
International Nuclear Information System (INIS)
Sahoo, Mamata; Klumpp, Stefan
2013-01-01
Transcription by RNA polymerases is frequently interrupted by pauses. One mechanism of such pauses is backtracking, where the RNA polymerase translocates backward with respect to both the DNA template and the RNA transcript, without shortening the transcript. Backtracked RNA polymerases move in a diffusive fashion and can return to active transcription either by diffusive return to the position where backtracking was initiated or by cleaving the transcript. The latter process also provides a mechanism for proofreading. Here we present some exact results for a kinetic model of backtracking and analyse its impact on the speed and the accuracy of transcription. We show that proofreading through backtracking is different from the classical (Hopfield–Ninio) scheme of kinetic proofreading. Our analysis also suggests that, in addition to contributing to the accuracy of transcription, backtracking may have a second effect: it attenuates the slow down of transcription that arises as a side effect of discriminating between correct and incorrect nucleotides based on the stepping rates. (paper)
CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD
Directory of Open Access Journals (Sweden)
BUSUIOCEANU STELIANA
2013-08-01
Full Text Available The accounting information is reliable when it does not contain significant errors, is not biasedand accurately represents the transactions and events. In the light of the regulations complying with Europeandirectives, the information is significant if its omission or wrong presentation may influence the decisions users makebased on annual financial statements. Given that the professional practice sees errors in registering or interpretinginformation, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatmentsfor correcting errors in compliance with international references. Thus, the correction of the errors corresponding tothe current period is accomplished based on the retained earnings in the case of significant errors or on the currentearnings when the errors are insignificant. The different situations in the professional practice triggered by errorsrequire both knowledge of regulations and professional rationale to be addressed.
Considerations for pattern placement error correction toward 5nm node
Yaegashi, Hidetami; Oyama, Kenichi; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Koike, Kyohei; Maslow, Mark John; Timoshkov, Vadim; Kiers, Ton; Di Lorenzo, Paolo; Fonseca, Carlos
2017-03-01
Multi-patterning has been adopted widely in high volume manufacturing as 193 immersion extension, and it becomes realistic solution of nano-order scaling. In fact, it must be key technology on single directional (1D) layout design [1] for logic devise and it becomes a major option for further scaling technique in SAQP. The requirement for patterning fidelity control is getting savior more and more, stochastic fluctuation as well as LER (Line edge roughness) has to be micro-scopic observation aria. In our previous work, such atomic order controllability was viable in complemented technique with etching and deposition [2]. Overlay issue form major potion in yield management, therefore, entire solution is needed keenly including alignment accuracy on scanner and detectability on overlay measurement instruments. As EPE (Edge placement error) was defined as the gap between design pattern and contouring of actual pattern edge, pattern registration in single process level must be considerable. The complementary patterning to fabricate 1D layout actually mitigates any process restrictions, however, multiple process step, symbolized as LELE with 193-i, is burden to yield management and affordability. Recent progress of EUV technology is remarkable, and it is major potential solution for such complicated technical issues. EUV has robust resolution limit and it must be definitely strong scaling driver for process simplification. On the other hand, its stochastic variation such like shot noise due to light source power must be resolved with any additional complemented technique. In this work, we examined the nano-order CD and profile control on EUV resist pattern and would introduce excellent accomplishments.
A median filter approach for correcting errors in a vector field
Schultz, H.
1985-01-01
Techniques are presented for detecting and correcting errors in a vector field. These methods employ median filters which are frequently used in image processing to enhance edges and remove noise. A detailed example is given for wind field maps produced by a spaceborne scatterometer. The error detection and replacement algorithm was tested with simulation data from the NASA Scatterometer (NSCAT) project.
Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors
Francois-Éric Racicot; Raymond Théoret; Alain Coen
2006-01-01
In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels as information disclosure, corporate governance and protection of investors.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
Quantum error correction of continuous-variable states against Gaussian noise
Energy Technology Data Exchange (ETDEWEB)
Ralph, T. C. [Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072 (Australia)
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
NxRepair: error correction in de novo sequence assembly using Nextera mate pairs
Directory of Open Access Journals (Sweden)
Rebecca R. Murphy
2015-06-01
Full Text Available Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...
Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness
International Nuclear Information System (INIS)
Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathan E.
2011-01-01
Error field correction results in DIII-D plasmas are presented in various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields become different due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of internal(I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve the reliable predictability in tokamak error field correction.
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes.
Directory of Open Access Journals (Sweden)
Jorge Mauricio Reyes Alcalde
2017-04-01
Full Text Available Physically-Based groundwater Models (PBM, such MODFLOW, are used as groundwater resources evaluation tools supposing that the produced differences (residuals or errors are white noise. However, in the facts these numerical simulations usually show not only random errors but also systematic errors. For this work it has been developed a numerical procedure to deal with PBM systematic errors, studying its structure in order to model its behavior and correct the results by external and complementary means, trough a framework called Complementary Correction Model (CCM. The application of CCM to PBM shows a decrease in local biases, better distribution of errors and reductions in its temporal and spatial correlations, with 73% of reduction in global RMSN over an original PBM. This methodology seems an interesting chance to update a PBM avoiding the work and costs of interfere its internal structure.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Directory of Open Access Journals (Sweden)
Huiliang Cao
2016-01-01
Full Text Available This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC, Quadrature Force Correction (QFC and Coupling Stiffness Correction (CSC methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
SimCommSys: taking the errors out of error-correcting code simulations
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C + + libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder, a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components and use (to simulate specific constructions encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra
2016-12-01
The medication iatrogenic risk is quite unevaluated in neonatology Objective: Assessment of errors that occurred during the preparation and administration of injectable medicines in a neonatal unit in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practice of preparing and administering injectable medications were identified through a standardized data collection form. These practices were compared with summaries of the characteristics of each product (RCP) and the bibliography. One hundred preparations were observed of 13 different drugs. 85 errors during preparations and administration steps were detected. These errors were divided into preparation errors in 59% of cases such as changing the dilution protocol (32%), the use of bad solvent (11%) and administration errors in 41% of cases as errors timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during stages of preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions have been implemented through the establishment of a quality assurance system which consisted of the development of injectable drugs preparation procedures, the introduction of a labeling system and staff training.
Improving transcriptome assembly through error correction of high-throughput sequence reads
Directory of Open Access Journals (Sweden)
Matthew D. MacManes
2013-07-01
Full Text Available The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome must be completed as a prerequisite to further analyses. The accurate reference is critically important as all downstream steps, including estimating transcript abundance are critically dependent on the construction of an accurate reference. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show via use of a simulated and empiric dataset, that applying error correction to sequencing reads has significant positive effects on assembly accuracy, and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.
Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors
Directory of Open Access Journals (Sweden)
Pham Thuy Dung
2016-12-01
Full Text Available The recent yet powerful emergence of E-learning and using online resources in learning EFL (English as a Foreign Language has helped promote learner autonomy in language acquisition including self-correcting their mistakes. This pilot study despite conducted on a modest sample of 25 second year students majoring in Business English at Hanoi Foreign Trade University is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data is collected using a pre-questionnaire and a post-interview aiming to find out the participants’ change in belief and attitude toward learner autonomy in collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students have shifted their belief and attitude toward a more autonomous mode of learning, enjoyed a fair success of using the websites to self-correct the errors and become more confident. The study also yields an implication that a face-to-face training of how to use these online tools is vital to the later confidence and success of the learners
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high speed decoding techniques for extended single and double error correcting RS codes. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES
Directory of Open Access Journals (Sweden)
Maria Corazon Saturnina A Castro
2017-10-01
Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in the writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications to classroom practices. This paper poses the major problem: How do teachers’ perspectives on corrective feedback match the students’ views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These oppositions must be addressed as they have implications to current pedagogical practices which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even in the tertiary level, and further understanding the learning process.
Biometrics encryption combining palmprint with two-layer error correction codes
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, based on combining palmprint with two-layer error correction codes, a novel biometrics encryption method is proposed. Firstly, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolution code to correct burst errors. The second layer uses cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, they are fused together by XORing operation. The information is stored in a smart card. Finally, the original keys extraction process is the information in the smart card XOR the user's palmprint features and then decoded with convolutional and cyclic two-layer code. The experimental results and security analysis show that it can recover the original keys completely. The proposed method is more secure than a single password factor, and has higher accuracy than a single biometric factor.
Is a genome a codeword of an error-correcting code?
Directory of Open Access Journals (Sweden)
Luzinete C B Faria
Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10(18) floating operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies.
Clark, Kevin B
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Directory of Open Access Journals (Sweden)
Kevin Bradley Clark
2013-08-01
Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
Short-term wind power combined forecasting based on error forecast correction
International Nuclear Information System (INIS)
Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng
2016-01-01
Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
2016-08-24
to the seven-qubit Steane code [29] and also represents the smallest instance of a 2D topological color code [30]. Since the realized quantum error...Quantum Computations on a Topologically Encoded Qubit, Science 345, 302 (2014). [17] M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D...Memory, J. Math . Phys. (N.Y.) 43, 4452 (2002). [20] B. M. Terhal, Quantum Error Correction for Quantum Memories, Rev. Mod. Phys. 87, 307 (2015). [21] D
Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne
2018-03-01
When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
-correction setting, which actually seems to capture the wage and price dynamics of many large- scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...... This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error...
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
2009-01-01
-correction setting, which actually seems to capture the wage and price dynamics of many large- scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably......This paper presents a model of employment, distribution and inflation in which a modern error correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error...
Environment-assisted error correction of single-qubit phase damping
International Nuclear Information System (INIS)
Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.
2011-01-01
Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-09-01
Full Text Available The multi-channel measuring instruments with both the classical structure and the isolated one is identified their errors major factors basing on general it metrological properties analysis. Limiting possibilities of the remote automatic method for additive and multiplicative errors correction of measuring instruments with help of code-control measures are studied. For on-site calibration of multi- channel measuring instruments, the portable voltage calibrators structures are suggested and their metrological properties while automatic errors adjusting are analysed. It was experimentally envisaged that unadjusted error value does not exceed ± 1 mV that satisfies most industrial applications. This has confirmed the main approval concerning the possibilities of remote errors self-adjustment as well multi- channel measuring instruments as calibration tools for proper verification.
Fringe order error in multifrequency fringe projection phase unwrapping: reason and correction.
Zhang, Chunwei; Zhao, Hong; Zhang, Lu
2015-11-10
A multifrequency fringe projection phase unwrapping algorithm (MFPPUA) is important to fringe projection profilometry, especially when a discontinuous object is measured. However, a fringe order error (FOE) may occur when MFPPUA is adopted. An FOE will result in error to the unwrapped phase. Although this kind of phase error does not spread, it brings error to the eventual 3D measurement results. Therefore, an FOE or its adverse influence should be obviated. In this paper, reasons for the occurrence of an FOE are theoretically analyzed and experimentally explored. Methods to correct the phase error caused by an FOE are proposed. Experimental results demonstrate that the proposed methods are valid in eliminating the adverse influence of an FOE.
How EFL students can use Google to correct their “untreatable” written errors
Directory of Open Access Journals (Sweden)
Luc Geiller
2014-09-01
Full Text Available This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several “untreatable” written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it is as useless and others arguing that error feedback leads to more grammatical accuracy. In her response to Truscott (1996, Ferris (1999 explains that it would be unreasonable to abolish correction given the present state of knowledge, and that further research needed to focus on which types of errors were more amenable to which types of error correction. In her attempt to respond more effectively to her students’ errors, she made the distinction between “treatable” and “untreatable” ones: the former occur in “a patterned, rule-governed way” and include problems with verb tense or form, subject-verb agreement, run-ons, noun endings, articles, pronouns, while the latter include a variety of lexical errors, problems with word order and sentence structure, including missing and unnecessary words. Substantial research on the use of search engines as a tool for L2 learners has been carried out suggesting that the web plays an important role in fostering language awareness and learner autonomy (e.g. Shei 2008a, 2008b; Conroy 2010. According to Bathia and Richie (2009: 547, “the application of Google for language learning has just begun to be tapped.” Within the framework of this study it was assumed that the students, conversant with digital technologies and using Google and the web on a regular basis, could use various search options and the search results to self-correct their errors instead of relying on their teacher to provide direct feedback. After receiving some in-class training on how to formulate Google queries, the students were asked to use a customized Google search engine limiting searches to 28 information websites to correct up to
International Nuclear Information System (INIS)
Kim, Y.P.
1982-01-01
The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, as well as NRC, have researched lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions. The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island Plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms - determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions
Kromhout, D.
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the
Some errors in respirometry of aquatic breathers: How to avoid and correct for them
DEFF Research Database (Denmark)
STEFFENSEN, JF
1989-01-01
Respirometry in closed and flow-through systems is described with the objective of pointing out problems and sources of errors involved and how to correct for them. Both closed respirometry applied to resting and active animals and intermillent-flow respirometry is described. In addition, flow...
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
textabstractThe authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate
Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping
Á. Piedrafita (Álvaro); J.M. Renes (Joseph)
2017-01-01
textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
Allam, Amin; Kalnis, Panos; Solovyev, Victor
2015-01-01
accurate than previous methods, both in terms of correcting individual-bases errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
Czech Academy of Sciences Publication Activity Database
Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.
2013-01-01
Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188
Fast high resolution ADC based on the flash type with a special error correcting technique
Energy Technology Data Exchange (ETDEWEB)
Xiao-Zhong, Liang; Jing-Xi, Cao [Beijing Univ. (China). Inst. of Atomic Energy
1984-03-01
A fast 12 bits ADC based on the flash type with a simple special error correcting technique which can effectively compensate the level drift of the discriminators and the droop of the stretcher voltage is described. The DNL is comparable with the Wilkinson's ADC and long term drift is far better than its.
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
Czech Academy of Sciences Publication Activity Database
Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.
2013-01-01
Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error -correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy
International Nuclear Information System (INIS)
Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock
2005-01-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle
Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models
Hallin, M.; van den Akker, R.; Werker, B.J.M.
2012-01-01
Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
Retesting the Limits of Data-Driven Learning: Feedback and Error Correction
Crosthwaite, Peter
2017-01-01
An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…
The dynamics of entry, exit and profitability: an error correction approach for the retail industry
M.A. Carree (Martin); A.R. Thurik (Roy)
1994-01-01
textabstractWe develop a two equation error correction model to investigate determinants of and dynamic interaction between changes in profits and number of firms in retailing. An explicit distinction is made between the effects of actual competition among incumbants, new firms competition and
Links between N-modular redundancy and the theory of error-correcting codes
Bobin, V.; Whitaker, S.; Maki, G.
1992-01-01
N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECC's.
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-06-01
Full Text Available Since during design it is impossible to use the uncertainty approach because the measurement results are still absent and as noted the error approach that can be successfully applied taking as true the nominal value of instruments transformation function. Limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied basing on general and special methods of measurement. Principles of measuring circuit maximal symmetry and its minimal reconfiguration are proposed for measurement or/and calibration. It is theoretically justified for the variety of correction methods that minimum additive error of measuring instruments exists under considering the real equivalent parameters of input electronic switches. Terms of self-calibrating and verification the measuring instruments in place are studied.
Correction method for the error of diamond tool's radius in ultra-precision cutting
Wang, Yi; Yu, Jing-chi
2010-10-01
The compensation method for the error of diamond tool's cutting edge is a bottle-neck technology to hinder the high accuracy aspheric surface's directly formation after single diamond turning. Traditional compensation was done according to the measurement result from profile meter, which took long measurement time and caused low processing efficiency. A new compensation method was firstly put forward in the article, in which the correction of the error of diamond tool's cutting edge was done according to measurement result from digital interferometer. First, detailed theoretical calculation related with compensation method was deduced. Then, the effect after compensation was simulated by computer. Finally, φ50 mm work piece finished its diamond turning and new correction turning under Nanotech 250. Testing surface achieved high shape accuracy pv 0.137λ and rms=0.011λ, which approved the new compensation method agreed with predictive analysis, high accuracy and fast speed of error convergence.
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency than existing methods will help better economically utilize emerging long read sequencing technologies.
Neural Network Based Real-time Correction of Transducer Dynamic Errors
Roj, J.
2013-12-01
In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity when using the state variables. It is shown that such a real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for the modeled second order transducer. The most important properties of the neural dynamic error correction, when emphasizing the fundamental advantages and disadvantages, are discussed.
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing a cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR filter structure. The correction method of the interpolation compensation filter coefficients is deduced. A 4GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective to attenuate the spurious spurs and improve the dynamic performance of the system.
Bias correction of bounded location errors in presence-only data
Hefley, Trevor J.; Brost, Brian M.; Hooten, Mevin B.
2017-01-01
Location error occurs when the true location is different than the reported location. Because habitat characteristics at the true location may be different than those at the reported location, ignoring location error may lead to unreliable inference concerning species–habitat relationships.We explain how a transformation known in the spatial statistics literature as a change of support (COS) can be used to correct for location errors when the true locations are points with unknown coordinates contained within arbitrary shaped polygons.We illustrate the flexibility of the COS by modelling the resource selection of Whooping Cranes (Grus americana) using citizen contributed records with locations that were reported with error. We also illustrate the COS with a simulation experiment.In our analysis of Whooping Crane resource selection, we found that location error can result in up to a five-fold change in coefficient estimates. Our simulation study shows that location error can result in coefficient estimates that have the wrong sign, but a COS can efficiently correct for the bias.
A two-dimensional matrix correction for off-axis portal dose prediction errors
International Nuclear Information System (INIS)
Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.
2013-01-01
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone. As
McCarthy, Michael
2009-01-01
This article re-examines the notion of spoken fluency. Fluent and fluency are terms commonly used in everyday, lay language, and fluency, or lack of it, has social consequences. The article reviews the main approaches to understanding and measuring spoken fluency and suggest that spoken fluency is best understood as an interactive achievement, and offers the metaphor of ‘confluence’ to replace the term fluency. Many measures of spoken fluency are internal and monologue-based, whereas evidence...
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732
Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina
2014-04-01
To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.
Error analysis of motion correction method for laser scanning of moving objects
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need of scanning moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing development of very few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Any studies on error modelling or analysis of any of the motion correction methods are found to be lacking in literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes availability of position and orientation information of the moving object which in general can be obtained by installing a POS system on board or by use of some tracking devices. It then uses this information along with laser scanner data to apply correction to laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lie in the shipping industry to scan ships either moving or parked in the sea and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in literature can not be applied to scan the objects mentioned here making the chosen method quite unique. This paper presents some interesting insights in to the functioning of "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components alone and in combination with each other. The analysis can be used to obtain insights in to optimal utilization of available components for achieving the best results.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and performs multiple curve fitting processes to obtain a lower residual result. For the quantitative experiments of Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO 4 ·5H 2 O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual about 18.12-32.64% and improved the correlation about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of the Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
Energy Technology Data Exchange (ETDEWEB)
Kang, Soo Man [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Busan (Korea, Republic of)
2008-09-15
To reduce side effects in image guided radiation therapy (IGRT) and to improve the quality of life of patients, also to meet accurate SETUP condition for patients, the various SETUP correction conditions were compared and evaluated by using on board imager (OBI) during the SETUP. Each 30 cases of the head, the neck, the chest, the belly, and the pelvis in 150 cases of IGRT patients was corrected after confirmation by using OBI at every 2-3 day. Also, the difference of the SETUP through the skin-marker and the anatomic SETUP through the OBI was evaluated. General SETUP errors (Transverse, Coronal, Sagittal) through the OBI at original SETUP position were Head and Neck: 1.3 mm, Brain: 2 mm, Chest: 3 mm, Abdoman: 3.7 mm, Pelvis: 4 mm. The patients with more that 3 mm in the error range were observed in the correction devices and the patient motions by confirming in treatment room. Moreover, in the case of female patients, the result came from the position of hairs during the Head and Neck, Brain tumor. Therefore, after another SETUP in each cases of over 3 mm in the error range, the treatment was carried out. Mean error values of each parts estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the belly, and 2.6 mm for the pelvis. The result showed the correction of SETUP for each treatment through OBI is extremely difficult because of the importance of SETUP in radiation treatment. However, by establishing the average standard of the patients from this research result, the better patient satisfaction and treatment results could be obtained.
International Nuclear Information System (INIS)
Kang, Soo Man
2008-01-01
To reduce side effects in image guided radiation therapy (IGRT) and to improve the quality of life of patients, also to meet accurate SETUP condition for patients, the various SETUP correction conditions were compared and evaluated by using on board imager (OBI) during the SETUP. Each 30 cases of the head, the neck, the chest, the belly, and the pelvis in 150 cases of IGRT patients was corrected after confirmation by using OBI at every 2-3 day. Also, the difference of the SETUP through the skin-marker and the anatomic SETUP through the OBI was evaluated. General SETUP errors (Transverse, Coronal, Sagittal) through the OBI at original SETUP position were Head and Neck: 1.3 mm, Brain: 2 mm, Chest: 3 mm, Abdoman: 3.7 mm, Pelvis: 4 mm. The patients with more that 3 mm in the error range were observed in the correction devices and the patient motions by confirming in treatment room. Moreover, in the case of female patients, the result came from the position of hairs during the Head and Neck, Brain tumor. Therefore, after another SETUP in each cases of over 3 mm in the error range, the treatment was carried out. Mean error values of each parts estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the belly, and 2.6 mm for the pelvis. The result showed the correction of SETUP for each treatment through OBI is extremely difficult because of the importance of SETUP in radiation treatment. However, by establishing the average standard of the patients from this research result, the better patient satisfaction and treatment results could be obtained.
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present
Directory of Open Access Journals (Sweden)
Nicolai Lang, Hans Peter Büchler
2018-01-01
Full Text Available Active quantum error correction on topological codes is one of the most promising routes to long-term qubit storage. In view of future applications, the scalability of the used decoding algorithms in physical implementations is crucial. In this work, we focus on the one-dimensional Majorana chain and construct a strictly local decoder based on a self-dual cellular automaton. We study numerically and analytically its performance and exploit these results to contrive a scalable decoder with exponentially growing decoherence times in the presence of noise. Our results pave the way for scalable and modular designs of actively corrected one-dimensional topological quantum memories.
IMPACT OF TRADE OPENNESS ON OUTPUT GROWTH: CO INTEGRATION AND ERROR CORRECTION MODEL APPROACH
Directory of Open Access Journals (Sweden)
Asma Arif
2012-01-01
Full Text Available This study analyzed the long run relationship between trade openness and output growth for Pakistan using annual time series data for 1972-2010. This study follows the Engle and Granger co integration analysis and error correction approach to analyze the long run relationship between the two variables. The Error Correction Term (ECT for output growth and trade openness is significant at 5% level of significance and indicates a positive long run relation between the variables. This study has also analyzed the causality between trade openness and output growth by using granger causality test. The results of granger causality show that there is a bi-directional significant relationship between trade openness and economic growth.
Directory of Open Access Journals (Sweden)
Mahmudul Mannan Toy
2011-01-01
Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and Vector Error Correction are applied to estimate the export supply model. The econometric analysis is done by using the time series data of the variables of interest which is collected from various secondary sources. The study has empirically tested the hypothesis, long run relationship and casualty between variables of the model. The cointegration analysis shows that all the variables of the study are co-integrated at their first differences meaning that there exists long run relationship among the variables. The VECM estimation shows the dynamics of variables in the export supply function and the short run and long run elasticities of export supply with respect to each independent variable. The error correction term is found negative which indicates that any short run disequilibrium will be turned into equilibrium in the long run.
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements...... of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study......-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
High speed and adaptable error correction for megabit/s rate quantum key distribution.
Dixon, A R; Sato, H
2014-12-02
Quantum Key Distribution is moving from its theoretical foundation of unconditional security to rapidly approaching real world installations. A significant part of this move is the orders of magnitude increases in the rate at which secure key bits are distributed. However, these advances have mostly been confined to the physical hardware stage of QKD, with software post-processing often being unable to support the high raw bit rates. In a complete implementation this leads to a bottleneck limiting the final secure key rate of the system unnecessarily. Here we report details of equally high rate error correction which is further adaptable to maximise the secure key rate under a range of different operating conditions. The error correction is implemented both in CPU and GPU using a bi-directional LDPC approach and can provide 90-94% of the ideal secure key rate over all fibre distances from 0-80 km.
Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.
Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold
2002-02-01
The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction; a mechanism which implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor systems shows nonlinear behavior. Copyright 2001 Elsevier Science (USA).
Haptic Data Processing for Teleoperation Systems: Prediction, Compression and Error Correction
Lee, Jae-young
2013-01-01
This thesis explores haptic data processing methods for teleoperation systems, including prediction, compression, and error correction. In the proposed haptic data prediction method, unreliable network conditions, such as time-varying delay and packet loss, are detected by a transport layer protocol. Given the information from the transport layer, a Bayesian approach is introduced to predict position and force data in haptic teleoperation systems. Stability of the proposed method within stoch...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal
2013-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C:{0,1}Ω(n)→{0,1}n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d=2, then w=Θ(n (lgn/lglgn)2); 2) if d=3, then w...
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
DEFF Research Database (Denmark)
Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal
2012-01-01
We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C:{0,1}Ω(n) -> {0,1}n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d=2 then w = Θ(n ({log n/ log log n})2). (2) If d...
Directory of Open Access Journals (Sweden)
Christian NZENGUE PEGNET
2011-07-01
Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors on amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks’ balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of schocks in Europe on the basis of bank's balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and in a Vector Error Correction Model.
Khairul Jauhari; Achmad Widodo; Ismoyo Haryanto
2015-01-01
In this article, the radial displacement error correction capability of a high precision spindle grinding caused by unbalance force was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearing. Finite element methods (FEM) have been adopted for obtaining the equation of motion of the spindle. In this paper, firstly, natural frequencies, critical frequencies, and amplitude of the unbalance response caused by resi...
Directory of Open Access Journals (Sweden)
Rosa M. Manchón
2010-06-01
Full Text Available Framed in a cognitively-oriented strand of research on corrective feedback (CF in SLA, the controlled three- stage (composition/comparison-noticing/revision study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants’ revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.Enmarcado en la investigación de orden cognitivo sobre la corrección (“corrective feedback”, en este trabajo se investigó la incidencia de dos tipos de corrección escrita (corrección de errores y reformulación en los procesos de detección (noticing e incorporación (“uptake”. Ocho alumnos de inglés de Educción Secundaria participaron en un experimento que constó de tres etapas: redacción, comparación-detección y revisión. La detección se definió operacionalmente en términos del número de correcciones registradas por los alumnos durante la etapa de detección-comparación, mientras que la operacionalización del proceso de incorporación fue el tipo y cantidad de revisiones llevadas a cabo en la última etapa del experimento. Nuestros resultados confirman los hallazgos de la
Directory of Open Access Journals (Sweden)
Sarunya Kanjanawattana
2017-01-01
Full Text Available literature. Extracting graph information clearly contributes to readers, who are interested in graph information interpretation, because we can obtain significant information presenting in the graph. A typical tool used to transform image-based characters to computer editable characters is optical character recognition (OCR. Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This becomes a serious problem because misrecognition provides misunderstanding information to readers and causes misleading communication. In this study, we present a novel method for OCR-error correction based on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used a graph component extraction proposed in our previous study to omit irrelevant parts from graph components. It was applied to clean and prepare input data for this OCR-error correction. The main objectives of this paper are to extract significant information from the graph using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance with the highest accuracies and F-measures. Moreover, we examined that our input data contained less of noise because of an efficiency of our graph component extraction. Based on the evidence, we conclude that our solution to the OCR problem achieves the objectives.
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Energy Technology Data Exchange (ETDEWEB)
Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)
2015-06-23
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
International Nuclear Information System (INIS)
Rota Kops, Elena; Herzog, Hans
2013-01-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled
Energy Technology Data Exchange (ETDEWEB)
Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
Rota Kops, Elena; Herzog, Hans
2013-02-01
AimAttenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methodsAn anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). ResultsError A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal
Error correcting code with chip kill capability and power saving enhancement
Energy Technology Data Exchange (ETDEWEB)
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is firstly proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication which avoids the weakness of synchronous systems and is susceptible to environmental interference. Moreover, as to the transmission errors and data loss in the process of communication, the proposed scheme has the ability to be error-checking and error-correcting in real time. In order to guarantee security, the fractional-order complex chaotic system with the shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-02-03
One of the remarkable challenges about Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently due to energy limitation of sensor nodes. Network coding will increase network throughput of WSN dramatically due to the broadcast nature of WSN. However, the network coding usually propagates a single original error over the whole network. Due to the special property of error propagation in network coding, most of error correction methods cannot correct more than C /2 corrupted errors where C is the max flow min cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C /2 corrupted errors. What is more, even if the error occurs on all the links of the network, our scheme also can correct errors successfully. With introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With referred theory of social networks, the informative relay nodes are selected and marked with high trust value. The two methods of L1 optimization and utilizing social characteristic coordinate with each other, and can correct the propagated error whose fraction is even exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2010-06-01
The dynamics of correct and error responses in a variant of delayed free recall were examined in the present study. In the externalized free recall paradigm, participants were presented with lists of words and were instructed to subsequently recall not only the words that they could remember from the most recently presented list, but also any other words that came to mind during the recall period. Externalized free recall is useful for elucidating both sampling and postretrieval editing processes, thereby yielding more accurate estimates of the total number of error responses, which are typically sampled and subsequently edited during free recall. The results indicated that the participants generally sampled correct items early in the recall period and then transitioned to sampling more erroneous responses. Furthermore, the participants generally terminated their search after sampling too many errors. An examination of editing processes suggested that the participants were quite good at identifying errors, but this varied systematically on the basis of a number of factors. The results from the present study are framed in terms of generate-edit models of free recall.
Image enhancement by spectral-error correction for dual-energy computed tomography.
Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin
2011-01-01
Dual-energy CT (DECT) was reintroduced recently to use the additional spectral information of X-ray attenuation and aims for accurate density measurement and material differentiation. However, the spectral information lies in the difference between low and high energy images or measurements, so that it is difficult to acquire accurate spectral information due to amplification of high pixel noise in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher density material decreases more rapidly as X-ray energy increases. We define as spectral error the case when a pixel pair of low and high energy images deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method, which consists of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. It is the main idea of this work that makes spectral errors distributed like random noise over the true attenuation and suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed noise of liver lesions and improved contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.
International Nuclear Information System (INIS)
Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C
2017-01-01
The maximum operational range of continuous variable quantum key distribution protocols has shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater than unity efficiency codes, implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
International Nuclear Information System (INIS)
Pisani, Laura; Lockman, David; Jaffray, David; Yan Di; Martinez, Alvaro; Wong, John
2000-01-01
Purpose: We hypothesize that the difference in image quality between the traditional kilovoltage (kV) prescription radiographs and megavoltage (MV) treatment radiographs is a major factor hindering our ability to accurately measure, thus correct, setup error in radiation therapy. The objective of this work is to study the accuracy of on-line correction of setup errors achievable using either kV- or MV-localization (i.e., open-field) radiographs. Methods and Materials: Using a gantry mounted kV and MV dual-beam imaging system, the accuracy of on-line measurement and correction of setup error using electronic kV- and MV-localization images was examined based on anthropomorphic phantom and patient imaging studies. For the phantom study, the user's ability to accurately detect known translational shifts was analyzed. The clinical study included 14 patients with disease in the head and neck, thoracic, and pelvic regions. For each patient, 4 orthogonal kV radiographs acquired during treatment simulation from the right lateral, anterior-to-posterior, left lateral, and posterior-to-anterior directions were employed as reference prescription images. Two-dimensional (2D) anatomic templates were defined on each of the 4 reference images. On each treatment day, after positioning the patient for treatment, 4 orthogonal electronic localization images were acquired with both kV and 6-MV photon beams. On alternate weeks, setup errors were determined from either the kV- or MV-localization images but not both. Setup error was determined by aligning each 2D template with the anatomic information on the corresponding localization image, ignoring rotational and nonrigid variations. For each set of 4 orthogonal images, the results from template alignments were averaged. Based on the results from the phantom study and a parallel study of the inter- and intraobserver template alignment variability, a threshold for minimum correction was set at 2 mm in any direction. Setup correction was
International Nuclear Information System (INIS)
Wu Yan; Shannon, Mark A.
2006-01-01
The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
Energy Technology Data Exchange (ETDEWEB)
Moon, Hyeon Seok; Jeong, Deok Yang; Do, Gyeong Min; Lee, Yeong Cheol; KIm, Sun Myung; Kim, Young Bun [Dept. of Radiation Oncology, Korea University Guro Hospital, Seoul (Korea, Republic of)
2016-12-15
The purpose of this study was to evaluate the Retro recon in SRS planning using BranLAB when stereotactic location error occurs by metal artifact. By CT simulator, image were acquired from head phantom(CIRS, PTW, USA). To observe stereotactic location recognizing and beam hardening, CT image were approved by SRS planning system(BrainLAB, Feldkirchen, Germany). In addition, we compared acquisition image(1.25mm slice thickness) and Retro recon image(using for 2.5 mm, 5mm slice thickness). To evaluate these three images quality, the test were performed by AAPM phantom study. In patient, it was verified stereotactic location error. All the location recognizing error did not occur in scanned image of phantom. AAPM phantom scan images all showed the same trend. Contrast resolution and Spatial resolution are under 6.4 mm, 1.0 mm. In case of noise and uniformity, under 11, 5 of HU were measured. In patient, the stereotactic location error was not occurred at reconstructive image. For BrainLAB planning, using Retro recon were corrected stereotactic error at beam hardening. Retro recon may be the preferred modality for radiation treatment planning and approving image quality.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package 'MLbias' and all source files are publicly available. tsenglab.biostat.pitt.edu/software.htm. ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Goldmann tonometry tear film error and partial correction with a shaped applanation surface.
McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M
2018-01-01
The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both an artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than the GAT prism (4.57±0.18 mmHg, p film adhesion error was independent of applanation mire thickness ( R 2 =0.09, p =0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p =0.002). Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error bŷ41%. Fluorescein solution increases the tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.
Correction of clock errors in seismic data using noise cross-correlations
Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline
2017-04-01
Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
Correction for dynamic bias error in transmission measurements of void fraction
International Nuclear Information System (INIS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-01-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved to the expense of marginal decreases in precision.
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.
Song, Li; Florea, Liliana
2015-01-01
Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-08-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal to noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results, developed herein, capture the effects of code length and signal energy fluctuation are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve SNR. Second, the Cramer-Rao lower bound on range accuracy is derived to justify that longer code length can bring better range accuracy. Combined with the SNR model and CRLB model, it is manifested that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to the previously published theories and introduce the Gauss range walk model to range accuracy. Experimental tests also converge to the presented boundary model in this paper. It has been proven that depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to the depth drift of Time Point Spread Function (TPSF). Finally, numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energy is calibrated so that the corrected depth accuracy is improved to 1cm.
Cooperative MIMO communication at wireless sensor network: an error correcting code approach.
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probability of bit error p(b). It is observed that C-MIMO performs more efficiently when the targeted p(b) is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.
A modified error correction protocol for CCITT signalling system no. 7 on satellite links
Kreuer, Dieter; Quernheim, Ulrich
1991-10-01
Comite Consultatif International des Telegraphe et Telephone (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. Not being originally designed for satellite links, however, the so called Preventive Cyclic Retransmission (PCR) Method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed Fix Delay Retransmission (FDR) method is suggested which performs better at high loads, thus providing a more efficient use of the limited carrier capacity. Both the PCR and the FDR methods are investigated by means of simulation and results concerning throughput, queueing delay, and system delay, respectively. The FDR method exhibits higher capacity and shorter delay than the PCR method.
International Nuclear Information System (INIS)
Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun
2014-01-01
Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
Identifying and Correcting Timing Errors at Seismic Stations in and around Iran
International Nuclear Information System (INIS)
Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee
2017-01-01
A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
Error and corrections with scintigraphic measurement of gastric emptying of solid foods
Energy Technology Data Exchange (ETDEWEB)
Meyer, J.H.; Van Deventer, G.; Graham, L.S.; Thomson, J.; Thomasson, D.
1983-03-01
Previous methods for correction of depth used geometric means of simultaneously obtained anterior and posterior counts. The present study compares this method with a new one that uses computations of depth based on peak-to-scatter (P:S) ratios. Six normal volunteers were fed a meal of beef stew, water, and chicken liver that had been labeled in vivo with both In-113m and Tc-99m. Gastric emptying was followed at short intervals with anterior counts of peak and scattered radiation for each nuclide, as well as posteriorly collected peak counts from the gastric ROI. Depth of the nuclides was estimated by the P:S method as well as the older method. Both gave similar results. Errors from septal penetration or scatter proved to be a significantly larger problem than errors from changes in depth.
Correction of phase-shifting error in wavelength scanning digital holographic microscopy
Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-05-01
Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
Correction of electrode modelling errors in multi-frequency EIT imaging.
Jehl, Markus; Holder, David
2016-06-01
The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
Otero-de-la-Roza, A; Johnson, Erin R; DiLabio, Gino A
2014-12-09
Halogen bonds are formed when a Lewis base interacts with a halogen atom in a different molecule, which acts as an electron acceptor. Due to its charge transfer component, halogen bonding is difficult to model using many common density-functional approximations because they spuriously overstabilize halogen-bonded dimers. It has been suggested that dispersion-corrected density functionals are inadequate to describe halogen bonding. In this work, we show that the exchange-hole dipole moment (XDM) dispersion correction coupled with functionals that minimize delocalization error (for instance, BH&HLYP, but also other half-and-half functionals) accurately model halogen-bonded interactions, with average errors similar to other noncovalent dimers with less charge-transfer effects. The performance of XDM is evaluated for three previously proposed benchmarks (XB18 and XB51 by Kozuch and Martin, and the set proposed by Bauzá et al.) spanning a range of binding energies up to ∼50 kcal/mol. The good performance of BH&HLYP-XDM is comparable to M06-2X, and extends to the "extreme" cases in the Bauzá set. This set contains anionic electron donors where charge transfer occurs even at infinite separation, as well as other charge transfer dimers belonging to the pnictogen and chalcogen bonding classes. We also show that functional delocalization error results in an overly delocalized electron density and exact-exchange hole. We propose intermolecular Bader delocalization indices as an indicator of both the donor-acceptor character of an intermolecular interaction and the delocalization error coming from the underlying functional.
A fingerprint key binding algorithm based on vector quantization and error correction
Li, Liang; Wang, Qian; Lv, Ke; He, Ning
2012-04-01
In recent years, researches on seamless combination cryptosystem with biometric technologies, e.g. fingerprint recognition, are conducted by many researchers. In this paper, we propose a binding algorithm of fingerprint template and cryptographic key to protect and access the key by fingerprint verification. In order to avoid the intrinsic fuzziness of variant fingerprints, vector quantization and error correction technique are introduced to transform fingerprint template and then bind with key, after a process of fingerprint registration and extracting global ridge pattern of fingerprint. The key itself is secure because only hash value is stored and it is released only when fingerprint verification succeeds. Experimental results demonstrate the effectiveness of our ideas.
Likelihood-based inference for cointegration with nonlinear error-correction
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders Christian
2010-01-01
We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extend asymptotic normality and mixed normality can be found. A simulation study...
On the roles of direct feedback and error field correction in stabilizing resistive-wall modes
International Nuclear Information System (INIS)
In, Y.; Bogatu, I.N.; Kim, J.S.; Garofalo, A.M.; Jackson, G.L.; La Haye, R.J.; Schaffer, M.J.; Strait, E.J.; Lanctot, M.J.; Reimerdes, H.; Marrelli, L.; Martin, P.; Okabayashi, M.
2010-01-01
Active feedback control in the DIII-D tokamak has fully stabilized the current-driven ideal kink resistive-wall mode (RWM). While complete stabilization is known to require both low frequency error field correction (EFC) and high frequency feedback, unambiguous identification has been made about the distinctive role of each in a fully feedback-stabilized discharge. Specifically, the role of direct RWM feedback, which nullifies the RWM perturbation in a time scale faster than the mode growth time, cannot be replaced by low frequency EFC, which minimizes the lack of axisymmetry of external magnetic fields. (letter)
Confidentiality of 2D Code using Infrared with Cell-level Error Correction
Directory of Open Access Journals (Sweden)
Nobuyuki Teraura
2013-03-01
Full Text Available Optical information media printed on paper use printing materials to absorb visible light. There is a 2D code, which may be encrypted but also can possibly be copied. Hence, we envisage an information medium that cannot possibly be copied and thereby offers high security. At the surface, the normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, which absorb certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at cell level can be introduced.
Error Correction of Meteorological Data Obtained with Mini-AWSs Based on Machine Learning
Directory of Open Access Journals (Sweden)
Ji-Hun Ha
2018-01-01
Full Text Available Severe weather events occur more frequently due to climate change; therefore, accurate weather forecasts are necessary, in addition to the development of numerical weather prediction (NWP of the past several decades. A method to improve the accuracy of weather forecasts based on NWP is the collection of more meteorological data by reducing the observation interval. However, in many areas, it is economically and locally difficult to collect observation data by installing automatic weather stations (AWSs. We developed a Mini-AWS, much smaller than AWSs, to complement the shortcomings of AWSs. The installation and maintenance costs of Mini-AWSs are lower than those of AWSs; Mini-AWSs have fewer spatial constraints with respect to the installation than AWSs. However, it is necessary to correct the data collected with Mini-AWSs because they might be affected by the external environment depending on the installation area. In this paper, we propose a novel error correction of atmospheric pressure data observed with a Mini-AWS based on machine learning. Using the proposed method, we obtained corrected atmospheric pressure data, reaching the standard of the World Meteorological Organization (WMO; ±0.1 hPa, and confirmed the potential of corrected atmospheric pressure data as an auxiliary resource for AWSs.
Goldmann tonometry tear film error and partial correction with a shaped applanation surface
Directory of Open Access Journals (Sweden)
McCafferty SJ
2018-01-01
Full Text Available Sean J McCafferty,1–4 Eniko T Enikov,5 Jim Schwiegerling,2,3 Sean M Ashley1,3 1Intuor Technologies, 2Department of Ophthalmology, University of Arizona College of Medicine, 3University of Arizona College of Optical Science, 4Arizona Eye Consultants, 5Department of Mechanical and Aerospace, University of Arizona College of Engineering, Tucson, AZ, USA Purpose: The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT prism and in a correcting applanation tonometry surface (CATS prism.Methods: The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both an artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate simulated cornea tear film separation measurement differences between the GAT and CATS prisms.Results: The CATS prism tear film adhesion error (2.74±0.21 mmHg was significantly less than the GAT prism (4.57±0.18 mmHg, p<0.001. Tear film adhesion error was independent of applanation mire thickness (R2=0.09, p=0.04. Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg; p<0.001. Cadaver eye validation indicated the CATS prism’s tear film adhesion error (1.40±0.51 mmHg was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002.Conclusion: Measured GAT tear film adhesion error is more than previously predicted. A CATS prism significantly reduced tear film adhesion error by ~41%. Fluorescein solution increases the tear film adhesion compared to
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
Liebovitch, Larry
1998-03-01
evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA and if digitial error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here, is how information is stored in DNA and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
International Nuclear Information System (INIS)
Doherty, W.
2015-01-01
A nebulizer-centric instrument response function model of the plasma mass spectrometer was combined with a signal drift model, and the result was used to identify the causes of the non-spectroscopic determinate errors remaining in mass bias-corrected Pb isotope ratios (Tl as internal standard) measured using a multi-collector plasma mass spectrometer. Model calculations, confirmed by measurement, show that the detectable time-dependent errors are a result of the combined effect of signal drift and differences in the coordinates of the Pb and Tl response function maxima (horizontal offset effect). If there are no horizontal offsets, then the mass bias-corrected isotope ratios are approximately constant in time. In the absence of signal drift, the response surface curvature and horizontal offset effects are responsible for proportional errors in the mass bias-corrected isotope ratios. The proportional errors will be different for different analyte isotope ratios and different at every instrument operating point. Consequently, mass bias coefficients calculated using different isotope ratios are not necessarily equal. The error analysis based on the combined model provides strong justification for recommending a three step correction procedure (mass bias correction, drift correction and a proportional error correction, in that order) for isotope ratio measurements using a multi-collector plasma mass spectrometer
Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.
2009-04-01
In the current study half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is being analyzed which was sampled at 5 minutes intervals by C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half year most of the rainfall has a stratiform character. Though radar and raingauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations differences between both measuring devices become even larger due to the occurrence of a bright band (the point where ice particles start to melt intensifying the radar reflectivity measurement). For these circumstances the radar overestimates the amount of precipitation and because in the Ardennes bright bands occur within 1000 meter from the surface, it's detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, very close to the radar clutter is a serious problem. As a result both nearby and farther away, using uncorrected radar results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere, the effects of attenuation and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and raingauge are much smaller which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km2 Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model serious improvement in simulating observed discharges is found when using corrected radar
Quantum states and their marginals. From multipartite entanglement to quantum error-correcting codes
International Nuclear Information System (INIS)
Huber, Felix Michael
2017-01-01
At the heart of the curious phenomenon of quantum entanglement lies the relation between the whole and its parts. In my thesis, I explore different aspects of this theme in the multipartite setting by drawing connections to concepts from statistics, graph theory, and quantum error-correcting codes: first, I address the case when joint quantum states are determined by their few-body parts and by Jaynes' maximum entropy principle. This can be seen as an extension of the notion of entanglement, with less complex states already being determined by their few-body marginals. Second, I address the conditions for certain highly entangled multipartite states to exist. In particular, I present the solution of a long-standing open problem concerning the existence of an absolutely maximally entangled state on seven qubits. This sheds light on the algebraic properties of pure quantum states, and on the conditions that constrain the sharing of entanglement amongst multiple particles. Third, I investigate Ulam's graph reconstruction problems in the quantum setting, and obtain legitimacy conditions of a set of states to be the reductions of a joint graph state. Lastly, I apply and extend the weight enumerator machinery from quantum error correction to investigate the existence of codes and highly entangled states in higher dimensions. This clarifies the physical interpretation of the weight enumerators and of the quantum MacWilliams identity, leading to novel applications in multipartite entanglement.
Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor
Directory of Open Access Journals (Sweden)
Fang Tang
2014-01-01
Full Text Available Conventional two-step ADC for CMOS image sensor requires full resolution noise performance in the first stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR ADC scheme for CMOS image sensor applications. The first stage single slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 k μm2·cycles/sample.
Two-step single slope/SAR ADC with error correction for CMOS image sensor.
Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin
2014-01-01
Conventional two-step ADC for CMOS image sensor requires full resolution noise performance in the first stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first stage single slope ADC generates a 3-bit data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full resolution noise performance, the first stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μ m CMOS technology. The chip area of the proposed ADC is 7 μ m × 500 μ m. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under 1.4 V power supply and the chip area efficiency is 84 k μ m(2) · cycles/sample.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec dchen@illinois.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Writing and Speech Recognition : Observing Error Correction Strategies of Professional Writers
Leijten, M.A.J.C.
2007-01-01
In this thesis we describe the organization of speech recognition based writing processes. Writing can be seen as a visual representation of spoken language: a combination that speech recognition takes full advantage of. In the field of writing research, speech recognition is a new writing
Zhang, Zhanjun
2004-01-01
Comment: The wrong mutual information, quantum bit error rate and secure transmission efficiency in Wojcik's eavesdropping scheme [PRL90(03)157901]on ping-pong protocol have been pointed out and corrected
Phase correction and error estimation in InSAR time series analysis
Zhang, Y.; Fattahi, H.; Amelung, F.
2017-12-01
During the last decade several InSAR time series approaches have been developed in response to the non-idea acquisition strategy of SAR satellites, such as large spatial and temporal baseline with non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allows us to form fully connected networks of interferograms and simplifies the time series analysis into a weighted least square inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of different components in InSAR time-series and its uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time-series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application on the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc
International Nuclear Information System (INIS)
Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.
1983-11-01
Given the intensity of linac bunches and their repetition rate the desired luminosity of SLC 1.0 x 10 30 cm -2 sec -1 requires focusing the interaction bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size both the second order geometric and chromatic aberrations may be completely destructive. The concept of second order achromat proved to be extremely important in this respect and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: matching section (MS) designed for matching the phase space from the linac to the arcs, reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc and the final focus system (FFS). The second order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming identical histogram for the south arc, corresponding 'luminosity' L is found. The simulation of the misalignments and error effects have to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, the focusing of the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-07-24
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
Directory of Open Access Journals (Sweden)
Xingming Sun
2015-07-01
Full Text Available Air temperature (AT is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS. Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR. Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Directory of Open Access Journals (Sweden)
Tianzhou Chen
2013-09-01
Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
A correction for emittance-measurement errors caused by finite slit and collector widths
International Nuclear Information System (INIS)
Connolly, R.C.
1992-01-01
One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs
International Nuclear Information System (INIS)
Glasure, Yong U.; Lee, Aie-Rie
1998-01-01
This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and unidirectional causal relationship from energy consumption to GDP for Singapore
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
International Nuclear Information System (INIS)
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Correcting a fundamental error in greenhouse gas accounting related to bioenergy.
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy
2012-06-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Directory of Open Access Journals (Sweden)
Amir H Pakpour
2013-01-01
Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.
5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.
2010-01-01
... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Can I cancel my FERS election if my qualifying retirement coverage error was previously corrected and I now have an election opportunity under... ERRONEOUS RETIREMENT COVERAGE CORRECTIONS ACT Making an Election Fers Elections § 839.622 Can I cancel my...
MODEL PERMINTAAN UANG DI INDONESIA DENGAN PENDEKATAN VECTOR ERROR CORRECTION MODEL
Directory of Open Access Journals (Sweden)
imam mukhlis
2016-09-01
Full Text Available This research aims to estimate the demand for money model in Indonesia for 2005.2-2015.12. The variables used in this research are ; demand for money, interest rate, inflation, and exchange rate (IDR/US$. The stationary test with ADF used to test unit root in the data. Cointegration test applied to estimate the long run relationship berween variables. This research employed the Vector Error Correction Model (VECM to estimate the money demand model in Indonesia. The results showed that all the data was stationer at the difference level (1%. There were long run relationship between interest rate, inflation and exchange rate to demand for money in Indonesia. The VECM model could not explaine interaction between explanatory variables to independent variables. In the short run, there were not relationship between interest rate, inflation and exchange rate to demand for money in Indonesia for 2005.2-2015.12
A Conceptual Design Study for the Error Field Correction Coil Power Supply in JT-60SA
International Nuclear Information System (INIS)
Matsukawa, M.; Shimada, K.; Yamauchi, K.; Gaio, E.; Ferro, A.; Novello, L.
2013-01-01
This paper describes a conceptual design study for the circuit configuration of the Error Field Correction Coil (EFCC) power supply (PS) to maximize the expected performance with reasonable cost in JT-60SA. The EFCC consists of eighteen sector coils installed inside the vacuum vessel, six in the toroidal direction and three in the poloidal direction, each one rated for 30 kA-turn. As a result, star point connection is proposed for each group of six EFCC coils installed cyclically in the toroidal direction for decoupling with poloidal field coils. In addition, a six phase inverter which is capable of controlling each phase current was chosen as PS topology to ensure higher flexibility of operation with reasonable cost.
Correcting the error in neutron moisture probe measurements caused by a water density gradient
International Nuclear Information System (INIS)
Wilson, D.J.
1988-01-01
If a neutron probe lies in or near a water density gradient, the probe may register a water density different to that at the measuring point. The effect of a thin stratum of soil containing an excess or depletion of water at various distances from a probe in an otherwise homogeneous system has been calculated, producing an 'importance' curve. The effect of these strata can be integrated over the soil region in close proximity to the probe resulting in the net effect of the presence of a water density gradient. In practice, the probe is scanned through the point of interest and the count rate at that point is corrected for the influence of the water density on each side of it. An example shows that the technique can reduce an error of 10 per cent to about 2 per cent
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
DEFF Research Database (Denmark)
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which...... already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants...... and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon...
Bound on quantum computation time: Quantum error correction in a critical environment
International Nuclear Information System (INIS)
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-01-01
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Precursors, gauge invariance, and quantum error correction in AdS/CFT
Energy Technology Data Exchange (ETDEWEB)
Freivogel, Ben; Jefferson, Robert A.; Kabir, Laurens [ITFA and GRAPPA, Universiteit van Amsterdam,Science Park 904, Amsterdam (Netherlands)
2016-04-19
A puzzling aspect of the AdS/CFT correspondence is that a single bulk operator can be mapped to multiple different boundary operators, or precursors. By improving upon a recent model of Mintun, Polchinski, and Rosenhaus, we demonstrate explicitly how this ambiguity arises in a simple model of the field theory. In particular, we show how gauge invariance in the boundary theory manifests as a freedom in the smearing function used in the bulk-boundary mapping, and explicitly show how this freedom can be used to localize the precursor in different spatial regions. We also show how the ambiguity can be understood in terms of quantum error correction, by appealing to the entanglement present in the CFT. The concordance of these two approaches suggests that gauge invariance and entanglement in the boundary field theory are intimately connected to the reconstruction of local operators in the dual spacetime.
Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression
Slinko, Arkadii
2015-01-01
This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data which must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from an unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with knowledge of basics in algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves that is sufficient to understand such real-life applications as cryptography, secret sharing, error-correcting, fingerprinting and compression of information. It is the first to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.
Potts glass reflection of the decoding threshold for qudit quantum error correcting codes
Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.
We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).
Directory of Open Access Journals (Sweden)
Akhsyim Afandi
2017-03-01
Full Text Available There was a question whether monetary policy works through bank lending channelrequired a monetary-induced change in bank loans originates from the supply side. Mostempirical studies that employed vector autoregressive (VAR models failed to fulfill thisrequirement. Aiming to offer a solution to this identification problem, this paper developed afive-variable vector error correction (VEC model of two separate bank credit markets inIndonesia. Departing from previous studies, the model of each market took account of onestructural break endogenously determined by implementing a unit root test. A cointegrationtest that took account of one structural break suggested two cointegrating vectors identifiedas bank lending supply and demand relations. The estimated VEC system for both marketssuggested that bank loans adjusted more strongly in the direction of the supply equation.
Estimating oil product demand in Indonesia using a cointegrating error correction model
International Nuclear Information System (INIS)
Dahl, C.
2001-01-01
Indonesia's long oil production history and large population mean that Indonesian oil reserves, per capita, are the lowest in OPEC and that, eventually, Indonesia will become a net oil importer. Policy-makers want to forestall this day, since oil revenue comprised around a quarter of both the government budget and foreign exchange revenues for the fiscal years 1997/98. To help policy-makers determine how economic growth and oil-pricing policy affect the consumption of oil products, we estimate the demand for six oil products and total petroleum consumption, using an error correction-cointegration approach, and compare it with estimates on a lagged endogenous model using data for 1970-95. (author)
Tripartite entanglement in qudit stabilizer states and application in quantum error correction
Energy Technology Data Exchange (ETDEWEB)
Looi, Shiang Yong; Griffiths, Robert B. [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
Consider a stabilizer state on n qudits, each of dimension D with D being a prime or squarefree integer, divided into three mutually disjoint sets or parts. Generalizing a result of Bravyi et al.[J. Math. Phys. 47, 062106 (2006)] for qubits (D=2), we show that up to local unitaries, the three parts of the state can be written as tensor product of unentangled signle-qudit states, maximally entangled Einstein-Podolsky-Rosen (EPR) pairs, and tripartite Greenberger-Horne-Zeilinger (GHZ) states. We employ this result to obtain a complete characterization of the properties of a class of channels associated with stabilizer error-correcting codes, along with their complementary channels.
Directory of Open Access Journals (Sweden)
Sasanti Widyawati
2016-05-01
Full Text Available AbstractBank loans has an important role in financing the national economy and driving force of economic growth.Therefore, credit growth must be balanced. However, the condition show that commercial bank credit growthslowed back.Using the method of Error Correction Model (ECM Domowitz - El Badawi, the study analyze theimpact of short-term and long-term independent variables to determine the credit growth in Indonesia financialsector. The results show that, in the short term only non performing loans are significant negative effect onthe working capital loans growth. For long-term, working capital loan interest rates have a significant negativeeffect, third party funds growth have a significant positive effect and inflation have a significant negativeeffect.
Spoken Grammar for Chinese Learners
Institute of Scientific and Technical Information of China (English)
徐晓敏
2013-01-01
Currently, the concept of spoken grammar has been mentioned among Chinese teachers. However, teach-ers in China still have a vague idea of spoken grammar. Therefore this dissertation examines what spoken grammar is and argues that native speakers’ model of spoken grammar needs to be highlighted in the classroom teaching.
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm is consisted of the following two sub-models: (1) PM10 concentration forecasting model; (2) error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to disassemble original PM10 concentration series and error sequence, respectively. The extreme learning machine (ELM) model optimized by cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series respectively collected from Beijing and Harbin located in China are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.
1996-01-01
Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.
The importance of matched poloidal spectra to error field correction in DIII-D
Energy Technology Data Exchange (ETDEWEB)
Paz-Soldan, C., E-mail: paz-soldan@fusion.gat.com; Lanctot, M. J.; Buttery, R. J.; La Haye, R. J.; Strait, E. J. [General Atomics, P.O. Box 85608, San Diego, California 92121 (United States); Logan, N. C.; Park, J.-K.; Solomon, W. M. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Shiraki, D.; Hanson, J. M. [Department of Applied Physics and Applied Mathematics, Columbia University, New York, New York 10027 (United States)
2014-07-15
Optimal error field correction (EFC) is thought to be achieved when coupling to the least-stable “dominant” mode of the plasma is nulled at each toroidal mode number (n). The limit of this picture is tested in the DIII-D tokamak by applying superpositions of in- and ex-vessel coil set n = 1 fields calculated to be fully orthogonal to the n = 1 dominant mode. In co-rotating H-mode and low-density Ohmic scenarios, the plasma is found to be, respectively, 7× and 20× less sensitive to the orthogonal field as compared to the in-vessel coil set field. For the scenarios investigated, any geometry of EFC coil can thus recover a strong majority of the detrimental effect introduced by the n = 1 error field. Despite low sensitivity to the orthogonal field, its optimization in H-mode is shown to be consistent with minimizing the neoclassical toroidal viscosity torque and not the higher-order n = 1 mode coupling.
The use of concept maps to detect and correct concept errors (mistakes
Directory of Open Access Journals (Sweden)
Ladislada del Puy Molina Azcárate
2013-02-01
Full Text Available This work proposes to detect and correct concept errors (EECC to obtain Meaningful Learning (AS. The Conductive Model does not respond to the demand of meaningful learning that implies gathering thought, feeling and action to lead students up to both compromise and responsibility. In order to respond to the society competition about knowledge and information it is necessary to change the way of teaching and learning (from conductive model to constructive model. In this context it is important not only to learn meaningfully but also to create knowledge so as to developed dissertive, creative and critical thought, and the EECC are and obstacle to cope with this. This study tries to get ride of EECC in order to get meaningful learning. For this, it is essential to elaborate a Teaching Module (MI. This teaching Module implies the treatment of concept errors by a teacher able to change the dynamic of the group in the classroom. This M.I. was used among sixth grade primary school and first grade secondary school in some state assisted schools in the North of Argentina (Tucumán and Jujuy. After evaluation, the results showed great and positive changes among the experimental groups taking into account the attitude and the academic results. Meaningful Learning was shown through pupilʼs creativity, expressions and also their ability of putting this into practice into everyday life.
Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)
Maulia, Eva; Miftahuddin; Sofyan, Hizir
2018-05-01
A country has some important parameters to achieve the welfare of the economy, such as tax revenues and inflation. One of the largest revenues of the state budget in Indonesia comes from the tax sector. Besides, the rate of inflation occurring in a country can be used as one measure, to measure economic problems that the country facing. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the relationship and forecasting tax revenue and inflation rate. VECM (Vector Error Correction Model) was chosen as the method used in this research, because of the data used in the form of multivariate time series data. This study aims to produce a VECM model with optimal lag and to predict the tax revenue and inflation rate of the VECM model. The results show that the best model for data of tax revenue and the inflation rate in Banda Aceh City is VECM with 3rd optimal lag or VECM (3). Of the seven models formed, there is a significant model that is the acceptance model of income tax. The predicted results of tax revenue and the inflation rate in Kota Banda Aceh for the next 6, 12 and 24 periods (months) obtained using VECM (3) are considered valid, since they have a minimum error value compared to other models.
Development and characterisation of FPGA modems using forward error correction for FSOC
Mudge, Kerry A.; Grant, Kenneth J.; Clare, Bradley A.; Biggs, Colin L.; Cowley, William G.; Manning, Sean; Lechner, Gottfried
2016-05-01
In this paper we report on the performance of a free-space optical communications (FSOC) modem implemented in FPGA, with data rate variable up to 60 Mbps. To combat the effects of atmospheric scintillation, a 7/8 rate low density parity check (LDPC) forward error correction is implemented along with custom bit and frame synchronisation and a variable length interleaver. We report on the systematic performance evaluation of an optical communications link employing the FPGA modems using a laboratory test-bed to simulate the effects of atmospheric turbulence. Log-normal fading is imposed onto the transmitted free-space beam using a custom LabVIEW program and an acoustic-optic modulator. The scintillation index, transmitted optical power and the scintillation bandwidth can all be independently varied allowing testing over a wide range of optical channel conditions. In particular, bit-error-ratio (BER) performance for different interleaver lengths is investigated as a function of the scintillation bandwidth. The laboratory results are compared to field measurements over 1.5km.
An investigation of coupling of the internal kink mode to error field correction coils in tokamaks
International Nuclear Information System (INIS)
Lazarus, E.A.
2013-01-01
The coupling of the internal kink to an external m/n = 1/1 perturbation is studied for profiles that are known to result in a saturated internal kink in the limit of a cylindrical tokamak. It is found from three-dimensional equilibrium calculations that, for A ≈ 30 circular plasmas and A ≈ 3 elliptical shapes, this coupling of the boundary perturbation to the internal kink is strong; i.e., the amplitude of the m/n = 1/1 structure at q = 1 is large compared with the amplitude applied at the plasma boundary. Evidence suggests that this saturated internal kink, resulting from small field errors, is an explanation for the TEXTOR and JET measurements of q 0 remaining well below unity throughout the sawtooth cycle, as well as the distinction between sawtooth effects on the q-profile observed in TEXTOR and DIII-D. It is proposed that this excitation, which could readily be applied with error field correction coils, be explored as a mechanism for controlling sawtooth amplitudes in high-performance tokamak discharges. This result is then combined with other recent tokamak results to propose an L-mode approach to fusion in tokamaks. (paper)
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Energy Technology Data Exchange (ETDEWEB)
Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL
International Nuclear Information System (INIS)
Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin
2009-01-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data
Directory of Open Access Journals (Sweden)
Jinhua Han
2017-01-01
Full Text Available This paper describes a noncontact acquisition system composed of several time synchronized laser height sensors, accelerometers, gyroscope, and so forth in order to collect the road profiles of vehicle riding on the unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitudes change. Because the power spectral density (PSD of gyro attitudes concentrates in the low frequency band, a method called frequency division is presented to divide the road profiles into two parts: high frequency part and low frequency part. The vibration error of road profiles is corrected by displacement data obtained through two times integration of measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitudes signals are separated from low frequency road profile by the method of sliding block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road profiles’ measuring of vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation test.
Initial Results of Using Daily CT Localization to Correct Portal Error in Prostate Cancer
International Nuclear Information System (INIS)
Lattanzi, Joseph; McNeely, Shawn; Barnes, Scott; Das, Indra; Schultheiss, Timothy E; Hanks, Gerald E.
1997-01-01
Purpose: To evaluate the use of daily CT simulation in prostate cancer to correct errors in portal placement and organ motion. Improved localization with this technique should allow the reduction of target margins and facilitate dose escalation in high risk patients while minimizing the risk of normal tissue morbidity. Methods and Materials : Five patients underwent standard CT simulation with the alpha cradle cast, IV contrast, and urethrogram. All were initially treated to 46 Gy in a four field conformal technique which included the prostate, seminal vesicles and pelvic lymph nodes (GTV 1 ). The prostate or prostate and seminal vesicles (GTV 2 ) then received 56 Gy with a 1.0 cm margin to the PTV. At 50 Gy a second CT simulation was performed with IV contrast, urethrogram and the alpha cradle secured to a rigid sliding board. The prostate was contoured, a new isocenter generated, and surface markers placed. Prostate only treatment portals for the final conedown (GTV 3 ) were created with 0.25 cm isodose margins to the PTV. The final six fractions in 2 patients with favorable disease and eight fractions in 3 patients with unfavorable disease were delivered using the daily CT technique. On each treatment day the patient was placed in his cast on the sliding board and a CT scan performed. The daily isocenter was calculated in the A/P and lateral dimension and compared to the 50 Gy CT simulation isocenter. Couch and surface marker shifts were calculated to produce perfect portal alignment. To maintain positioning, the patient was transferred to a gurney while on the sliding board in his cast, transported to the treatment room and then transferred to the treatment couch. The patient was then treated to the corrected isocenter. Portal films and real time images were obtained for each portal. Results: Utilizing CT-CT image registration (fusion) of the daily and 50 Gy baseline CT scans the isocenter changes were quantified to reflect the contribution of positional
Optical correction of refractive error for preventing and treating eye symptoms in computer users.
Heus, Pauline; Verbeek, Jos H; Tikka, Christina
2018-04-10
Computer users frequently complain about problems with seeing and functioning of the eyes. Asthenopia is a term generally used to describe symptoms related to (prolonged) use of the eyes like ocular fatigue, headache, pain or aching around the eyes, and burning and itchiness of the eyelids. The prevalence of asthenopia during or after work on a computer ranges from 46.3% to 68.5%. Uncorrected or under-corrected refractive error can contribute to the development of asthenopia. A refractive error is an error in the focusing of light by the eye and can lead to reduced visual acuity. There are various possibilities for optical correction of refractive errors including eyeglasses, contact lenses and refractive surgery. To examine the evidence on the effectiveness, safety and applicability of optical correction of refractive error for reducing and preventing eye symptoms in computer users. We searched the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; Embase; Web of Science; and OSH update, all to 20 December 2017. Additionally, we searched trial registries and checked references of included studies. We included randomised controlled trials (RCTs) and quasi-randomised trials of interventions evaluating optical correction for computer workers with refractive error for preventing or treating asthenopia and their effect on health related quality of life. Two authors independently assessed study eligibility and risk of bias, and extracted data. Where appropriate, we combined studies in a meta-analysis. We included eight studies with 381 participants. Three were parallel group RCTs, three were cross-over RCTs and two were quasi-randomised cross-over trials. All studies evaluated eyeglasses, there were no studies that evaluated contact lenses or surgery. Seven studies evaluated computer glasses with at least one focal area for the distance of the computer screen with or without additional focal areas in presbyopic persons. Six studies compared computer
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
Intelligent error correction method applied on an active pixel sensor based star tracker
Schmidt, Uwe
2005-10-01
Star trackers are opto-electronic sensors used on-board of satellites for the autonomous inertial attitude determination. During the last years star trackers became more and more important in the field of the attitude and orbit control system (AOCS) sensors. High performance star trackers are based up today on charge coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 90-ties, allows now the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 80-ties with the launch of the worldwide first fully autonomous star tracker system ASTRO1 to the Russian MIR space station. Jena-Optronik recently developed an active pixel sensor based autonomous star tracker "ASTRO APS" as successor of the CCD based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are, a true xy-address random access, the multiple windowing read out and the on-chip signal processing including the analogue to digital conversion. These features can be used for robust star tracking at high slew rates and under worse conditions like stray light and solar flare induced single event upsets. A special algorithm have been developed to manage the typical APS detector error contributors like fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomous and adapts to e.g. increasing DSNU and up-coming white spots automatically without ground maintenance or re-calibration. In contrast to conventional correction methods the described algorithm does not need calibration data memory like full image sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element for the design of star trackers for long term satellite applications like
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, applicable ECC strength of SCM and NAND flash is evaluated independently by fixing ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with small correctable bit is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics
International Nuclear Information System (INIS)
Moss, A.R.L.
2000-01-01
Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of ± 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of ±0.1%T is required to give a UPF uncertainty of ±2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
Error Correction and Calibration of a Sun Protection Measurement System for Textile Fabrics
Energy Technology Data Exchange (ETDEWEB)
Moss, A.R.L
2000-07-01
Clothing is increasingly being labelled with a Sun Protection Factor number which indicates the protection against sunburn provided by the textile fabric. This Factor is obtained by measuring the transmittance of samples of the fabric in the ultraviolet region (290-400 nm). The accuracy and hence the reliability of the label depends on the accuracy of the measurement. Some sun protection measurement systems quote a transmittance accuracy at 2%T of {+-} 1.5%T. This means a fabric classified under the Australian standard (AS/NZ 4399:1996) with an Ultraviolet Protection Factor (UPF) of 40 would have an uncertainty of +15 or -10. This would not allow classification to the nearest 5, and a UVR protection category of 'excellent protection' might in fact be only 'very good protection'. An accuracy of {+-}0.1%T is required to give a UPF uncertainty of {+-}2.5. The measurement system then does not contribute significantly to the error, and the problems are now limited to sample conditioning, position and consistency. A commercial sun protection measurement system has been developed by Camspec Ltd which used traceable neutral density filters and appropriate design to ensure high accuracy. The effects of small zero offsets are corrected and the effect of the reflectivity of the sample fabric on the integrating sphere efficiency is measured and corrected. Fabric orientation relative to the light patch is considered. Signal stability is ensured by means of a reference beam. Traceable filters also allow wavelength accuracy to be conveniently checked. (author)
Directory of Open Access Journals (Sweden)
Dan Tulpan
2013-01-01
Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
DEFF Research Database (Denmark)
Tamura, Jim; Kobara, Kazukuni; Fathi, Hanane
2010-01-01
A number of lightweight PIR (Private Information Retrieval) schemes have been proposed in recent years. In JWIS2006, Kwon et al. proposed a new scheme (optimized LFCPIR, or OLFCPIR), which aimed at reducing the communication cost of Lipmaa's O(log2 n) PIR(LFCPIR) to O(logn). However in this paper......, we point out a fatal error of overflow contained in OLFCPIR and show how the error can be corrected. Finally, we compare with LFCPIR to show that the communication cost of our corrected OLFCPIR is asymptotically the same as the previous LFCPIR....
DEFF Research Database (Denmark)
Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy
2004-01-01
An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=+-1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.......An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=+-1) probe correction scheme is applied to the near-field signal measured by a higher-order probe....
Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.
2018-01-01
In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these type of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and into a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.
2017-11-27
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
Directory of Open Access Journals (Sweden)
Trofimov Ivan D.
2017-01-01
Full Text Available The paper re-examines the “stylized facts” of the balanced growth in developed economies, looking specifically at capital productivity variable. The economic data is obtained from European Commission AMECO database, spanning 1961-2014 period. For a sample of 22 OECD economies, the paper applies univariate LM unit root tests with one or two structural breaks, and estimates error-correction and linear trend models with breaks. It is shown that diverse statistical patterns were present across economies and overall mixed evidence is provided as to the stability of capital productivity and balanced growth in general. Specifically, both upward and downward trends in capital productivity were present, while in several economies mean reversion and random walk patterns were observed. The data and results were largely in line with major theoretical explanations pertaining to capital productivity. With regard to determinants of the capital productivity movements, the structure of capital stock and the prices of capital goods were likely most salient.
Oil price fluctuations and employment in Kern County: A Vector Error Correction approach
International Nuclear Information System (INIS)
Michieka, Nyakundi M.; Gearhart, Richard
2015-01-01
Kern County is one of the country's largest oil producing regions, in which the oil industry employs a significant fraction of the labor force in the county. In this study, the short- and long-run effects of oil price fluctuations on employment in Kern County are investigated using a Vector Error Correction model (VECM). Empirical results over the period 1990:01 to 2015:03 suggest long-run causality running from both WTI and Brent oil prices to employment. No causality is detected in the short-run. Kern County should formulate appropriate policies, which take into account the fact that changes in oil prices have long-term effects on employment rather than short term. - Highlights: • Kern County is California's largest oil producing region. • Historical data has shown increased employment during periods of high oil prices. • We study the short- and long run effects of oil prices on employment in Kern County. • Results suggest long run causality running from WTI and Brent to employment. • No causality is detected in the short run.
Assessment of cassava supply response in Nigeria using vector error correction model (VECM
Directory of Open Access Journals (Sweden)
Obayelu Oluwakemi Adeola
2016-12-01
Full Text Available The response of agricultural commodities to changes in price is an important factor in the success of any reform programme in agricultural sector of Nigeria. The producers of traditional agricultural commodities, such as cassava, face the world market directly. Consequently, the producer price of cassava has become unstable, which is a disincentive for both its production and trade. This study investigated cassava supply response to changes in price. Data collected from FAOSTAT from 1966 to 2010 were analysed using Vector Error Correction Model (VECM approach. The results of the VECM for the estimation of short run adjustment of the variables toward their long run relationship showed a linear deterministic trend in the data and that Area cultivated and own prices jointly explained 74% and 63% of the variation in the Nigeria cassava output in the short run and long-run respectively. Cassava prices (P<0.001 and land cultivated (P<0.1 had positive influence on cassava supply in the short-run. The short-run price elasticity was 0.38 indicating that price policies were effective in the short-run promotion of cassava production in Nigeria. However, in the long-run elasticity cassava was not responsive to price incentives significantly. This suggests that price policies are not effective in the long-run promotion of cassava production in the country owing to instability in governance and government policies.
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.; Pan, B.; Lubineau, Gilles
2017-01-01
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
Quantum Error Correction: Optimal, Robust, or Adaptive? Or, Where is The Quantum Flyball Governor?
Kosut, Robert; Grace, Matthew
2012-02-01
In The Human Use of Human Beings: Cybernetics and Society (1950), Norbert Wiener introduces feedback control in this way: ``This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback ... It is the function of control ... to produce a temporary and local reversal of the normal direction of entropy.'' The classic classroom example of feedback control is the all-mechanical flyball governor used by James Watt in the 18th century to regulate the speed of rotating steam engines. What is it that is so compelling about this apparatus? First, it is easy to understand how it regulates the speed of a rotating steam engine. Secondly, and perhaps more importantly, it is a part of the device itself. A naive observer would not distinguish this mechanical piece from all the rest. So it is natural to ask, where is the all-quantum device which is self regulating, ie, the Quantum Flyball Governor? Is the goal of quantum error correction (QEC) to design such a device? Devloping the computational and mathematical tools to design this device is the topic of this talk.
Daily CT localization for correcting portal errors in the treatment of prostate cancer
International Nuclear Information System (INIS)
Lattanzi, Joseph; McNeely, Shawn; Hanlon, Alexandra; Das, Indra; Schultheiss, Timothy E.; Hanks, Gerald E.
1998-01-01
-Gy baseline CT scans, the isocenter changes were quantified to reflect the contribution of positional (surface marker shifts) error and absolute prostate motion relative to the bony pelvis. The maximum daily A/P shift was 7.3 mm. Motion was less than 5 mm in the remaining patients and the overall mean magnitude change was 2.9 mm. The overall variability was quantified by a pooled standard deviation of 1.7 mm. The maximum lateral shifts were less than 3 mm for all patients. With careful attention to patient positioning, maximal portal placement error was reduced to 3 mm. Conclusion: In our experience, prostate motion after 50 Gy was significantly less than previously reported. This may reflect early physiologic changes due to radiation, which restrict prostate motion. This observation is being tested in a separate study. Intrapatient and overall population variance was minimal. With daily isocenter correction of setup and organ motion errors by CT imaging, PTV margins can be significantly reduced or eliminated. We believe this will facilitate further dose escalation in high-risk patients with minimal risk of increased morbidity. This technique may also be beneficial in low-risk patients by sparing more normal surrounding tissue
Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis
2015-01-01
Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies. PMID:25663845
Nazione, Samantha; Pace, Kristin
2015-01-01
Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
The Effect of Explicit and Implicit Corrective Feedback on Segmental Word-Level Pronunciation Errors
Directory of Open Access Journals (Sweden)
Mohammad Zohrabi
2017-04-01
Full Text Available Over the last few years, the realm of foreign language learning has witnessed an abundance of research concerning the effectiveness of corrective feedback on the acquisition of grammatical features, with the study of other target language subsystems, such as pronunciation, being few and far between. In order to bridge this gap, the present study intended to investigate and compare the immediate and delayed effect of explicit (overt and implicit (covert corrective feedback (CF on treating segmental word-level pronunciation errors committed by adult EFL learners of an institute in Tabriz named ALC. To this end, through a quasi-experimental study and random sampling, three groups were formed, an explicit, an implicit and a control group, each consisting of 20 low proficient EFL learners. Besides, considering the levels that learners were assigned to, based on the institute’s criteria, a Preliminary English Test (PET was administered in order to determine the proficiency level of learners. Having administered the pretest before treatment, to measure the longer-term effect of explicit vs. implicit CF on segmental word-level pronunciation errors, the study included delayed posttests in addition to immediate posttests all of which included reading passages containing 40 problematic words. The collected data were analyzed by ANCOVA and the obtained findings revealed that both explicit and implicit corrective feedback are effective in reducing pronunciation errors showing significant differences between experimental and control groups. Additionally, the outcomes showed that immediate implicit and immediate explicit corrective feedback have similar effects on reduction of pronunciation errors. The same result comes up regarding the delayed effect of explicit feedback in comparison with delayed effect of implicit feedback. However, the delayed effect of explicit and implicit CF lowered comparing to their immediate effect due to time effect. Pedagogically
Bowe, Melissa; Sellers, Tyra P.
2018-01-01
The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…
Decoding linear error-correcting codes up to half the minimum distance with Gröbner bases
Bulygin, S.; Pellikaan, G.R.; Sala, M.; Mora, T.; Perret, L.; Sakata, S.; Traverso, C.
2009-01-01
In this short note we show how one can decode linear error-correcting codes up to half the minimum distance via solving a system of polynomial equations over a finite field. We also explicitly present the reduced Gröbner basis for the system considered.
Zimmer, Patricia Moore
2001-01-01
Describes the author's experiences directing a play translated and acted in Korean. Notes that she had to get familiar with the sound of the language spoken fluently, to see how an actor's thought is discerned when the verbal language is not understood. Concludes that so much of understanding and communication unfolds in ways other than with…
International Nuclear Information System (INIS)
Li, Ke; Lin, Boqiang
2016-01-01
Enhancing energy technology innovation performance, which is widely measured by energy technology patents through energy technology research and development (R&D) activities, is a fundamental way to implement energy conservation and emission abatement. This study analyzes the effects of R&D investment activities, economic growth, and energy price on energy technology patents in 30 provinces of China over the period 1999–2013. Several unit root tests indicate that all the above variables are generated by panel unit root processes, and a panel cointegration model is confirmed among the variables. In order to ensure the consistency of the estimators, the Fully-Modified OLS (FMOLS) method is adopted, and the results indicate that R&D investment activities and economic growth have positive effects on energy technology patents while energy price has a negative effect. However, the panel error correction models indicate that the cointegration relationship helps to promote economic growth, but it reduces R&D investment and energy price in the short term. Therefore, market-oriented measures including financial support and technical transformation policies for the development of low-carbon energy technologies, an effective energy price mechanism, especially the targeted fossil-fuel subsidies and their die away mode are vital in promoting China's energy technology innovation. - Highlights: • Energy technology patents in China are analyzed. • Relationship between energy patents and funds for R&D activities are analyzed. • China's energy price system hinders energy technology innovation. • Some important implications for China's energy technology policy are discussed. • A panel cointegration model with FMOLS estimator is used.
Directory of Open Access Journals (Sweden)
Tshepo S. Masipa
2018-05-01
Full Text Available Orientation: From the Growth, Employment and Redistribution (GEAR strategy of 1996 to the currently implemented National Development Plan (NDP, the need to attract more foreign investors and promote exports in pursuit of economic growth and job creation has been emphasised. Research purpose: It is within this context that the purpose of this article was to determine the nexus between foreign direct investment (FDI inflows and economic growth from 1980 to 2014. Research design, approach and method: The vector error correction model is employed to determine and estimate the long-run relationship between the variables in the model. Main findings: From the findings, it was found that economic growth shares a positive relationship with both FDIs and the real effective exchange rate, while sharing a negative long-run relationship with government expenditure. Practical and managerial implications: The article contributes towards the on going debates on the impact of FDIs on economic growth and job creation in the recipient countries. Accordingly, its findings reinforce the importance of attracting FDIs in South Africa and to what extent they affect economic growth and employment. Contribution or value-add: From a policy perspective, the attraction of foreign investors must target sources that can create jobs and boost the South African economy. It is vital for the government to strengthen its machinery to fight corruption to create an environment conducive for foreign investors. Hence, this article suggests that South Africa’s capacity to grow and create jobs also depends on the country’s performance to enhance gross domestic product growth and attract more FDIs. The attraction of FDIs should, however, not be seen as an end in itself but also as a means of supporting other initiatives such as eradicating poverty and inequalities in South Africa.
Directory of Open Access Journals (Sweden)
Joseph D. Monaco
2011-09-01
Full Text Available Mammals navigate by integrating self-motion signals (‘path integration’ and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6–10 Hz rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of ‘partial remapping’ responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.
International Nuclear Information System (INIS)
Nogueira, J; Lecuona, A; Nauri, S; Legrand, M; Rodríguez, P A
2009-01-01
PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ∼0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes
Nogueira, J.; Lecuona, A.; Nauri, S.; Legrand, M.; Rodríguez, P. A.
2009-07-01
PIV (particle image velocimetry) is a measurement technique with growing application to the study of complex flows with relevance to industry. This work is focused on the assessment of some significant PIV measurement errors. In particular, procedures are proposed for estimating, and sometimes correcting, errors coming from the sensor geometry and performance, namely peak-locking and contemporary CCD camera read-out errors. Although the procedures are of general application to PIV, they are applied to a particular real case, giving an example of the methodology steps and the improvement in results that can be obtained. This real case corresponds to an ensemble of hot high-speed coaxial jets, representative of the civil transport aircraft propulsion system using turbofan engines. Errors of ~0.1 pixels displacements have been assessed. This means 10% of the measured magnitude at many points. These results allow the uncertainty interval associated with the measurement to be provided and, under some circumstances, the correction of some of the bias components of the errors. The detection of conditions where the peak-locking error has a period of 2 pixels instead of the classical 1 pixel has been made possible using these procedures. In addition to the increased worth of the measurement, the uncertainty assessment is of interest for the validation of CFD codes.
Directory of Open Access Journals (Sweden)
Guo Xiao-Mao
2010-10-01
Full Text Available Abstract Background The cone beam CT (CBCT guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT guided accelerated partial breast irradiation (APBI. Methods Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR, 3.1 mm and 2.3 mm in the superior-inferior (SI, and 2.3 mm and 2.0 mm in the anterior-posterior (AP directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatments delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10
International Nuclear Information System (INIS)
Cai, Gang; Hu, Wei-Gang; Chen, Jia-Yi; Yu, Xiao-Li; Pan, Zi-Qiang; Yang, Zhao-Zhi; Guo, Xiao-Mao; Shao, Zhi-Min; Jiang, Guo-Liang
2010-01-01
The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatments delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction
International Nuclear Information System (INIS)
Hirose, Yoshinori; Tomita, Tsuneyuki; Kitsuda, Kenji; Notogawa, Takuya; Miki, Katsuhito; Nakamura, Mitsuhiro; Nakamura, Kiyonao; Ishigaki, Takashi
2014-01-01
We investigated the effect of different set-up error corrections on dose-volume metrics in intensity-modulated radiotherapy (IMRT) for prostate cancer under different planning target volume (PTV) margin settings using cone-beam computed tomography (CBCT) images. A total of 30 consecutive patients who underwent IMRT for prostate cancer were retrospectively analysed, and 7-14 CBCT datasets were acquired per patient. Interfractional variations in dose-volume metrics were evaluated under six different set-up error corrections, including tattoo, bony anatomy, and four different target matching groups. Set-up errors were incorporated into planning the isocenter position, and dose distributions were recalculated on CBCT images. These processes were repeated under two different PTV margin settings. In the on-line bony anatomy matching groups, systematic error (Σ) was 0.3 mm, 1.4 mm, and 0.3 mm in the left-right, anterior-posterior (AP), and superior-inferior directions, respectively. Σ in three successive off-line target matchings was finally comparable with that in the on-line bony anatomy matching in the AP direction. Although doses to the rectum and bladder wall were reduced for a small PTV margin, averaged reductions in the volume receiving 100% of the prescription dose from planning were within 2.5% under all PTV margin settings for all correction groups, with the exception of the tattoo set-up error correction only (≥ 5.0%). Analysis of variance showed no significant difference between on-line bony anatomy matching and target matching. While variations between the planned and delivered doses were smallest when target matching was applied, the use of bony anatomy matching still ensured the planned doses. (author)
Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano
2017-01-01
Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication both in monetary cost and need for greater amount of sample which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Analysis of measured data of human body based on error correcting frequency
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry is to measure all parts of human body surface, and the measured data is the basis of analysis and study of the human body, establishment and modification of garment size and formulation and implementation of online clothing store. In this paper, several groups of the measured data are gained, and analysis of data error is gotten by analyzing the error frequency and using analysis of variance method in mathematical statistics method. Determination of the measured data accuracy and the difficulty of measured parts of human body, further studies of the causes of data errors, and summarization of the key points to minimize errors possibly are also mentioned in the paper. This paper analyses the measured data based on error frequency, and in a way , it provides certain reference elements to promote the garment industry development.
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 to this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time dependent errors present in the data that are introduced by the drift in the satellite orbital geometry arises from the diurnal cycle in temperature which is the drift related change in the calibration of the MSU. In order to analyze the nature of these drift related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the
International Nuclear Information System (INIS)
Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.
2002-01-01
Full text: Complex electronic systems require the utmost reliability, especially when the storage and retrieval of critical data demands faultless operation, the system designer must strive for the highest reliability possible. Extra effort must be expended to achieve this reliability. Fortunately, not all systems must operate with these ultra reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But the applications like nuclear reactors, medical and avionics are the areas where system failure may prove to have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation etc. These errors are called soft errors or transient errors, since they don't cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS
Directory of Open Access Journals (Sweden)
Mohamad Iwan
2005-06-01
This research has successfully attained the following results: (1 type I error is in fact 59,83 times more costly compared to type II error, (2 22 ratios distinguish between bankrupt and non-bankrupt groups, (3 2 financial ratios proved to be effective in predicting bankruptcy, (4 prediction using ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year compared to prediction using Hair et al. optimum cutting score, (5 Although prediction using Hair et al. optimum cutting score is more accurate, prediction using ZETAc optimal cut-off score proved to be able to minimize cost incurred from classification errors.
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
International Nuclear Information System (INIS)
Kim, Isaac H.
2011-01-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting code consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interaction of overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits are equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survive at finite temperature; and (iv) behave as classical memory at finite temperature.
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting code consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from ferromagnetic interaction of overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits are equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survive at finite temperature; and (iv) behave as classical memory at finite temperature.
Directory of Open Access Journals (Sweden)
2014-01-01
Full Text Available Regarding Tagler, M. J., and Jeffers, H. M. (2013. Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph, and for attitudes toward partner infidelity (second paragraph. Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74 than they rated emotional infidelity (M = 4.32, SD = 0.92, F(1, 322 = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48 and emotional infidelity (M = 4.76, SD = 0.57, F(1, 322 = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70 more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37, F(1, 322 = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62 more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10, this difference was not as large and thus in the evolutionary theory supportive direction, F(1, 322 = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].
Cohen, Aaron M.
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...
Directory of Open Access Journals (Sweden)
Klenina Anastasiya Aleksandrovna
2015-12-01
Full Text Available The authors have made a mistake in calculating the volume of eggs in the clutches of snake family Natrix. In this article we correct the error. As a result, it was revealed, that the volume of eggs positively correlates with a female length and its mass, as well as with the quantity of eggs in the clutches. There is a positive correlation between the characteristics of newborn snakes (length and mass and the volume of eggs, from which they hatched.
Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A
2013-09-01
Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.
Batistatou, Evridiki; McNamee, Roseanne
2012-12-10
It is known that measurement error leads to bias in assessing exposure effects, which can however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design', may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. Both in 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended in contrast to our implementation of this method; the 'problematic' implementation of regression calibration although substantially improved with use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. Both in 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
International Nuclear Information System (INIS)
Ciofini, Ilaria; Adamo, Carlo; Chermette, Henry
2005-01-01
Corrections to the self-interaction error which is rooted in all standard exchange-correlation functionals in the density functional theory (DFT) have become the object of an increasing interest. After an introduction reminding the origin of the self-interaction error in the DFT formalism, and a brief review of the self-interaction free approximations, we present a simple, yet effective, self-consistent method to correct this error. The model is based on an average density self-interaction correction (ADSIC), where both exchange and Coulomb contributions are screened by a fraction of the electron density. The ansatz on which the method is built makes it particularly appealing, due to its simplicity and its favorable scaling with the size of the system. We have tested the ADSIC approach on one of the classical pathological problem for density functional theory: the direct estimation of the ionization potential from orbital eigenvalues. A large set of different chemical systems, ranging from simple atoms to large fullerenes, has been considered as test cases. Our results show that the ADSIC approach provides good numerical values for all the molecular systems, the agreement with the experimental values increasing, due to its average ansatz, with the size (conjugation) of the systems
International Nuclear Information System (INIS)
Salas, P.J.; Sanz, A.L.
2004-01-01
In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10 -4 ≤ε≤10 -2 for memory errors and 3x10 -5 ≤γ/7≤10 -2 for gate errors. After the correction we calculate the fidelity as a quality criterion for the qubit recovered. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits
DEFF Research Database (Denmark)
Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin
2016-01-01
[This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.].......[This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]....
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
The correction of linear lattice gradient errors using an AC dipole
Energy Technology Data Exchange (ETDEWEB)
Wang,G.; Bai, M.; Litvinenko, V.N.; Satogata, T.
2009-05-04
Precise measurement of optics from coherent betatron oscillations driven by ac dipoles have been demonstrated at RHIC and the Tevatron. For RHIC, the observed rms beta-beat is about 10%. Reduction of beta-beating is an essential component of performance optimization at high energy colliders. A scheme of optics correction was developed and tested in the RHIC 2008 run, using ac dipole optics for measurement and a few adjustable trim quadruples for correction. In this scheme, we first calculate the phase response matrix from the. measured phase advance, and then apply singular value decomposition (SVD) algorithm to the phase response matrix to find correction quadruple strengths. We present both simulation and some preliminary experimental results of this correction.
Direct focusing error correction with ring-wide TBT beam position data
International Nuclear Information System (INIS)
Yang, M.J.
2011-01-01
Turn-By-Turn (TBT) betatron oscillation data is a very powerful tool in studying machine optics. Hundreds and thousands of turns of free oscillations are taken in just few tens of milliseconds. With beam covering all positions and angles at every location TBT data can be used to diagnose focusing errors almost instantly. This paper describes a new approach that observes focusing error collectively over all available TBT data to find the optimized quadrupole strength, one location at a time. Example will be shown and other issues will be discussed. The procedure presented clearly has helped to reduce overall deviations significantly, with relative ease. Sextupoles, being a permanent feature of the ring, will need to be incorporated into the model. While cumulative effect from all sextupoles around the ring may be negligible on turn-to-turn basis it is not so in this transfer line analysis. It should be noted that this procedure is not limited to looking for quadrupole errors. By modifying the target of minimization it could in principle be used to look for skew quadrupole errors and sextupole errors as well.
Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.
Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua
2017-07-20
Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.
Directory of Open Access Journals (Sweden)
Masson Lindsey F
2011-10-01
Full Text Available Abstract Background The Public Population Project in Genomics (P3G is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aim of; 1 reviewing the published methodological literature on measurement error correction methods with assumptions and methods of implementation; 2 reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; 3 disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; 4 with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing a inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a the methodological literature on methods to correct for measurement error in epidemiological studies; and (b studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
International Nuclear Information System (INIS)
Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy
2006-01-01
We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling an access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from a plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is the soft error in the winner requestor's information.
Correcting errors in a quantum gate with pushed ions via optimal control
International Nuclear Information System (INIS)
Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso
2010-01-01
We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.
Correcting errors in a quantum gate with pushed ions via optimal control
DEFF Research Database (Denmark)
Poulsen, Uffe Vestergaard; Sklarz, Shlomo; Tannor, David
2010-01-01
We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types...... of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high...
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
Directory of Open Access Journals (Sweden)
Roosaleh Laksono T.Y.
2016-04-01
Full Text Available Abstract. This study aims to analyze the effect of interest rate, inflation, and national income on rupiah exchange rate against dollar both long-term balanced relationship and short-run balance of empirical data from 1980-2015 (36 years using secondary data. The research method used is multiple linear regression methods of OLS. This research method used to approach with cointegration and error correction model (ECM by previously passing some other stages of statistical testing. The results of the study with cointegration (Johansen Cointegration test indicate that all the independent variables (inflation, national income, and interest rate and the non-free variable (exchange rate have a long-term equilibrium relationship, as evidenced by the test results Where the trace statistic value of 102.1727 is much greater than the critical value (5% of 47.85613. In addition, the result of Maximum Eigenvalue Statistic is the result of 36.7908 greater than the critical value of 5%. 27,584434. While the results of the model error correction test (ECM that only variable inflation, interest rates and residual significant, while the variable national income is not significant. This means that the inflation and interest rate variables have a short-run relationship to the exchange rate, it is seen from the Probability (Prob. Value of each variable is 0,05 (5%, besides the residual coefficient on the ECM test result is -0,732447, it shows that error correction term is 73,24% and significant. Keywords: Interest rate; Nasional income; Inflation; Exchange rate; Cointegration; Error Correction Model. Abstrak. Penelitian ini bertujuan untuk menganalisa pengaruh Suku bunga, inflasi, dan Pendapatan Nasional terhadap nilai tukar rupiah terhadap dollar baik hubungan keseimbangan jangka panjang maupun keseimbangan jangka pendek data empiris tahun 1980-2015 (36 tahun dengan menggunakan data sekunder. Metode penelitian yang digunakan adalah regresi
Ridderikhoff, A.; Peper, C.E.; Beek, P.J.
2007-01-01
Although previous studies indicated that the stability properties of interlimb coordination largely result from the integrated timing of efferent signals to both limbs, they also depend on afference-based interactions. In the present study, we examined contributions of afference-based error
Learning Correct Responses and Errors in the Hebb Repetition Effect: Two Faces of the Same Coin
Couture, Mathieu; Lafond, Daniel; Tremblay, Sebastien
2008-01-01
In a serial recall task, the "Hebb repetition effect" occurs when recall performance improves for a sequence repeated throughout the experimental session. This phenomenon has been replicated many times. Nevertheless, such cumulative learning seldom leads to perfect recall of the whole sequence, and errors persist. Here the authors report…
Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)
2001-01-01
A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. This method is carried out on measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures doppler shifts in the two different O2 absorption bands (alpha and B) and the retrieved products are tangent point Line-of-Sight wind component (level 2 retrieval) and UV winds (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line-of-sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme carried out on the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.
Correction of longitudinal errors in accelerators for heavy-ion fusion
International Nuclear Information System (INIS)
Sharp, W.M.; Callahan, D.A.; Barnard, J.J.; Langdon, A.B.; Fessenden, T.J.
1993-01-01
Longitudinal space-charge waves develop on a heavy-ion inertial-fusion pulse from initial mismatches or from inappropriately timed or shaped accelerating voltages. Without correction, waves moving backward along the beam can grow due to the interaction with their resistivity retarded image fields, eventually degrading the longitudinal emittance. A simple correction algorithm is presented here that uses a time-dependent axial electric field to reverse the direction of backward-moving waves. The image fields then damp these forward-moving waves. The method is demonstrated by fluid simulations of an idealized inertial-fusion driver, and practical problems in implementing the algorithm are discussed
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms; the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, a geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion corrected B3LYP with the Boys and Bernadi counterpoise correction (bCP)), by taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as 'a golden standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit it is a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources for limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of radar wave is the most important procedure for the ionospheric refraction correction. Traditionally, the ionospheric model and the ionospheric detection instruments, like ionosonde or GPS receivers, are employed for obtaining the electron density. However, both methods are not capable of satisfying the requirements of correction accuracy for the advanced space target radar system. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which can generate accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P band radar located in midlatitude China. The experimental results present that the accuracy of this novel technique is more accurate than the traditional ionospheric model correction. The technique proposed in this study is very promising for the high-accuracy radar detection and tracking of objects in geospace.
The Effect of Direct and Indirect Corrective Feedback on Iranian EFL Learners' Spelling Errors
Ghandi, Maryam; Maghsoudi, Mojtaba
2014-01-01
The aim of the current study was to investigate the impact of indirect corrective feedback on promoting Iranian high school students' spelling accuracy in English (as a foreign language). It compared the effect of direct feedback with indirect feedback on students' written work dictated by their teacher from Chicken Soup for the Mother and…
Correcting electrode modelling errors in EIT on realistic 3D head models.
Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo
2015-12-01
Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously to conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.
Energy Technology Data Exchange (ETDEWEB)
Van Deventer, G.; Thomson, J.; Graham, L.S.; Thomasson, D.; Meyer, J.H.
1983-03-01
The study was undertaken to validate phantom-derived corrections for errors in collimation due to septal penetration or scatter, which vary with the size of the gastric region of interest (ROI). Six volunteers received 495 ml of 20% glucose labeled with both In-113m DTPA and Tc-99m DTPA. Gastric emptying of each nuclide was monitored by gamma camera as well as by periodic removal and reinstillation of the meal through a gastric tube. Serial aspirates from the gastric tube confirmed parallel emptying of In-113m and Tc-99m, but analyses of gamma-camera data yielded parallel emptying only when adequate corrections were made for errors in collimation. Analyses of ratios of gastric counts from anterior to posterior, as well as analyses of peak-to-scatter ratios, revealed only small, insignificant anteroposterior movement of the tracers within the stomach during emptying. Accordingly, there was no significant improvement in the camera data when corrections were made for attenuation with intragastric depth.
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Energy Technology Data Exchange (ETDEWEB)
Wilkening, Jon
2008-12-10
Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio {var_epsilon} of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O({var_epsilon}{sup 2k+2}) and h enters into the error bound only through its first and third inverse moments {integral}{sub 0}{sup 1} h(x){sup -m} dx, m = 1,3 and via the max norms {parallel} 1/{ell}! h{sup {ell}-1}{partial_derivative}{sub x}{sup {ell}}h{parallel}{sub {infinity}}, 1 {le} {ell} {le} 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
GEOS-C altimeter attitude bias error correction. [gate-tracking radar
Marini, J. W.
1974-01-01
A pulse-limited split-gate-tracking radar altimeter was flown on Skylab and will be used aboard GEOS-C. If such an altimeter were to employ a hypothetical isotropic antenna, the altimeter output would be independent of spacecraft orientation. To reduce power requirements the gain of the altimeter antenna proposed is increased to the point where its beamwidth is only a few degrees. The gain of the antenna consequently varies somewhat over the pulse-limited illuminated region of the ocean below the altimeter, and the altimeter output varies with antenna orientation. The error introduced into the altimeter data is modeled empirically, but close agreements with the expected errors was not realized. The attitude error effects expected with the GEOS-C altimeter are modelled using a form suggested by an analytical derivation. The treatment is restricted to the case of a relatively smooth sea, where the height of the ocean waves are small relative to the spatial length (pulse duration times speed of light) of the transmitted pulse.
Semiparametric modeling: Correcting low-dimensional model error in parametric models
International Nuclear Information System (INIS)
Berry, Tyrus; Harlim, John
2016-01-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.
Mayer, István; Bakó, Imre
2017-05-09
The problem of performing many-body decompositions of energy is considered in the case when BSSE corrections are also performed. It is discussed that the two different schemes that have been proposed go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that from the physical point of view the "hierarchical" scheme of Valiron and Mayer should be preferred and not the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of other subsystems in the cluster.
DEFF Research Database (Denmark)
Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo
2012-01-01
We consider Distributed Video Coding (DVC) in presence of communication errors. First, we present DVC side information generation based on a new method of optical flow driven frame interpolation, where a highly optimized TV-L1 algorithm is used for the flow calculations and combine three flows....... Thereafter methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation includes a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three...... flows are combined in one solution. The proposed frame interpolation method consistently outperforms an overlapped block motion compensation scheme and a previous TV-L1 optical flow frame interpolation method with an average PSNR improvement of 1.3 dB and 2.3 dB respectively. For a GOP size of 2...
Correction of errors in scale values for magnetic elements for Helsinki
Directory of Open Access Journals (Sweden)
L. Svalgaard
2014-06-01
Full Text Available Using several lines of evidence we show that the scale values of the geomagnetic variometers operating in Helsinki in the 19th century were not constant throughout the years of operation 1844–1897. Specifically, the adopted scale value of the horizontal force variometer appears to be too low by ~ 30% during the years 1866–1874.5 and the adopted scale value of the declination variometer appears to be too low by a factor of ~ 2 during the interval 1885.8–1887.5. Reconstructing the heliospheric magnetic field strength from geomagnetic data has reached a stage where a reliable reconstruction is possible using even just a single geomagnetic data set of hourly or daily values. Before such reconstructions can be accepted as reliable, the underlying data must be calibrated correctly. It is thus mandatory that the Helsinki data be corrected. Such correction has been satisfactorily carried out and the HMF strength is now well constrained back to 1845.
Directory of Open Access Journals (Sweden)
PALIU – POPA LUCIA
2017-12-01
Full Text Available There are still different views on the intangibility of the opening balance sheet at global level in the process of convergence and accounting harmonization. Fnding a total difference between the Anglo-Saxon accounting system and that of the Western European continental influence, in the sense that the former is less rigid in regard with the application of the principle of intangibility, whereas that of mainland inspiration apply the provisions of this principle in its entirety. Looking from this perspective and taking into account the major importance of the financial statements that are intended to provide information for all categories of users, ie both for managers and users external to the entity whose position does not allow them to request specific reports, we considered useful to conduct a study aimed at correcting the errors in the context of compliance with the opening balance sheet intangibility principle versus the need to adjust the comparative information on the financial position, financial performance and change in the financial position generated by the correction of the errors in the previous years. In this regard, we will perform a comparative analysis of the application of the intangibility principle both in the two major accounting systems and at international level and we will approach issues related to the correction of the errors in terms of the main differences between the provisions of the continental accounting regulations (represented by the European and national ones in our approach, Anglo-Saxon and those of the international referential on the opening balance sheet intangibility.
Directory of Open Access Journals (Sweden)
Benoit Macq
2008-07-01
Full Text Available Based on the analysis of real mobile ad hoc network (MANET traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS to wireless clients is demonstrated.
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Kamiya, Y.; Katoh, M.; Honjo, I.
1987-01-01
A future ring with a low emittance and large circumference, specifically dedicated to a synchrotron light source, will have a large chromaticity, so that it is important to employ a sophisticated sextupole correction as well as the design of linear lattice to obtain the stable beam. The authors tried a method of sextupole correction for a lattice with a large chromaticity and small dispersion function. In such a lattice the sextupole magnets are obliged to become large in strength to compensate the chromaticity. Then the nonlinear effects of the sextupole magnets will become more serious than their chromatic effects. Furthermore, a ring with strong quadrupole magnets to get a very small emittance and with strong sextupole magnets to compensate the generated chromaticity will be very sensitive to their magnetic errors. The authors also present simple formulae to evaluate the effects on the beam parameters. The details will appear in a KEK Report
Nandi, Prithwish Kumar; Valsakumar, M C; Chandra, Sharat; Sahu, H K; Sundar, C S
2010-09-01
We calculate properties like equilibrium lattice parameter, bulk modulus and monovacancy formation energy for nickel (Ni), iron (Fe) and chromium (Cr) using Kohn-Sham density functional theory (DFT). We compare the relative performance of local density approximation (LDA) and generalized gradient approximation (GGA) for predicting such physical properties for these metals. We also make a relative study between two different flavors of GGA exchange correlation functional, namely PW91 and PBE. These calculations show that there is a discrepancy between DFT calculations and experimental data. In order to understand this discrepancy in the calculation of vacancy formation energy, we introduce a correction for the surface intrinsic error corresponding to an exchange correlation functional using the scheme implemented by Mattsson et al (2006 Phys. Rev. B 73 195123) and compare the effectiveness of the correction scheme for Al and the 3d transition metals.
Numerical Predictions of Static-Pressure-Error Corrections for a Modified T-38C Aircraft
2014-12-15
but the more modern work of Latif et al . [11] demonstrated that compensated Pitot-static probes can be simulated accurately for subsonic and...what was originally estimated from CFD simulations in Bhamidipati et al . [3] by extracting the static-pressure error in front of the production probe...Aerodynamically Compensating Pitot Tube,” Journal of Aircraft, Vol. 25, No. 6, 1988, pp. 544–547. doi:10.2514/3.45620 [11] Latif , A., Masud, J., Sheikh, S. R., and
Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.
2017-12-01
The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlie many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural network that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.
Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems
DEFF Research Database (Denmark)
Li, Bomin
, a denser WDM grid changes the shape of the BER curve based on the analysis of the experimental results, which requires a stronger FEC code. Furthermore, a proof-of-the-concept hardware implementation is presented. The tradeoff between the code length, the CG and the complexity requires more consideration......-complexity low-power-consumption FEC hardware implementation plays an important role in the next generation energy efficient networks. Thirdly, a joint research is required for FEC integrated applications as the error distribution in channels relies on many factors such as non-linearity in long distance optical...... and their associated experimental demonstration and hardware implementation. The demonstrated high CG, flexibility, robustness and scalability reveal the important role of FEC techniques in the next generation high-speed, high-capacity, high performance and energy-efficient fiber-optic data transmission networks....
Energy Technology Data Exchange (ETDEWEB)
Young, M.; Antonarakis, S.E. [Univ. of Geneva (Switzerland); Inaba, Hiroshi [Tokyo Medical College (Japan)] [and others
1997-03-01
Although the molecular defect in patients in a Japanese family with mild to moderately severe hemophilia A was a deletion of a single nucleotide T within an A{sub 8}TA{sub 2} sequence of exon 14 of the factor VIII gene, the severity of the clinical phenotype did not correspond to that expected of a frameshift mutation. A small amount of functional factor VIII protein was detected in the patient`s plasma. Analysis of DNA and RNA molecules from normal and affected individuals and in vitro transcription/translation suggested a partial correction of the molecular defect, because of the following: (i) DNA replication/RNA transcription errors resulting in restoration of the reading frame and/or (ii) {open_quotes}ribosomal frameshifting{close_quotes} resulting in the production of normal factor VIII polypeptide and, thus, in a milder than expected hemophilia A. All of these mechanisms probably were promoted by the longer run of adenines, A{sub 10} instead of A{sub 8}TA{sub 2}, after the delT. Errors in the complex steps of gene expression therefore may partially correct a severe frameshift defect and ameliorate an expected severe phenotype. 36 refs., 6 figs.
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
Feedback correction of injection errors using digital signal-processing techniques
Directory of Open Access Journals (Sweden)
N. S. Sereno
2007-01-01
Full Text Available Efficient transfer of electron beams from one accelerator to another is important for 3rd-generation light sources that operate using top-up. In top-up mode, a constant amount of charge is injected at regular intervals into the storage ring to replenish beam lost primarily due to Touschek scattering. Top-up therefore requires that the complex of injector accelerators that fill the storage ring transport beam with a minimum amount of loss. Injection can be a source of significant beam loss if not carefully controlled. In this note we describe a method of processing injection transient signals produced by beam-position monitors and using the processed data in feedback. Feedback control using the technique described here has been incorporated in the Advanced Photon Source (APS booster synchrotron to correct injection transients.
A real-time error-free color-correction facility for digital consumers
Shaw, Rodney
2008-01-01
It has been well known since the earliest days of color photography that color-balance in general, and facial reproduction (flesh tones) in particular, are of dominant interest to the consumer, and significant research resources have been expended in satisfying this need. The general problem is a difficult one, spanning the factors that govern perception and personal preference, the physics and chemistry of color reproduction, as well as wide field of color measurement specification, and analysis. However, with the advent of digital photography and its widespread acceptance in the consumer market, and with the possibility of a much greater degree of individual control over color reproduction, the field is taking on a new consumer-driven impetus, and the provision of user facilities for preferred color choice now constitutes an intense field of research. In addition, due to the conveniences of digital technology, the collection of large data bases and statistics relating to individual color preferences have now become a relatively straightforward operation. Using a consumer preference approach of this type, we have developed a user-friendly facility whereby unskilled consumers may manipulate the color of their personal digital images according to their preferred choice. By virtue of its ease of operation and the real-time nature of the color-correction transforms, this facility can readily be inserted anywhere a consumer interacts with a digital image, from camera, printer, or scanner, to web or photo-kiosk. Here the underlying scientific principles are explored in detail, and these are related to the practical color-preference outcomes. Examples are given of the application to the correction of images with unsatisfactory color balance, and especially to flesh tones and faces, and the nature of the consumer controls and their corresponding image transformations are explored.
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
Directory of Open Access Journals (Sweden)
Xiao-zhe Bai
2017-01-01
Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error-correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression, and, then, the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.
Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent
2016-04-01
Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2008-07-01
We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.
Directory of Open Access Journals (Sweden)
Shih-Tsun Chang
2015-01-01
Full Text Available Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV], were different among soft tennis adolescent athletes with normal vision (Group A, with refractive error and corrected with (Group B and without eyeglasses (Group C. Setting and Design: A cross-section study was conducted. Soft tennis athletes aged 10–13 who played softball tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months were recruited. Materials and Methods: DPs were measured in an absolute deviation (mm between a moving rod and fixing rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s using electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worse to 10 (best using ATHLEVISION software. Statistical Analysis: Chi-square test and Kruskal–Wallis test was used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C were enrolled in this study. All four items of DP showed significant difference among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021. PV displayed significant difference among the three study groups (P = 0.0044. There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significant better DP and PV were seen among soft tennis adolescent athletes with normal vision than those with refractive error regardless whether they had eyeglasses corrected. On the other hand, DVA, EM, and MV were similar among the three study groups.
Correction of thickness measurement errors for two adjacent sheet structures in MR images
International Nuclear Information System (INIS)
Cheng Yuanzhi; Wang Shuguo; Sato, Yoshinobu; Nishii, Takashi; Tamura, Shinichi
2007-01-01
We present a new method for measuring the thickness of two adjacent sheet structures in MR images. In the hip joint, in which the femoral and acetabular cartilages are adjacent to each other, a conventional measurement technique based on the second derivative zero crossings (called the zero-crossings method) can introduce large underestimation errors in measurements of cartilage thickness. In this study, we have developed a model-based approach for accurate thickness measurement. We model the imaging process for two adjacent sheet structures, which simulate the two articular cartilages in the hip joint. This model can be used to predict the shape of the intensity profile along the sheet normal orientation. Using an optimization technique, the model parameters are adjusted to minimize the differences between the predicted intensity profile and the actual intensity profiles observed in the MR data. The set of model parameters that minimize the difference between the model and the MR data yield the thickness estimation. Using three phantoms and one normal cadaveric specimen, the usefulness of the new model-based method is demonstrated by comparing the model-based results with the results generated using the zero-crossings method. (author)
Energy Technology Data Exchange (ETDEWEB)
McGraw R.
2012-03-01
Moment methods are finding increasing usage for simulations of particle population balance in box models and in more complex flows including two-phase flows. These highly efficient methods have nevertheless had little impact to date for multi-moment representation of aerosols and clouds in atmospheric models. There are evidently two reasons for this: First, atmospheric models, especially if the goal is to simulate climate, tend to be extremely complex and take many man-years to develop. Thus there is considerable inertia to the implementation of novel approaches. Second, and more fundamental, the nonlinear transport algorithms designed to reduce numerical diffusion during advection of various species (tracers) from cell to cell, in the typically coarse grid arrays of these models, can and occasionally do fail to preserve correlations between the moments. Other correlated tracers such as isotopic abundances, composition of aerosol mixtures, hydrometeor phase, etc., are subject to this same fate. In the case of moments, this loss of correlation can and occasionally does give rise to unphysical moment sets. When this happens the simulation can come to a halt. Following a brief description and review of moment methods, the goal of this paper is to present two new approaches that both test moment sequences for validity and correct them when they fail. The new approaches work on individual grid cells without requiring stored information from previous time-steps or neighboring cells.
Energy Technology Data Exchange (ETDEWEB)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
2017-04-01
We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions as opposed to its parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
Development of a new error field correction coil (C-coil) for DIII-D
International Nuclear Information System (INIS)
Robinson, J.I.; Scoville, J.T.
1995-12-01
The C-coil recently installed on the DIII-D tokamak was developed to reduce the error fields created by imperfections in the location and geometry of the existing coils used to confine, heat, and shape the plasma. First results from C-coil experiments include stable operation in a 1.6 MA plasma with a density less than 1.0 x 10 13 cm -3 , nearly a factor of three lower density than that achievable without the C-coil. The C-coil has also been used in magnetic braking of the plasma rotation and high energy particle confinement experiments. The C-coil system consists of six individual saddle coils, each 60 degree wide toroidally, spanning the midplane of the vessel with a vertical height of 1.6 m. The coils are located at a major radius of 3.2 m, just outside of the toroidal field coils. The actual shape and geometry of each coil section varied somewhat from the nominal dimensions due to the large number of obstructions to the desired coil path around the already crowded tokamak. Each coil section consists of four turns of 750 MCM insulated copper cable banded with stainless steel straps within the web of a 3 in. x 3 in. stainless steel angle frame. The C-coil structure was designed to resist peak transient radial forces (up to 1,800 Nm) exerted on the coil by the toroidal and ploidal fields. The coil frames were supported from existing poloidal field coil case brackets, coil studs, and various other structures on the tokamak
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to magnify the aperture and increase the resolution of telescope optical system, the difficulty of which lies in detecting and correcting of co-phase error. This paper presents a method based on stochastic parallel gradient decent algorithm (SPGD) to correct the co-phase error. Compared with the current method, SPGD method can avoid detecting the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality based on double-aperture imaging system, introduced the basic principle of SPGD algorithm, and discuss the influence of SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm is improved with the increase of gain coefficient and disturbance amplitude, but the stability of the algorithm reduced. The adaptive gain coefficient can solve this problem appropriately. This paper's results can provide the theoretical reference for the co-phase error correction of the multi-aperture imaging system.
Is spoken Danish less intelligible than Swedish?
Gooskens, Charlotte; van Heuven, Vincent J.; van Bezooijen, Renee; Pacilly, Jos J. A.
2010-01-01
The most straightforward way to explain why Danes understand spoken Swedish relatively better than Swedes understand spoken Danish would be that spoken Danish is intrinsically a more difficult language to understand than spoken Swedish. We discuss circumstantial evidence suggesting that Danish is
2015-03-01
In the January 2015 issue of Cyberpsychology, Behavior, and Social Networking (vol. 18, no. 1, pp. 3–7), the article "Individual Differences in Cyber Security Behaviors: An Examination of Who Is Sharing Passwords." by Prof. Monica Whitty et al., has an error in wording in the abstract. The sentence in question was originally printed as: Contrary to our hypotheses, we found older people and individuals who score high on self-monitoring were more likely to share passwords. It should read: Contrary to our hypotheses, we found younger people and individuals who score high on self-monitoring were more likely to share passwords. The authors wish to apologize for the error.
Buset, Jonathan M; El-Sahn, Ziad A; Plant, David V
2012-06-18
We demonstrate an improved overlapped-subcarrier multiplexed (O-SCM) WDM PON architecture transmitting over a single feeder using cost sensitive intensity modulation/direct detection transceivers, data re-modulation and simple electronics. Incorporating electronic equalization and Reed-Solomon forward-error correction codes helps to overcome the bandwidth limitation of a remotely seeded reflective semiconductor optical amplifier (RSOA)-based ONU transmitter. The O-SCM architecture yields greater spectral efficiency and higher bit rates than many other SCM techniques while maintaining resilience to upstream impairments. We demonstrate full-duplex 5 Gb/s transmission over 20 km and analyze BER performance as a function of transmitted and received power. The architecture provides flexibility to network operators by relaxing common design constraints and enabling full-duplex operation at BER ∼ 10(-10) over a wide range of OLT launch powers from 3.5 to 8 dBm.
Yuma, Yoshikazu
2010-08-01
This research examined the effect of prison population densities (PPD) on inmate-inmate prison violence rates (PVR) in Japan using one-year-interval time-series data (1972-2006). Cointegration regressions revealed a long-run equilibrium relationship between PPD and PVR. PPD had a significant and increasing effect on PVR in the long-term. Error correction models showed that in the short-term, the effect of PPD was significant and positive on PVR, even after controlling for the effects of the proportions of males, age younger than 30 years, less than one-year incarceration, and prisoner/staff ratio. The results were discussed in regard to (a) differences between Japanese prisons and prisons in the United States, and (b) methodological problems found in previous research.
International Nuclear Information System (INIS)
Paz, Juan Pablo; Roncaglia, Augusto Jose; Saraceno, Marcos
2005-01-01
We analyze and further develop a method to represent the quantum state of a system of n qubits in a phase-space grid of NxN points (where N=2 n ). The method, which was recently proposed by Wootters and co-workers (Gibbons et al., Phys. Rev. A 70, 062101 (2004).), is based on the use of the elements of the finite field GF(2 n ) to label the phase-space axes. We present a self-contained overview of the method, we give insights into some of its features, and we apply it to investigate problems which are of interest for quantum-information theory: We analyze the phase-space representation of stabilizer states and quantum error-correction codes and present a phase-space solution to the so-called mean king problem
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of research facility is reviewed and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
Directory of Open Access Journals (Sweden)
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH (1, 1 model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchangerate stability has a significant positive effect on real export volume, both in the short and the long run.
Maggioni, V.; Massari, C.; Ciabatta, L.; Brocca, L.
2016-12-01
Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to the one of classical precipitation products and could provide a different valid perspective to substitute or improve current rainfall estimates. However, uncertainties in the SM2RAIN product are still not well known and could represent a limitation in utilizing this dataset for hydrological applications. Therefore, quantifying the uncertainty associated with SM2RAIN is necessary for enabling its use. The study is conducted over the Italian territory for a 5-yr period (2010-2014). A number of satellite precipitation error properties, typically used in error modeling, are investigated and include probability of detection, false alarm rates, missed events, spatial correlation of the error, and hit biases. After this preliminary uncertainty analysis, the potential of applying the stochastic rainfall error model SREM2D to correct SM2RAIN and to improve its performance in hydrologic applications is investigated. The use of SREM2D for
Energy Technology Data Exchange (ETDEWEB)
Zhang, JY [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China); Hong, DL [The First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong (China)
2016-06-15
Purpose: The purpose of this study is to investigate the patient set-up error and interfraction target coverage in cervical cancer using image-guided adaptive radiotherapy (IGART) with cone-beam computed tomography (CBCT). Methods: Twenty cervical cancer patients undergoing intensity modulated radiotherapy (IMRT) were randomly selected. All patients were matched to the isocenter using laser with the skin markers. Three dimensional CBCT projections were acquired by the Varian Truebeam treatment system. Set-up errors were evaluated by radiation oncologists, after CBCT correction. The clinical target volume (CTV) was delineated on each CBCT, and the planning target volume (PTV) coverage of each CBCT-CTVs was analyzed. Results: A total of 152 CBCT scans were acquired from twenty cervical cancer patients, the mean set-up errors in the longitudinal, vertical, and lateral direction were 3.57, 2.74 and 2.5mm respectively, without CBCT corrections. After corrections, these were decreased to 1.83, 1.44 and 0.97mm. For the target coverage, CBCT-CTV coverage without CBCT correction was 94% (143/152), and 98% (149/152) with correction. Conclusion: Use of CBCT verfication to measure patient setup errors could be applied to improve the treatment accuracy. In addition, the set-up error corrections significantly improve the CTV coverage for cervical cancer patients.
Fadlallah, Ali; Dirani, Ali; Chelala, Elias; Antonios, Rafic; Cherfan, George; Jarade, Elias
2014-10-01
To evaluate the safety and clinical outcome of combined non-topography-guided photorefractive keratectomy (PRK) and corneal collagen cross-linking (CXL) for the treatment of mild refractive errors in patients with early stage keratoconus. A retrospective, nonrandomized study of patients with early stage keratoconus (stage 1 or 2) who underwent simultaneous non-topography-guided PRK and CXL. All patients had at least 2 years of follow-up. Data were collected preoperatively and postoperatively at the 6-month, 1-year, and 2-year follow-up visit after combined non-topography-guided PRK and CXL. Seventy-nine patients (140 eyes) were included in the study. Combined non-topography-guided PRK and CXL induced a significant improvement in both visual acuity and refraction. Uncorrected distance visual acuity significantly improved from 0.39 ± 0.22 logMAR before combined non-topography-guided PRK and CXL to 0.12 ± 0.14 logMAR at the last follow-up visit (P PRK and CXL (P PRK and CXL is an effective and safe option for correcting mild refractive error and improving visual acuity in patients with early stable keratoconus. Copyright 2014, SLACK Incorporated.
Directory of Open Access Journals (Sweden)
A. Zakerian
2011-12-01
Full Text Available Background and aims Today in many jobs like nuclear, military and chemical industries, human errors may result in a disaster. Accident in different places of the world emphasizes this subject and we indicate for example, Chernobyl disaster in (1986, tree Mile accident in (1974 and Flixborough explosion in (1974.So human errors identification especially in important and intricate systems is necessary and unavoidable for predicting control methods. Methods Recent research is a case study and performed in Zagross Methanol Company in Asalouye (South pars. Walking –Talking through method with process expert and control room operators, inspecting technical documents are used for collecting required information and completing Systematic Human Error Reductive and Predictive Approach (SHERPA worksheets. Results analyzing SHERPA worksheet indicated that, were accepting capable invertebrate errors % 71.25, % 26.75 undesirable errors, % 2 accepting capable(with change errors, % 0 accepting capable errors, and after correction action forecast Level risk to this arrangement, accepting capable invertebrate errors % 0, % 4.35 undesirable errors , % 58.55 accepting capable(with change errors, % 37.1 accepting capable errors . ConclusionFinally this result is comprehension that this method in different industries especially in chemical industries is enforceable and useful for human errors identification that may lead to accident and adventures.
Error Correction of Loudspeakers
DEFF Research Database (Denmark)
Pedersen, Bo Rohde
of a nonlinear feed forward controller. System identification is used for tracking the loudspeaker parameters. Different system identification methods are reviewed, and the investigations ends with a simple FIR based algorithm. Finally, the parameter tracking system is tested with music signals on a 6½ inch......Throughout this thesis, the topic of electrodynamic loudspeaker unit design and modelling are reviewed. The research behind this project has been to study loudspeaker design, based on new possibilities introduced by including digital signal processing, and thereby achieving more freedom...... in loudspeaker unit design. This freedom can be used for efficiency improvements where different loudspeaker design cases show design opportunities. Optimization by size and efficiency, instead of flat frequency response and linearity, is the basis of the loudspeaker efficiency designs studied. In the project...
Directory of Open Access Journals (Sweden)
J. NIEMI
2008-12-01
Full Text Available The objecti e of this study is to increase our understanding of the specification and estimation of agricultural commodity trade models as well as to provide instruments for trade policy analysis. More specifically,the aim is to build a set of dynamic,theory-based econometric models which are able to capture both short-run and long-run effects of income and price changes,and which can be used for prediction and policy simulation under alternati e assumed conditions.A relati ely unrestricted,data determined,econometric modelling approach based on the error correction mechanism is used,in order to emphasise the importance of dynamics of trade functions.Econometric models are constructed for se en agricultural commodities cassa a,cocoa,coconut oil,palm oil,pepper, rubber,and tea exported from the Association of Southeast Asian Nations (ASEANto the European Union (EU.With the aim of providing broad commodity co erage,the intent is to explore whether the chosen modelling approach is able to catch the essentials of the behavioural relationships underlying the specialised nature of each commodity market. The import demand analysis of the study examines two key features:(1the response of EU s agricultural commodity imports to income and price changes,and (2the length of time required for this response to occur.The estimations of the export demand relationships provide tests whether the exporters market shares are influenced by the le el of relati e export price,and whether exports are affected by ariations in the rate of growth of imports.The export supply analysis examines the relati e influence of real price and some non-price factors in stimulating the supply of exports.The lag distribution (the shape and length of the lagis found to be ery critical in export supply relationships,since the effects of price changes usually take a long time to work themselves through and since the transmission of the price effects can be complex.The set of
International Nuclear Information System (INIS)
He Wei; Liu Jianyu; Li Xuan; Li Jianying; Liao Jingmin
2009-01-01
Objective: To evaluate the effect of a breath-motion-correction (BMC) technique in reducing measurement error of the time-density curve (TDC) in hepatic CT perfusion imaging. Methods: Twenty-five patients with suspected liver diseases underwent hepatic CT perfusion scans. The right branch of portal vein was selected as the anatomy of interest and performed BMC to realign image slices for the TDC according to the rule of minimizing the temporal changes of overall structures. Ten ROIs was selected on the right branch of portal vein to generate 10 TDCs each with and without BMC. The values of peak enhancement and the time-to-peak enhancement for each TDC were measured. The coefficients of variation (CV) of peak enhancement and the time-to-peak enhancement were calculated for each patient with and without BMC. Wilcoxon signed ranks test was used to evaluate the difference between the CV of the two parameters obtained with and without BMC. Independent-samples t test was used to evaluate the difference between the values of peak enhancement obtained with and without BMC. Results: The median (quartiles) of CV of peak enhancement with BMC [2.84% (2.10%, 4.57%)] was significantly lower than that without BMC [5.19% (3.90%, 7.27%)] (Z=-3.108,P<0.01). The median (quartiles) of CV of time-to-peak enhancement with BMC [2.64% (0.76%, 4.41%)] was significantly lower than that without BMC [5.23% (3.81%, 7.43%)] (Z=-3.924, P<0.01). In 8 cases, TDC demonstrated statistically significant higher peak enhancement with BMC (P<0.05). Conclusion: By applying the BMC technique we can effectively reduce measurement error for parameters of the TDC in hepatic CT perfusion imaging. (authors)
Ketkar, Amit; Zafar, Maroof K; Banerjee, Surajit; Marquez, Victor E; Egli, Martin; Eoff, Robert L
2012-06-27
Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2'-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2'-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle, which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base-stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase.
Ketkar, Amit; Zafar, Maroof K.; Banerjee, Surajit; Marquez, Victor E.; Egli, Martin; Eoff, Robert L
2012-01-01
Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2′-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2′-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle (χ), which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase. PMID:22632140
2002-01-01
Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.
Towards Adaptive Spoken Dialog Systems
Schmitt, Alexander
2013-01-01
In Monitoring Adaptive Spoken Dialog Systems, authors Alexander Schmitt and Wolfgang Minker investigate statistical approaches that allow for recognition of negative dialog patterns in Spoken Dialog Systems (SDS). The presented stochastic methods allow a flexible, portable and accurate use. Beginning with the foundations of machine learning and pattern recognition, this monograph examines how frequently users show negative emotions in spoken dialog systems and develop novel approaches to speech-based emotion recognition using hybrid approach to model emotions. The authors make use of statistical methods based on acoustic, linguistic and contextual features to examine the relationship between the interaction flow and the occurrence of emotions using non-acted recordings several thousand real users from commercial and non-commercial SDS. Additionally, the authors present novel statistical methods that spot problems within a dialog based on interaction patterns. The approaches enable future SDS to offer m...
International Nuclear Information System (INIS)
Gregory, R.B.
1991-01-01
We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a refernce material with a short lifetime and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the shortlived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps) do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)
2016-02-01
In the October In Our Unit article by Cooper et al, “Against All Odds: Preventing Pressure Ulcers in High-Risk Cardiac Surgery Patients” (Crit Care Nurse. 2015;35[5]:76–82), there was an error in the reference citation on page 82. At the top of that page, reference 18 cited on the second line should be reference 23, which also should be added to the References list: 23. AHRQ website. Prevention and treatment program integrates actionable reports into practice, significantly reducing pressure ulcers in nursing home residents. November 2008. https://innovations.ahrq.gov/profiles/prevention-and-treatment-program-integrates-actionable-reports-practice-significantly. Accessed November 18, 2015
Kelvin Balcombe; George Rapsomanikis
2008-01-01
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Monte Carlo Markov Chain algorithms and compared using Bayesian model selection methods. The results suggest ...
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Regarding Gorelik, G., & Shackelford, T.K. (2011. Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006 [Burch, R. L., & Gallup, G. G., Jr. (2006. The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds., Female infidelity and paternal uncertainty (pp. 141–172. New York: Cambridge University Press.]. Burch and Gallup (2006 reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006 should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.
2002-01-01
The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption. The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.
Directory of Open Access Journals (Sweden)
Sang-Wook Jin
2017-01-01
Full Text Available One of the most important issues in keeping membrane structures in stable condition is to maintain the proper stress distribution over the membrane. However, it is difficult to determine the quantitative real stress level in the membrane after the completion of the structure. The stress relaxation phenomenon of the membrane and the fluttering effect due to strong wind or ponding caused by precipitation may cause severe damage to the membrane structure itself. Therefore, it is very important to know the magnitude of the existing stress in membrane structures for their maintenance. The authors have proposed a new method for separately estimating the membrane stress in two different directions using sound waves instead of directly measuring the membrane stress. The new method utilizes the resonance phenomenon of the membrane, which is induced by sound excitations given through an audio speaker. During such experiment, the effect of the surrounding air on the vibrating membrane cannot be overlooked in order to assure high measurement precision. In this paper, an evaluation scheme for the added mass of membrane with the effect of air on the vibrating membrane and the correction of measurement error is discussed. In addition, three types of membrane materials are used in the experiment in order to verify the expandability and accuracy of the membrane measurement equipment.
Directory of Open Access Journals (Sweden)
Naziruddin Abdullah
2004-06-01
Full Text Available This study adopts the error correction model to empirically investigate the role of real stock prices in the long run-money demand in the Malaysian financial or money market for the period 1977: Q1-1997: Q2. Specifically, an attempt is made to check whether the real narrow money (M1/P is cointegrated with the selected variables like industrial production index (IPI, one-year T-Bill rates (TB12, and real stock prices (RSP. If a cointegration between the variables, i.e., the dependent and independent variables, is found to be the case, it may imply that there exists a long-run co-movement among these variables in the Malaysian money market. From the empirical results it is found that the cointegration between money demand and real stock prices (RSP is positive, implying that in the long run there is a positive association between real stock prices (RSP and demand for real narrow money (M1/P. The policy implication that can be extracted from this study is that an increase in stock prices is likely to necessitate an expansionary monetary policy to prevent nominal income or inflation target from undershooting.
International Nuclear Information System (INIS)
Brezovich, Ivan A.; Pareek, Prem N.; Plott, W. Eugene; Jennelle, Richard L. S.
1997-01-01
Purpose: The purpose of this project was the development of a quality assurance (QA) system that would provide geographically accurate targeting for linac-based stereotactic radiosurgery (LBSR). Methods and Materials: The key component of our QA system is a novel device (Alignment Tool) for expedient measurement of gantry and treatment table excursions (wobble) during rotation. The Alignment Tool replaces the familiar pencil-shaped pointers with a ball pointer that is used with the field light of the accelerator to indicate alignment of beam and target. Wobble is measured prior to each patient treatment and analyzed together with the BRW coordinates of the target by a spreadsheet. The corrections required to compensate for any imprecision are identified, and a printout generated indicating the floor stand coordinates for each couch angle used to place the target at isocenter. Results: The Alignment Tool has an inherent accuracy of measurement better than 0.1 mm. The overall targeting error of our QA method, found by evaluating 177 target simulator films of 55 foci in 40 randomly selected patients, was 0.47 ± 0.23 mm. The Alignment Tool was also valuable during installation of the floor stand and a supplemental collimator for the accelerator. Conclusions: The QA procedure described allows accurate targeting in LBSR, even when couch rotation is imprecise. The Alignment Tool can facilitate the installation of any stereotactic irradiation system, and can be useful for annual QA checks as well as in the installation and commissioning of new accelerators
International Nuclear Information System (INIS)
Wang Wei; Li Jianbin; Hu Hongguang; Ma Zhifang; Xu Min; Fan Tingyong; Shao Qian; Ding Yun
2014-01-01
Objective: To compare the differences in setup error (SE) assessment and correction between the orthogonal kilovolt X-ray images and CBCT in EB-PBI patients during free breathing. Methods: Nineteen patients after breast conserving surgery EB-PBI were recruited. Interfraction SE was acquired using orthogonal kilovolt X-ray setup images and CBCT, after on-line setup correction,calculate the residual error and compare the SE, residual error and setup margin (SM) quantified for orthogonal kilovolt X-ray images and CBCT. Wilcoxon sign-rank test was used to evaluate the differences. Results: The CBCT based SE (systematic error, ∑) was smaller than the orthogonal kilovolt X-ray images based ∑ in AP direction (-1.2 mm vs 2.00 mm; P=0.005), and there was no statistically significant differences for three dimensional directions in random error (σ) (P=0.948, 0.376, 0.314). After on-line setup correction,CBCT decreases setup residual error than the orthogonal kilovolt X-ray images in AP direction (Σ: -0.20 mm vs 0.50 mm, P=0.008; σ: 0.45 mm vs 1.34 mm, P=0.002). And also the CBCT based SM was smaller than orthogonal kilovolt X-ray images based SM in AP direction (Σ: -1.39 mm vs 5.57 mm, P=0.003; σ: 0.00 mm vs 3.2 mm, P=0.003). Conclusions: Compared with kilovolt X-ray images, CBCT underestimate the setup error in the AP direction, but decreases setup residual error significantly.An image-guided radiotherapy and setup error assessment using kilovolt X-ray images for EB-PBI plans was feasible. (authors)
Recognizing Young Readers' Spoken Questions
Chen, Wei; Mostow, Jack; Aist, Gregory
2013-01-01
Free-form spoken input would be the easiest and most natural way for young children to communicate to an intelligent tutoring system. However, achieving such a capability poses a challenge both to instruction design and to automatic speech recognition. To address the difficulties of accepting such input, we adopt the framework of predictable…
International Nuclear Information System (INIS)
Wust, Peter; Graf, Reinhold; Boehmer, Dirk; Budach, Volker
2010-01-01
Purpose: To evaluate the residual errors and required safety margins after stereoscopic kilovoltage (kV) X-ray target localization of the prostate in image-guided radiotherapy (IGRT) using internal fiducials. Patients and Methods: Radiopaque fiducial markers (FMs) have been inserted into the prostate in a cohort of 33 patients. The ExacTrac/Novalis Body trademark X-ray 6d image acquisition system (BrainLAB AG, Feldkirchen, Germany) was used. Corrections were performed in left-right (LR), anterior-posterior (AP), and superior-inferior (SI) direction. Rotational errors around LR (x-axis), AP (y) and SI (z) have been recorded for the first series of nine patients, and since 2007 for the subsequent 24 patients in addition corrected in each fraction by using the Robotic Tilt Module trademark and Varian Exact Couch trademark. After positioning, a second set of X-ray images was acquired for verification purposes. Residual errors were registered and again corrected. Results: Standard deviations (SD) of residual translational random errors in LR, AP, and SI coordinates were 1.3, 1.7, and 2.2 mm. Residual random rotation errors were found for lateral (around x, tilt), vertical (around y, table), and longitudinal (around z, roll) and of 3.2 , 1.8 , and 1.5 . Planning target volume (PTV)-clinical target volume (CTV) margins were calculated in LR, AP, and SI direction to 2.3, 3.0, and 3.7 mm. After a second repositioning, the margins could be reduced to 1.8, 2.1, and 1.8 mm. Conclusion: On the basis of the residual setup error measurements, the margin required after one to two online X-ray corrections for the patients enrolled in this study would be at minimum 2 mm. The contribution of intrafractional motion to residual random errors has to be evaluated. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Wust, Peter [Dept. of Radiation Oncology, Charite - Univ. Medicine Berlin, Campus Virchow-Klinikum, Berlin (Germany); Graf, Reinhold; Boehmer, Dirk; Budach, Volker
2010-10-15
Purpose: To evaluate the residual errors and required safety margins after stereoscopic kilovoltage (kV) X-ray target localization of the prostate in image-guided radiotherapy (IGRT) using internal fiducials. Patients and Methods: Radiopaque fiducial markers (FMs) have been inserted into the prostate in a cohort of 33 patients. The ExacTrac/Novalis Body trademark X-ray 6d image acquisition system (BrainLAB AG, Feldkirchen, Germany) was used. Corrections were performed in left-right (LR), anterior-posterior (AP), and superior-inferior (SI) direction. Rotational errors around LR (x-axis), AP (y) and SI (z) have been recorded for the first series of nine patients, and since 2007 for the subsequent 24 patients in addition corrected in each fraction by using the Robotic Tilt Module trademark and Varian Exact Couch trademark. After positioning, a second set of X-ray images was acquired for verification purposes. Residual errors were registered and again corrected. Results: Standard deviations (SD) of residual translational random errors in LR, AP, and SI coordinates were 1.3, 1.7, and 2.2 mm. Residual random rotation errors were found for lateral (around x, tilt), vertical (around y, table), and longitudinal (around z, roll) and of 3.2 , 1.8 , and 1.5 . Planning target volume (PTV)-clinical target volume (CTV) margins were calculated in LR, AP, and SI direction to 2.3, 3.0, and 3.7 mm. After a second repositioning, the margins could be reduced to 1.8, 2.1, and 1.8 mm. Conclusion: On the basis of the residual setup error measurements, the margin required after one to two online X-ray corrections for the patients enrolled in this study would be at minimum 2 mm. The contribution of intrafractional motion to residual random errors has to be evaluated. (orig.)
Westbrook, Johanna I; Rob, Marilyn I; Woods, Amanda; Parry, Dave
2011-01-01
Background Intravenous medication administrations have a high incidence of error but there is limited evidence of associated factors or error severity. Objective To measure the frequency, type and severity of intravenous administration errors in hospitals and the associations between errors, procedural failures and nurse experience. Methods Prospective observational study of 107 nurses preparing and administering 568 intravenous medications on six wards across two teaching hospitals. Procedur...
Directory of Open Access Journals (Sweden)
Lya Aklimawati
2013-12-01
Full Text Available High volatility cocoa price movement is consequenced by imbalancing between power demand and power supply in commodity market. World economy expectation and market liberalization would lead to instability on cocoa prices in the international commerce. Dynamic prices moving erratically influence the benefit of market players, particularly producers. The aim of this research is (1 to estimate the empirical cocoa prices model for responding market dynamics and (2 analyze short-term and long-term effect of price determinants variables on cocoa prices. This research was carried out by analyzing annualdata from 1980 to 2011, based on secondary data. Error correction mechanism (ECM approach was used to estimate the econometric model of cocoa price.The estimation results indicated that cocoa price was significantly affected by exchange rate IDR-USD, world gross domestic product, world inflation, worldcocoa production, world cocoa consumption, world cocoa stock and Robusta prices at varied significance level from 1 - 10%. All of these variables have a long run equilibrium relationship. In long run effect, world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E >1, while other variables were inelastic (E <1. Variables that affecting cocoa pricesin short run equilibrium were exchange rate IDR-USD, world gross domestic product, world inflation, world cocoa consumption and world cocoa stock. The analysis results showed that world gross domestic product, world cocoa consumption and world cocoa stock were elastic (E >1 to cocoa prices in short-term. Whereas, the response of cocoa prices was inelastic to change of exchange rate IDR-USD and world inflation.Key words: Price
Sofyan, Hizir; Maulia, Eva; Miftahuddin
2017-11-01
A country has several important parameters to achieve economic prosperity, such as tax revenue and inflation rate. One of the largest revenues of the State Budget in Indonesia comes from the tax sector. Meanwhile, the rate of inflation occurring in a country can be used as an indicator, to measure the good and bad economic problems faced by the country. Given the importance of tax revenue and inflation rate control in achieving economic prosperity, it is necessary to analyze the structure of tax revenue relations and inflation rate. This study aims to produce the best VECM (Vector Error Correction Model) with optimal lag using various alpha and perform structural analysis using the Impulse Response Function (IRF) of the VECM models to examine the relationship of tax revenue, and inflation in Banda Aceh. The results showed that the best model for the data of tax revenue and inflation rate in Banda Aceh City using alpha 0.01 is VECM with optimal lag 2, while the best model for data of tax revenue and inflation rate in Banda Aceh City using alpha 0.05 and 0,1 VECM with optimal lag 3. However, the VECM model with alpha 0.01 yielded four significant models of income tax model, inflation rate of Banda Aceh, inflation rate of health and inflation rate of education in Banda Aceh. While the VECM model with alpha 0.05 and 0.1 yielded one significant model that is income tax model. Based on the VECM models, then there are two structural analysis IRF which is formed to look at the relationship of tax revenue, and inflation in Banda Aceh, the IRF with VECM (2) and IRF with VECM (3).
Directory of Open Access Journals (Sweden)
Stamatović Dragana
2007-01-01
Full Text Available Introduction: Monitoring of peak expiratory flow (PEF is recommended in numerous guidelines for management of asthma. Improvements in calibration methods have demonstrated the inaccuracy of original Wright scale of peak flowmeter. A new standard, EN 13826 that was applied to peak flowmeter was adopted on 1st September 2004 by some European countries. Correction of PEF readings obtained with old type devices for measurement is possible by Dr M. Miller’s original predictive equation. Objective. Assessment of PEF correction effect on the interpretation of measurement results and management decisions. Method. In children with intermittent (35 or stable persistent asthma (75 aged 6-16 years, there were performed 8393 measurements of PEF by Vitalograph normal-range peak flowmeter with traditional Wright scale. Readings were expressed as percentage of individual best values (PB before and after correction. The effect of correction was analyzed based on The British Thoracic Society guidelines for asthma attack treatment. Results. In general, correction reduced the values of PEF (p<0.01. The highest mean percentage error (20.70% in the measured values was found in the subgroup in which PB ranged between 250 and 350 l/min. Nevertheless, the interpretation of PEF after the correction in this subgroup changed in only 2.41% of measurements. The lowest mean percentage error (15.72%, and, at the same time, the highest effect of correction on measurement results interpretation (in 22.65% readings were in children with PB above 450 l/min. In 73 (66.37% subjects, the correction changed the clinical interpretation of some values of PEF after correction. In 13 (11.8% patients, some corrected values indicated the absence or a milder degree of airflow obstruction. In 27 (24.54% children, more than 10%, and in 12 (10.93%, more than 20% of the corrected readings indicated a severe degree of asthma exacerbation that needed more aggressive treatment. Conclusion
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyzes. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Square (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright Â© 2010 Elsevier Inc. All rights reserved.
Vallot, Antoine; Leontiou, Ioanna; Cladière, Damien; El Yakoubi, Warif; Bolte, Susanne; Buffin, Eulalie; Wassmann, Katja
2018-01-08
Cell division with partitioning of the genetic material should take place only when paired chromosomes named bivalents (meiosis I) or sister chromatids (mitosis and meiosis II) are correctly attached to the bipolar spindle in a tension-generating manner. For this to happen, the spindle assembly checkpoint (SAC) checks whether unattached kinetochores are present, in which case anaphase onset is delayed to permit further establishment of attachments. Additionally, microtubules are stabilized when they are attached and under tension. In mitosis, attachments not under tension activate the so-named error correction pathway depending on Aurora B kinase substrate phosphorylation. This leads to microtubule detachments, which in turn activates the SAC [1-3]. Meiotic divisions in mammalian oocytes are highly error prone, with severe consequences for fertility and health of the offspring [4, 5]. Correct attachment of chromosomes in meiosis I leads to the generation of stretched bivalents, but-unlike mitosis-not to tension between sister kinetochores, which co-orient. Here, we set out to address whether reduction of tension applied by the spindle on bioriented bivalents activates error correction and, as a consequence, the SAC. Treatment of oocytes in late prometaphase I with Eg5 kinesin inhibitor affects spindle tension, but not attachments, as we show here using an optimized protocol for confocal imaging. After Eg5 inhibition, bivalents are correctly aligned but less stretched, and as a result, Aurora-B/C-dependent error correction with microtubule detachment takes place. This loss of attachments leads to SAC activation. Crucially, SAC activation itself does not require Aurora B/C kinase activity in oocytes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Allan C. Just
2018-05-01
Full Text Available Satellite-derived estimates of aerosol optical depth (AOD are key predictors in particulate air pollution models. The multi-step retrieval algorithms that estimate AOD also produce quality control variables but these have not been systematically used to address the measurement error in AOD. We compare three machine-learning methods: random forests, gradient boosting, and extreme gradient boosting (XGBoost to characterize and correct measurement error in the Multi-Angle Implementation of Atmospheric Correction (MAIAC 1 × 1 km AOD product for Aqua and Terra satellites across the Northeastern/Mid-Atlantic USA versus collocated measures from 79 ground-based AERONET stations over 14 years. Models included 52 quality control, land use, meteorology, and spatially-derived features. Variable importance measures suggest relative azimuth, AOD uncertainty, and the AOD difference in 30–210 km moving windows are among the most important features for predicting measurement error. XGBoost outperformed the other machine-learning approaches, decreasing the root mean squared error in withheld testing data by 43% and 44% for Aqua and Terra. After correction using XGBoost, the correlation of collocated AOD and daily PM2.5 monitors across the region increased by 10 and 9 percentage points for Aqua and Terra. We demonstrate how machine learning with quality control and spatial features substantially improves satellite-derived AOD products for air pollution modeling.
Suharsono, Agus; Aziza, Auliya; Pramesti, Wara
2017-12-01
Capital markets can be an indicator of the development of a country's economy. The presence of capital markets also encourages investors to trade; therefore investors need information and knowledge of which shares are better. One way of making decisions for short-term investments is the need for modeling to forecast stock prices in the period to come. Issue of stock market-stock integration ASEAN is very important. The problem is that ASEAN does not have much time to implement one market in the economy, so it would be very interesting if there is evidence whether the capital market in the ASEAN region, especially the countries of Indonesia, Malaysia, Philippines, Singapore and Thailand deserve to be integrated or still segmented. Furthermore, it should also be known and proven What kind of integration is happening: what A capital market affects only the market Other capital, or a capital market only Influenced by other capital markets, or a Capital market as well as affecting as well Influenced by other capital markets in one ASEAN region. In this study, it will compare forecasting of Indonesian share price (IHSG) with neighboring countries (ASEAN) including developed and developing countries such as Malaysia (KLSE), Singapore (SGE), Thailand (SETI), Philippines (PSE) to find out which stock country the most superior and influential. These countries are the founders of ASEAN and share price index owners who have close relations with Indonesia in terms of trade, especially exports and imports. Stock price modeling in this research is using multivariate time series analysis that is VAR (Vector Autoregressive) and VECM (Vector Error Correction Modeling). VAR and VECM models not only predict more than one variable but also can see the interrelations between variables with each other. If the assumption of white noise is not met in the VAR modeling, then the cause can be assumed that there is an outlier. With this modeling will be able to know the pattern of relationship
International Nuclear Information System (INIS)
Takahashi, Yasuyuki; Murase, Kenya; Mochizuki, Teruhito; Motomura, Nobutoku
2002-01-01
Attenuation correction with an X-ray CT image is a new method to correct attenuation on SPECT imaging, but the effect of the registration errors between CT and SPECT images is unclear. In this study, we investigated the effects of the registration errors on myocardial SPECT, analyzing data from a phantom and a human volunteer. Registerion (fusion) of the X-ray CT and SPECT images was done with standard packaged software in three dimensional fashion, by using linked transaxial, coronal and sagittal images. In the phantom study, and X-ray CT image was shifted 1 to 3 pixels on the x, y and z axes, and rotated 6 degrees clockwise. Attenuation correction maps generated from each misaligned X-ray CT image were used to reconstruct misaligned SPECT images of the phantom filled with 201 Tl. In a human volunteer, X-ray CT was acquired in different conditions (during inspiration vs. expiration). CT values were transferred to an attenuation constant by using straight lines; an attenuation constant of 0/cm in the air (CT value=-1,000 HU) and that of 0.150/cm in water (CT value=0 HU). For comparison, attenuation correction with transmission CT (TCT) data and an external γ-ray source ( 99m Tc) was also applied to reconstruct SPECT images. Simulated breast attenuation with a breast attachment, and inferior wall attenuation were properly corrected by means of the attenuation correction map generated from X-ray CT. As pixel shift increased, deviation of the SPECT images increased in misaligned images in the phantom study. In the human study, SPECT images were affected by the scan conditions of the X-ray CT. Attenuation correction of myocardial SPECT with an X-ray CT image is a simple and potentially beneficial method for clinical use, but accurate registration of the X-ray CT to SPECT image is essential for satisfactory attenuation correction. (author)
Energy Technology Data Exchange (ETDEWEB)
Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)
2012-10-15
Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction
International Nuclear Information System (INIS)
Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.
2012-01-01
Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction
CROATIAN ADULT SPOKEN LANGUAGE CORPUS (HrAL
Directory of Open Access Journals (Sweden)
Jelena Kuvač Kraljević
2016-01-01
Full Text Available Interest in spoken-language corpora has increased over the past two decades leading to the development of new corpora and the discovery of new facets of spoken language. These types of corpora represent the most comprehensive data source about the language of ordinary speakers. Such corpora are based on spontaneous, unscripted speech defined by a variety of styles, registers and dialects. The aim of this paper is to present the Croatian Adult Spoken Language Corpus (HrAL, its structure and its possible applications in different linguistic subfields. HrAL was built by sampling spontaneous conversations among 617 speakers from all Croatian counties, and it comprises more than 250,000 tokens and more than 100,000 types. Data were collected during three time slots: from 2010 to 2012, from 2014 to 2015 and during 2016. HrAL is today available within TalkBank, a large database of spoken-language corpora covering different languages (https://talkbank.org, in the Conversational Analyses corpora within the subsection titled Conversational Banks. Data were transcribed, coded and segmented using the transcription format Codes for Human Analysis of Transcripts (CHAT and the Computerised Language Analysis (CLAN suite of programmes within the TalkBank toolkit. Speech streams were segmented into communication units (C-units based on syntactic criteria. Most transcripts were linked to their source audios. The TalkBank is public free, i.e. all data stored in it can be shared by the wider community in accordance with the basic rules of the TalkBank. HrAL provides information about spoken grammar and lexicon, discourse skills, error production and productivity in general. It may be useful for sociolinguistic research and studies of synchronic language changes in Croatian.
Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary
2015-06-30
Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual and single infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. All together, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase
Directory of Open Access Journals (Sweden)
P.A.V.B. Swamy
2017-02-01
Full Text Available Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if; instead; one uses certain “sufficient sets” of (relevant regressors omitted from each model to represent the error term. In this case; the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed; accurate estimation of bias-free components is possible.
International Nuclear Information System (INIS)
Krini, Ossmane; Börcsök, Josef
2012-01-01
In order to use electronic systems comprising of software and hardware components in safety related and high safety related applications, it is necessary to meet the Marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that makes a prognosis on the residual critical error number in software. Conventional models lack this ability and right now, there are no methods that forecast critical errors. The new method will show that an estimate of the residual error number of critical errors in software systems is possible by using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the critical expected value-function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. Also, the presented method makes it possible to make an estimate on the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes - detection and correction process.
Introducing Spoken Dialogue Systems into Intelligent Environments
Heinroth, Tobias
2013-01-01
Introducing Spoken Dialogue Systems into Intelligent Environments outlines the formalisms of a novel knowledge-driven framework for spoken dialogue management and presents the implementation of a model-based Adaptive Spoken Dialogue Manager(ASDM) called OwlSpeak. The authors have identified three stakeholders that potentially influence the behavior of the ASDM: the user, the SDS, and a complex Intelligent Environment (IE) consisting of various devices, services, and task descriptions. The theoretical foundation of a working ontology-based spoken dialogue description framework, the prototype implementation of the ASDM, and the evaluation activities that are presented as part of this book contribute to the ongoing spoken dialogue research by establishing the fertile ground of model-based adaptive spoken dialogue management. This monograph is ideal for advanced undergraduate students, PhD students, and postdocs as well as academic and industrial researchers and developers in speech and multimodal interactive ...
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons; dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.
2014-01-01
Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomemon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short time B0 eddy currents in manufacturer provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
International Nuclear Information System (INIS)
Heid, Matthias; Luetkenhaus, Norbert
2006-01-01
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered
Dion, Nathalie; Cotart, Jean-Louis; Rabilloud, Muriel
2007-04-01
We quantified the link between tooth deterioration and malnutrition in institutionalized elderly subjects, taking into account the major risk factors for malnutrition and adjusting for the measurement error made in using the Mini Nutritional Assessment questionnaire. Data stem from a survey conducted in 2005 in 1094 subjects >or=60 y of age from a large sample of 100 institutions of the Rhône-Alpes region of France. A Bayesian approach was used to quantify the effect of tooth deterioration on malnutrition through a two-level logistic regression. This approach allowed taking into account the uncertainty on sensitivity and specificity of the Mini Nutritional Assessment questionnaire to adjust for the measurement error of that test. After adjustment for other risk factors, the risk of malnutrition increased significantly and continuously 1.15 times (odds ratio 1.15, 95% credibility interval 1.06-1.25) whenever the masticatory percentage decreased by 10 points, which is equivalent to the loss of two molars. The strongest factors that augmented the probability of malnutrition were deglutition disorders, depression, and verbal inconsistency. Dependency was also an important factor; the odds of malnutrition nearly doubled for each additional grade of dependency (graded 6 to 1). Diabetes, central neurodegenerative disease, and carcinoma tended to increase the probability of malnutrition but their effect was not statistically significant. Dental status should be considered a serious risk factor for malnutrition. Regular dental examination and care should preserve functional dental integrity to prevent malnutrition in institutionalized elderly people.
Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce
2015-01-01
Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and time with a substantially reduced cost per nucleotide, hence cost per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publically available genomes can be readily downloaded; however, there are challenges to verify the specific supporting data contained within the download and to identify errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genomes reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation for large-scale genome data prior to downstream analyses.
DEFF Research Database (Denmark)
Turcot, Valérie; Lu, Yingchang; Highland, Heather M
2018-01-01
In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article.......In the published version of this paper, the name of author Emanuele Di Angelantonio was misspelled. This error has now been corrected in the HTML and PDF versions of the article....
DEFF Research Database (Denmark)
Grundle, D S; Löscher, C R; Krahmann, G
2018-01-01
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper.......A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has not been fixed in the paper....
Enhancing spoken connected-digit recognition accuracy by error ...
Indian Academy of Sciences (India)
R. Narasimhan (Krishtel eMaging) 1461 1996 Oct 15 13:05:22
nition systems have gained acceptable accuracy levels, the accuracy of recognition of current connected ... bar code and ISBN1 library code to name a few. ..... Kopec G, Bush M 1985 Network-based connected-digit recognition. IEEE Trans.
Baltussen, Rob; Naus, Jeroen; Limburg, Hans
2009-02-01
To estimate the costs and effects of alternative strategies for annual screening of school children for refractive errors, and the provision of spectacles, in different WHO sub-regions in Africa, Asia, America and Europe. We developed a mathematical simulation model for uncorrected refractive error, using prevailing prevalence and incidence rates. Remission rates reflected the absence or presence of screening strategies for school children. All screening strategies were implemented for a period of 10 years and were compared to a situation were no screening was implemented. Outcome measures were life years adjusted for disability (DALYs), costs of screening and provision of spectacles and follow-up for six different screening strategies, and cost-effectiveness in international dollars per DALY averted. Epidemiological information was derived from the burden of disease study from the World Health Organization (WHO). Cost data were derived from large databases from the WHO. Both univariate and multivariate sensitivity analyses were performed on key parameters to determine the robustness of the model results. In all regions, screening of 5-15 years old children yields most health effects, followed by screening of 11-15 years old, 5-10 years old, and screening of 8 and 13 years old. Screening of broad-age intervals is always more costly than screening of single-age intervals, and there are important economies of scale for simultaneous screening of both 5-10 and 11-15-year-old children. In all regions, screening of 11-15 years old is the most cost-effective intervention, with the cost per DALY averted ranging from I$67 per DALY averted in the Asian sub-region to I$458 per DALY averted in the European sub-region. The incremental cost per DALY averted of screening 5-15 years old ranges between I$111 in the Asian sub-region to I$672 in the European sub-region. Considering the conservative study assumptions and the robustness of study conclusions towards changes in these
Swornowski, Pawel J
2013-01-01
The article presents the application of neural networks in determining and correction of the deformation of a coordinate measuring machine (CMM) workspace. The information about the CMM errors is acquired using an ADXRS401 electronic gyroscope. A test device (PS-20 module) was built and integrated with a commercial measurement system based on the SP25M passive scanning probe and with a PH10M module (Renishaw). The proposed solution was tested on a Kemco 600 CMM and on a DEA Global Clima CMM. In the former case, correction of the CMM errors was performed using the source code of WinIOS software owned by The Institute of Advanced Manufacturing Technology, Cracow, Poland and in the latter on an external PC. Optimum parameters of full and simplified mapping of a given layer of the CMM workspace were determined for practical applications. The proposed method can be employed for the interim check (ISO 10360-2 procedure) or to detect local CMM deformations, occurring when the CMM works at high scanning speeds (>20 mm/s). © Wiley Periodicals, Inc.
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
Prediction mechanism is necessary for human visual motion to compensate a delay of sensory-motor system. In a previous study, “proactive control” was discussed as one example of predictive function of human beings, in which motion of hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment where a circular orbit is segmented into the target-visible regions and the target-invisible regions. Main results found in this research were following. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened more than 10%. The shortening of the period of rhythm in the brain accelerates the hand motion as soon as the visual information is cut-off, and causes the precedence of hand motion to the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target in average when the predictive mechanism dominates the error-corrective mechanism.
Nagata, Takeshi; Iwata, Suehiro
2004-02-22
The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of the local excited orbitals are defined; one is fully local in the basis set on a fragment, and the other has to be partially delocalized to the basis sets on the other fragments. The perturbation expansion calculations only within single excitations (LP SE MP2) are tested for water dimer, hydrogen fluoride dimer, and colinear symmetric ArM+ Ar (M = Na and K). The calculated binding energies of LP SE MP2 are all close to the corresponding counterpoise corrected SCF binding energy. By adding the single excitations, the deficiency in LP SCF MI is thus removed. The results suggest that the exclusion of the charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation for the binding energy. (c) 2004 American Institute of Physics.
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L sub r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L sub r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity
Wong, Miranda Kit-Yi; So, Wing Chee
2016-01-01
This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…
Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi
2017-12-01
Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
Ibrahim, Musadiq; Lapthorn, Adrian Jonathan; Ibrahim, Mohammad
2017-08-01
The Protein Data Bank (PDB) is the single most important repository of structural data for proteins and other biologically relevant molecules. Therefore, it is critically important to keep the PDB data, error-free as much as possible. In this study, we have critically examined PDB structures of 292 protein molecules which have been deposited in the repository along with potentially incorrect ligands labelled as Unknown ligands (UNK). Pharmacophores were generated for all the protein structures by using Discovery Studio Visualizer (DSV) and Accelrys, Catalyst ® . The generated pharmacophores were subjected to the database search containing the reported ligand. Ligands obtained through Pharmacophore searching were then checked for fitting the observed electron density map by using Coot ® . The predicted ligands obtained via Pharmacophore searching fitted well with the observed electron density map, in comparison to the ligands reported in the PDB's. Based on our study we have learned that till may 2016, among 292 submitted structures in the PDB, at least 20 structures have ligands with a clear electron density but have been incorrectly labelled as unknown ligands (UNK). We have demonstrated that Pharmacophore searching and Coot ® can provide potential help to find suitable known ligands for these protein structures, the former for ligand search and the latter for electron density analysis. The use of these two techniques can facilitate the quick and reliable labelling of ligands where the electron density map serves as a reference. Copyright © 2017 Elsevier Inc. All rights reserved.
Langner, Andy Sven; Rossbach, Jörg; Tomás, Rogelio
2017-02-17
The Large Hadron Collider (LHC) is currently the world's largest particle accelerator with the highest center of mass energy in particle collision experiments. The control of the particle beam focusing is essential for the performance reach of such an accelerator. For the characterization of the focusing properties at the LHC, turn-by-turn beam position data is simultaneously recorded at numerous measurement devices (BPMs) along the accelerator, while an oscillation is excited on the beam. A novel analysis method for these measurements ($N$-BPM method) is developed here, which is based on a detailed analysis of systematic and statistical error sources and their correlations. It has been applied during the commissioning of the LHC for operation at an unprecedented energy of 6.5 TeV. In this process a stronger focusing than its design specifications has been achieved. This results in smaller transverse beam sizes at the collision points and allows for a higher rate of particle collisions. For the derivation of ...
Basic speech recognition for spoken dialogues
CSIR Research Space (South Africa)
Van Heerden, C
2009-09-01
Full Text Available Spoken dialogue systems (SDSs) have great potential for information access in the developing world. However, the realisation of that potential requires the solution of several challenging problems, including the development of sufficiently accurate...
Directory of Open Access Journals (Sweden)
Alahmady Hamad Alsmman Hassan
2016-01-01
Full Text Available In this study we evaluate the visual outcomes, safety, efficacy, and stability of implanting of second sulcus intraocular lens (IOL to correct unsatisfied ametropic patients after phacoemulsification. Methods. Retrospective study of 15 eyes (15 patients underwent secondary intraocular lens implanted into the ciliary sulcus. The IOL used was a Sensar IOL three-piece foldable hydrophobic acrylic IOL. The first IOL in all patients was acrylic intrabagal IOL implanted in uncomplicated phacoemulsification surgery. Results. Fifteen eyes (15 patients were involved in this study. Preoperatively, mean logMAR UDVA and CDVA were 0.88 ± 0.22 and 0.19 ± 0.13, respectively, with a mean follow-up of 28 months (range: 24 to 36 months. At the end of the follow-up, all eyes achieved logMAR UDVA of 0.20 ± 0.12 with postoperative refraction ranging from 0.00 to −0.50 D of attempted emmetropia. Conclusions. Implantation of the second sulcus SensarAR40 IOL was found to be safe, easy, and simple technique for management of ametropia following uncomplicated phacoemulsification.
International Nuclear Information System (INIS)
Hameeteman, K; Niessen, W J; Klein, S; Van 't Klooster, R; Selwaness, M; Van der Lugt, A; Witteman, J C M
2013-01-01
We present a method for carotid vessel wall volume quantification from magnetic resonance imaging (MRI). The method combines lumen and outer wall segmentation based on deformable model fitting with a learning-based segmentation correction step. After selecting two initialization points, the vessel wall volume in a region around the bifurcation is automatically determined. The method was trained on eight datasets (16 carotids) from a population-based study in the elderly for which one observer manually annotated both the lumen and outer wall. An evaluation was carried out on a separate set of 19 datasets (38 carotids) from the same study for which two observers made annotations. Wall volume and normalized wall index measurements resulting from the manual annotations were compared to the automatic measurements. Our experiments show that the automatic method performs comparably to the manual measurements. All image data and annotations used in this study together with the measurements are made available through the website http://ergocar.bigr.nl. (paper)
Energy Technology Data Exchange (ETDEWEB)
Langner, Andy Sven
2017-02-03
The Large Hadron Collider (LHC) is currently the world's largest particle accelerator with the highest center of mass energy in particle collision experiments. The control of the particle beam focusing is essential for the performance reach of such an accelerator. For the characterization of the focusing properties at the LHC, turn-by-turn beam position data is simultaneously recorded at numerous measurement devices (BPMs) along the accelerator, while an oscillation is excited on the beam. A novel analysis method for these measurements (N-BPM method) is developed here, which is based on a detailed analysis of systematic and statistical error sources and their correlations. It has been applied during the commissioning of the LHC for operation at an unprecedented energy of 6.5TeV. In this process a stronger focusing than its design specifications has been achieved. This results in smaller transverse beam sizes at the collision points and allows for a higher rate of particle collisions. For the derivation of the focusing parameters at many synchrotron light sources, the change of the beam orbit is observed, which is induced by deliberate changes of magnetic fields (orbit response matrix). In contrast, the analysis of turn-by-turn beam position measurements is for many of these machines less precise due to the distance between two BPMs. The N-BPM method overcomes this limitation by allowing to include the measurement data from more BPMs in the analysis. It has been applied at the ALBA synchrotron light source and compared to the orbit response method. The significantly faster measurement with the N-BPM method is a considerable advantage in this case. Finally, an outlook is given to the challenges which lie ahead for the control of the beam focusing at the HL-LHC, which is a future major upgrade of the LHC.
Agogo, George O; van der Voet, Hilko; van't Veer, Pieter; Ferrari, Pietro; Leenders, Max; Muller, David C; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual to a single-replicate setting. We showed how to handle excess zero reference measurements by two-step modeling approach, how to explore heteroscedasticity in the consumed amount with variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates in the calibration model. The performance of two-part calibration model was compared with the one-part counterpart. We used vegetable intake and mortality data from European Prospective Investigation on Cancer and Nutrition (EPIC) study. In the EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about three fold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. Further found is that the standard way of including covariates in the calibration model can lead to over fitting the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to response distribution, nonlinearity, and covariate inclusion in specifying the calibration model.
Kruse, Holger; Grimme, Stefan
2012-04-21
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This geometrical counterpoise (gCP) denoted scheme depends only on the molecular geometry, i.e., no input from the electronic wave-function is required and hence is applicable to molecules with ten thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernadi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's target are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
Kruse, Holger; Grimme, Stefan
2012-04-01
A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This geometrical counterpoise (gCP) denoted scheme depends only on the molecular geometry, i.e., no input from the electronic wave-function is required and hence is applicable to molecules with ten thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernadi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's target are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which also a relaxed rotational energy profile is presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. The gCP-corrected B3LYP-D3/6-31G* model
Syllable Frequency and Spoken Word Recognition: An Inhibitory Effect.
González-Alvarez, Julio; Palomar-García, María-Angeles
2016-08-01
Research has shown that syllables play a relevant role in lexical access in Spanish, a shallow language with a transparent syllabic structure. Syllable frequency has been shown to have an inhibitory effect on visual word recognition in Spanish. However, no study has examined the syllable frequency effect on spoken word recognition. The present study tested the effect of the frequency of the first syllable on recognition of spoken Spanish words. A sample of 45 young adults (33 women, 12 men; M = 20.4, SD = 2.8; college students) performed an auditory lexical decision on 128 Spanish disyllabic words and 128 disyllabic nonwords. Words were selected so that lexical and first syllable frequency were manipulated in a within-subject 2 × 2 design, and six additional independent variables were controlled: token positional frequency of the second syllable, number of phonemes, position of lexical stress, number of phonological neighbors, number of phonological neighbors that have higher frequencies than the word, and acoustical durations measured in milliseconds. Decision latencies and error rates were submitted to linear mixed models analysis. Results showed a typical facilitatory effect of the lexical frequency and, importantly, an inhibitory effect of the first syllable frequency on reaction times and error rates. © The Author(s) 2016.
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
International Nuclear Information System (INIS)
Williams, W.G.
1975-01-01
The use of the polarization analysis technique to separate spin-flip from non-spin-flip thermal neutron scattering is especially important in determining magnetic scattering cross-sections. In order to identify a spin-flip ratio in the scattering with a particular scattering process, it is necessary to correct the experimentally observed 'flipping-ratio' to allow for the efficiencies of the vital instrument components (polarizers and spin-flippers), as well as multiple scattering effects in the sample. Analytical expressions for these corections are presented and their magnitudes in typical cases estimated. The errors in measurement depend strongly on the uncertainties in the calibration of the efficiencies of the polarizers and the spin-flipper. The final section is devoted to a discussion of polarization analysis instruments