System and method for forward error correction
Cole, Robert M. (Inventor); Bishop, James E. (Inventor)
2006-01-01
A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.
Methods for Correction of Refractive Errors.
1984-12-31
P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Part I, Chap. 7, McGraw-Hill Book Co., 1953. [Morse68] P. M. Morse and K. U. Ingard, Theoretical Acoustics, Chap. 8, McGraw-Hill Book Co.
Correction of placement error in EBL using model based method
Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro
2016-10-01
The main source of placement error in mask making using electron beam lithography is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, the EBL system setup, the resist, and the writing order, as well as other factors such as fogging and proximity effect correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate the physical parameters used in the correction model, and the extracted model parameters were used to verify the correction. As an ultimate test of the correction, a sophisticated layout very different from the calibration mask was used for verification. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation between the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.
Minimum mean square error method for stripe nonuniformity correction
Weixian Qian; Qian Chen; Guohua Gu
2011-01-01
Stripe nonuniformity is very typical in linear infrared focal plane arrays (IRFPA) and uncooled staring IRFPA. We develop the minimum mean square error (MMSE) method for stripe nonuniformity correction (NUC). The goal of the MMSE method is to determine the optimal NUC parameters that make the corrected image the closest to the ideal image. Moreover, the method can be achieved in one frame, making it more competitive than other scene-based NUC algorithms. We also demonstrate the calibration results of our algorithm using real and virtual infrared image sequences. The experiments verify the positive effect of our algorithm.
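As an illustrative sketch of the stripe-NUC idea only (not the authors' exact estimator): treat each column's gain and offset as unknowns and pick them by least squares so the corrected column best matches a horizontally smoothed reference, which stands in for the "ideal image". The function name, the box-filter reference, and the window size are our own assumptions.

```python
import numpy as np

def mmse_stripe_correction(img, ksize=7):
    """Per-column gain/offset stripe correction (illustrative MMSE sketch).

    For each column, solve least-squares gain g and offset o so that
    g * column + o is as close as possible to a stripe-free reference
    obtained by horizontal mean filtering (a stand-in for the ideal image).
    """
    img = img.astype(float)
    # Horizontal box filter as a crude estimate of the stripe-free image.
    pad = ksize // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    ref = np.stack([padded[:, j:j + img.shape[1]] for j in range(ksize)]).mean(axis=0)

    corrected = np.empty_like(img)
    for j in range(img.shape[1]):
        x, y = img[:, j], ref[:, j]
        A = np.stack([x, np.ones_like(x)], axis=1)
        (g, o), *_ = np.linalg.lstsq(A, y, rcond=None)
        corrected[:, j] = g * x + o
    return corrected
```

In this toy form the correction is purely scene-based and single-frame, which is the property the abstract emphasizes.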
Error correction method of 6-HTRT parallel mechanics
ZHANG Xiu-feng; JI Lin-hong
2007-01-01
Error correction is one of the key techniques for parallel robots. A new method for end-effector error correction of the 6-HTRT parallel robot is presented, for engineering use and for research on the related theory of the 6-HTRT parallel robot. The method requires calculating many kinematic quantities of the parallel robot, such as the inverse position solution, the velocity Jacobian, the forward position solution, and the error Jacobian. New methods are presented for solving these problems that are simpler and better suited to programming and computation, because the former methods are too complex for engineering use. These problems can be solved by iterative numerical methods, which converge quickly. The new methods can also be applied to other parallel-robot mechanisms, and so have wider applicability. Experimental results demonstrate that the system fully satisfies demanding technical requirements and is suitable for engineering use.
Equation-Method for correcting clipping errors in OFDM signals.
Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry
2016-01-01
Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show how numerical instability can be avoided, and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
ERROR CORRECTION METHOD FOR SEQUENCING DATA WITH INSERTIONS AND DELETIONS
A. V. Alexandrov
2016-01-01
Subject of Research. A method for error correction of sequencing reads of a haploid organism with insertions and deletions was developed. It was tested on two libraries: a synthesized dataset for the Escherichia coli bacterium and a real dataset of reads for Pseudomonas stutzeri. Method. The method is based on using k-mers, but only for finding reads that are close to each other. For the close reads a consensus string is created, which is then used for correcting errors in the initial reads. Main Results. The algorithm is implemented as a separate program. The program has been tested on both real and synthesized data. The method's performance is higher than that of the other known methods (N50 was used as a metric for comparison, as well as total contig length and maximal contig length). Practical Relevance. The method can be used together with known genome assembly methods that are not suitable for application to reads containing insertion and deletion errors.
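A toy sketch of the consensus step, assuming the close reads are already grouped, aligned, and of equal length. The actual method also handles insertions and deletions via the k-mer grouping, which this per-position majority vote deliberately does not attempt:

```python
from collections import Counter

def consensus_correct(reads):
    """Correct a group of close, same-length, pre-aligned reads by
    replacing each read with the per-position majority-vote consensus.
    A toy stand-in for the paper's consensus step; no indel handling.
    """
    length = len(reads[0])
    consensus = "".join(
        Counter(read[i] for read in reads).most_common(1)[0][0]
        for i in range(length)
    )
    return [consensus for _ in reads]
```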
The contour method cutting assumption: error minimization and correction
Prime, Michael B. [Los Alamos National Laboratory]; Kastengren, Alan L. [ANL]
2010-01-01
The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.
Error correction method and apparatus for electronic timepieces
Davidson, J. R.; Heyman, J. S. (Inventor)
1983-01-01
A method and apparatus for correcting errors in an electronic digital timepiece that includes an oscillator which has a 2^n Hz frequency output, an n-stage frequency divider for reducing the oscillator output frequency to a timekeeping frequency, and means for displaying the count of the timekeeping frequency. In the first and second embodiments of the invention the timepiece is synchronized with a time standard at the beginning of a period of time T. In the first embodiment of the invention the timepiece user observes E (the difference between the time standard and the timepiece time at the end of the period T) and then operates a switch to correct the time of the timepiece and to obtain a count for E. In the second embodiment of the invention, the user operates a switch at the beginning of T and at the end of T, and a count for E is obtained electronically.
A Systematic Error Correction Method for TOVS Radiances
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
Error-finding and error-correcting methods for the start-up of the SLC
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.
1987-02-01
During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.
Correcting Non-Word Errors Using a Combined Method
Anonymous
2005-01-01
The weighted edit distance and the Metaphone algorithm are combined to correct non-word errors. The speed is also optimized, based on the observation that people rarely make mistakes in the initial letter of a word. A spelling checker is designed for an automatic detection and correction system for student essays. To evaluate the algorithm, it is compared to some well-known systems (MS Word 2000, Aspell, WinEdt). The results show that our approach is superior to the alternative approaches.
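The edit-distance half of the combined method, plus the initial-letter pruning, can be sketched as follows. The Metaphone component and the paper's actual edit weights are omitted; the default costs, function names, and threshold below are our own assumptions:

```python
def edit_distance(a, b, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Weighted Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * del_cost
    for j in range(1, n + 1):
        d[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost,                                  # delete a[i-1]
                d[i][j - 1] + ins_cost,                                  # insert b[j-1]
                d[i - 1][j - 1] + (0.0 if a[i - 1] == b[j - 1] else sub_cost),
            )
    return d[m][n]

def suggest(word, lexicon, max_dist=2.0):
    """Rank candidate corrections, pruning by first letter to mirror the
    observation that initial-letter errors are rare."""
    candidates = [w for w in lexicon if w and w[0] == word[0]]
    scored = [(edit_distance(word, w), w) for w in candidates]
    return [w for dist, w in sorted(scored) if dist <= max_dist]
```

The first-letter filter cuts the candidate set roughly by the alphabet size before any distance is computed, which is where the speed-up comes from.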
Zbigniew Staroszczyk
2014-12-01
Abstract. In this paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits frequency-domain descriptors of the conditioning paths, found during training observations made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors
A temperature error correction method for a naturally ventilated radiation shield
Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Renhui
2016-11-01
Due to solar radiation exposure, air flowing inside a naturally ventilated radiation shield may produce a measurement error of 0.8 °C or higher. To improve the air temperature observation accuracy, a temperature error correction method is proposed. The correction method is based on a Computational Fluid Dynamics (CFD) method and a Genetic Algorithm (GA) method. The CFD method is implemented to analyze and calculate the temperature errors of a naturally ventilated radiation shield under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as the air temperature reference. The mean temperature error given by the measurements is 0.36 °C, and the mean temperature error given by the correction equation is 0.34 °C. This correction equation allows the temperature error to be reduced by approximately 95%. The mean absolute error (MAE) and the root mean square error (RMSE) between the temperature errors given by the correction equation and the temperature errors given by the measurements are 0.07 °C and 0.08 °C, respectively.
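To illustrate how a correction equation might be fitted to error samples, here is a minimal sketch with a made-up linear-in-parameters model, so ordinary least squares can stand in for the paper's GA fit of CFD results. The functional form, coefficients, and data values below are purely hypothetical:

```python
import numpy as np

# Hypothetical calibration samples: solar radiation S (W/m^2), wind speed v
# (m/s), and the shield temperature error dT (degC) they produced.
# In the paper these samples come from CFD runs; the values here are made up.
S  = np.array([200., 400., 600., 800., 1000., 600., 600.])
v  = np.array([1.0,  1.0,  1.0,  1.0,  1.0,   3.0,  5.0])
dT = np.array([0.15, 0.30, 0.46, 0.61, 0.75,  0.23, 0.15])

# Postulated correction equation dT ~ a*S + b*S/(1+v): error grows with solar
# load and shrinks with wind. Linear in (a, b), so lstsq replaces the GA.
A = np.column_stack([S, S / (1.0 + v)])
(a, b), *_ = np.linalg.lstsq(A, dT, rcond=None)

def corrected_temperature(T_measured, S_now, v_now):
    """Subtract the fitted error estimate from the raw shield reading."""
    return T_measured - (a * S_now + b * S_now / (1.0 + v_now))
```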
[A phase error correction method for the new Fourier transforms spectrometer].
Wang, Ning; Gong, Tian-Cheng; Chen, Jian-Jun; Li, Yang; Yang, Yi-Ning; Zhu, Yong; Zhang, Jie; Chen, Wei-Min
2014-11-01
To decrease the distortion of the recovered spectrum, improve its quality, and reduce the influence of phase error in the new spectrum detection system based on MEMS (micro-electro-mechanical systems) micro-mirrors, a new phase error correction method for this system is proposed in the present paper. The source of phase error in the MEMS micro-mirror spectrum detection system is analyzed first. The analysis indicates that the phase error of the new Fourier transform spectral detection system is the zero drift of the optical path difference, and that this phase error can be corrected by zero-crossing sampling, realized by improving the structure of the interferometer system, together with the Mertz product. The spectrum detection system was set up and the phase error correction method was verified on it. The experimental results show that the quality of the recovered spectrum is improved markedly by using the improved interferometer system and the Mertz product: the recovered spectrum has no negative peaks and the side lobes are suppressed markedly. This correction method effectively reduces the influence of phase error on system performance and improves the spectral detection performance.
Nested Quantum Error Correction Codes
Wang, Zhuo; Fan, Heng; Vedral, Vlatko
2009-01-01
The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of any length and any distance, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old codes in quantum error-correction theory, concatenation and pasting, can be understood in the framework of nested quantum error correction codes.
Dr. Grace Zhang
2000-01-01
Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable), and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Probabilistic quantum error correction
Fern, Jesse; Terilla, John
2002-01-01
There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the best method for quadrature correction, and it improves the Angle Random Walk (ARW) value, reducing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
A measuring and correcting method about locus errors in robot welding
Anonymous
2001-01-01
When tubules arranged regularly are welded onto a bobbin by a robot, the position and orientation of some tubules may be changed by factors such as thermal deformation and positioning errors, which makes it very difficult to weld automatically and continuously by the teach-and-playback method. In this paper, an error measuring system is presented by which the position and orientation errors of the tubules relative to the taught one can be measured. A method to correct the locus errors is also proposed, by which the moving locus planned via teaching points can be corrected in real time according to the measured error parameters. Thus, by teaching only one tubule, all tubules on a bobbin can be welded automatically.
A variational method for correcting non-systematic errors in numerical weather prediction
SHAO AiMei; XI Shuang; QIU ChongJian
2009-01-01
A variational method based on previous numerical forecasts is developed to estimate and correct the non-systematic component of numerical weather forecast error. In the method, it is assumed that the error is linearly dependent on some combination of the forecast fields, and three types of forecast combination are applied to identify the forecasting error: 1) the forecasts at the ending time, 2) the combination of the initial fields and the forecasts at the ending time, and 3) the combination of the forecasts at the ending time and the tendency of the forecast. The Singular Value Decomposition (SVD) of the covariance matrix between the forecast and the forecasting error is used to obtain the inverse mapping from flow space to error space during the training period. The background covariance matrix is thereby reduced to a simple diagonal matrix. The method is tested with a shallow-water equation model by introducing two different model errors. The results of error correction for 6, 24 and 48 h forecasts show that the method is effective for improving the quality of the forecast when the forecasting error clearly exceeds the analysis error, and it is optimal when the third type of forecast combination is applied.
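A minimal sketch of the underlying idea: learn a reduced-rank linear map from forecast fields to forecast errors over a training period via an SVD-truncated least-squares fit, then subtract the estimated error from a new forecast. This is a simplified stand-in of our own, not the authors' exact variational formulation:

```python
import numpy as np

def train_error_map(F, E, rank=3):
    """Learn a reduced-rank linear map from forecast anomalies F (n x p)
    to forecast errors E (n x q) using SVD-truncated least squares.
    Illustrative stand-in for the paper's SVD-based inverse mapping."""
    f_mean, e_mean = F.mean(axis=0), E.mean(axis=0)
    Fa, Ea = F - f_mean, E - e_mean
    U, s, Vt = np.linalg.svd(Fa, full_matrices=False)
    k = min(rank, int((s > 1e-10).sum()))
    # Truncated pseudo-inverse: keep only the k leading singular modes.
    Fa_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
    M = Fa_pinv @ Ea                      # p x q regression matrix
    return f_mean, e_mean, M

def correct_forecast(f_new, f_mean, e_mean, M):
    """Subtract the estimated non-systematic error from a new forecast."""
    return f_new - (e_mean + (f_new - f_mean) @ M)
```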
Analogue Correction Method of Errors by Combining Statistical and Dynamical Methods
REN Hongli; CHOU Jifan
2006-01-01
Based on the atmospheric analogy principle, the inverse problem in which the information of historical analogue data is utilized to estimate model errors is put forward, and a method of analogue correction of errors (ACE) is developed in this paper. The ACE can effectively combine statistical and dynamical methods, and does not require changing the current numerical prediction models. The new method not only adequately utilizes dynamical achievements but can also reasonably absorb the information of a great many analogues in historical data, in order to reduce model errors and improve forecast skill. Furthermore, the ACE may identify specific historical data for the solution of the inverse problem in terms of the particularity of the current forecast. The qualitative analyses show that the ACE is theoretically equivalent to the principle of the previous analogue-dynamical model, but does not require rebuilding the complicated analogue-deviation model, and so has better feasibility and operational prospects. Moreover, under ideal conditions, when the numerical models or the historical analogues are perfect, the forecast of the ACE reduces to the forecast of the dynamical or the statistical method, respectively.
A power supply error correction method for single-ended digital audio class D amplifiers
Yu, Zeqi; Wang, Fengqin; Fan, Yangyu
2016-12-01
In single-ended digital audio class D amplifiers (CDAs), the errors caused by power supply noise in the power stages degrade the output performance seriously. In this article, a novel power supply error correction method is proposed. This method introduces the power supply noise of the power stage into the digital signal processing block and builds a power supply error corrector between the interpolation filter and the uniform-sampling pulse width modulation (UPWM) lineariser to pre-correct the power supply error in the single-ended digital audio CDA. The theoretical analysis and implementation of the method are also presented. To verify the effectiveness of the method, a two-channel single-ended digital audio CDA with different power supply error correction methods is designed, simulated, implemented and tested. The simulation and test results obtained show that the method can greatly reduce the error caused by the power supply noise with low hardware cost, and that the CDA with the proposed method can achieve a total harmonic distortion + noise (THD + N) of 0.058% for a -3 dBFS, 1 kHz input when a 55 V linear unregulated direct current (DC) power supply (with the -51 dBFS, 100 Hz power supply noise) is used in the power stages.
Position error correction in absolute surface measurement based on a multi-angle averaging method
Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin
2017-04-01
We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences between shear rotation measurements in overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions are minimized to determine the true values of the unknowns: the Zernike polynomial coefficients and the rotation angle. Experimental results show the validity of the proposed method.
Sørensen, Stefan; Nielsen, Hans Ove
2002-01-01
In this paper we present a comparison of different line and cable series impedance calculation methods, in which the correction of a discovered PSCAD/EMTDC v.3.0.8 calculation error in the cable series impedance results in a deviation under 0.1%, instead of the approximately 10% given by the previous method...
1998-01-01
To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper considers these values, analyzes some errors, and proposes some effective correction techniques.
Error analysis of motion correction method for laser scanning of moving objects
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing the development of the very few methods capable of catering to the problem of object motion during scanning, and all the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes the availability of position and orientation information for the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other "motion correction" methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behaviour and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to gain insights into the optimal utilization of the available components for achieving the best results.
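The core of such a motion-correction step, mapping each time-stamped world-frame laser return into the moving object's body frame using the POS-supplied pose, can be sketched as follows. This is a simplified illustration of the geometric idea only, not the paper's full error model, and the `pose` interface is our own assumption:

```python
import numpy as np

def motion_correct(points_world, t, pose):
    """Transform scanner returns into the moving object's body frame.

    points_world : (n, 3) laser points, point i acquired at time t[i]
    pose(ti)     : returns (R, p), the object's rotation (3x3) and
                   position (3,) at time ti, e.g. interpolated from an
                   on-board POS system.
    Body-frame point: x_body = R^T (x_world - p), so the reconstructed
    geometry is rigid even though the object moved during the scan.
    """
    out = np.empty_like(np.asarray(points_world, dtype=float))
    for i, (x, ti) in enumerate(zip(points_world, t)):
        R, p = pose(ti)
        out[i] = R.T @ (np.asarray(x, dtype=float) - p)
    return out
```

Pose errors (position, orientation, timing) propagate through this transform point by point, which is exactly the error budget the paper analyzes.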
Analogue correction method of errors and its application to numerical weather prediction
Gao Li; Ren Hong-Li; Li Jian-Ping; Chou Ji-Fan
2006-01-01
In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information in a large body of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences in similarity between different historical analogues and the current initial state are taken as the weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the performance of the ACE, which corrects model errors based on an estimate from the errors of four historical analogue predictions, is not only better than that of the scheme which only introduces the correction from every single analogue prediction, but is also better than that of the T63 model itself.
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivating examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
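The regression calibration (RC) idea compared in this abstract can be illustrated in its simplest form, which is not the authors' Cox-model procedure: with an additive error model, the naive slope is attenuated by the reliability ratio λ = σ²ₓ/(σ²ₓ + σ²ᵤ), and RC divides it back out. All variable names and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(0.0, 1.0, n)            # true exposure (e.g. a rate of change)
u = rng.normal(0.0, 0.8, n)            # additive measurement error
w = x + u                              # observed, error-prone exposure
y = 2.0 * x + rng.normal(0.0, 1.0, n)  # outcome truly driven by x

# Naive slope of y on the error-prone w is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Reliability ratio; sigma_u assumed known (e.g. from replicate measurements)
lam = np.var(x) / (np.var(x) + 0.8**2)
beta_rc = beta_naive / lam             # regression-calibration correction

print(round(beta_naive, 2), round(beta_rc, 2))  # beta_rc should be near 2.0
```

In the linear case the attenuation factor is exact; in the Cox-model setting studied in the abstract, RC is only approximate, which is why the authors compare it against SIMEX across error magnitudes.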
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...
Structured methods for identifying and correcting potential human errors in space operations.
Nelson, W R; Haney, L N; Ostrom, L T; Richards, R E
1998-01-01
Human performance plays a significant role in the development and operation of any complex system, and human errors are significant contributors to degraded performance, incidents, and accidents for technologies as diverse as medical systems, commercial aircraft, offshore oil platforms, nuclear power plants, and space systems. To date, serious accidents attributed to human error have fortunately been rare in space operations. However, as flight rates go up and the duration of space missions increases, the accident rate could increase unless proactive action is taken to identify and correct potential human errors in space operations. The Idaho National Engineering and Environmental Laboratory (INEEL) has developed and applied structured methods of human error analysis to identify potential human errors, assess their effects on system performance, and develop strategies to prevent the errors or mitigate their consequences. These methods are being applied in NASA-sponsored programs to the domain of commercial aviation, focusing on airplane maintenance and air traffic management. The application of human error analysis to space operations could help minimize the risks associated with human error in the design and operation of future space systems.
A simple method of correcting magnitudes for the errors introduced by atmospheric refraction
Kruszewski, A
2003-01-01
We show that errors due to atmospheric refraction are present in magnitudes determined with the Difference Image Analysis method. In the case of single, unblended stars the size of the effect agrees with the theoretical prediction. But when the blending is strong, which is quite common in a dense field, the effect of atmospheric refraction can be amplified to the extent that some apparently variable stars with the largest amplitudes of variation are solely artifacts of refraction. We present a simple method of correcting for this kind of error.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
A method to compute SEU fault probabilities in memory arrays with error correction
Gercek, Gokhan
1994-01-01
With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiations are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory is read randomly whose distribution is assumed to be known. In such a scenario, a mishap is defined as two SEU's corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
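A simplified version of the mishap probability the abstract describes can be sketched under explicit assumptions that are mine, not necessarily the paper's: upsets arrive in each word as a Poisson process, and reads occur at a fixed interval rather than with the paper's general random-read distribution. A mishap is then two or more upsets landing in the same word within one scrub interval, since the single-error-correcting Hamming code fixes lone upsets on each read.

```python
import math

def mishap_prob_word(lam, t):
    """P(>= 2 Poisson upsets in one word during interval t),
    with per-word upset rate lam (upsets/hour)."""
    mu = lam * t
    return 1.0 - math.exp(-mu) * (1.0 + mu)   # 1 - P(0) - P(1)

def memory_mishap_prob(lam, t_read, n_words, mission_hours):
    """Probability that any word suffers a double upset between scrubs
    over the mission, assuming independent words and fixed read intervals."""
    intervals = mission_hours / t_read
    p_word = mishap_prob_word(lam, t_read)
    p_ok = (1.0 - p_word) ** (n_words * intervals)
    return 1.0 - p_ok

# Illustrative numbers only: 1M-word memory, 1e-6 upsets/word/hour,
# scrubbed hourly, 1000-hour mission
p = memory_mishap_prob(lam=1e-6, t_read=1.0, n_words=1e6, mission_hours=1000.0)
```

For small μ the per-word term reduces to μ²/2, so halving the read interval quarters the per-interval mishap probability, which is the basic design lever the paper's analysis quantifies.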
Cai, Chenglin; Li, Xiaohui; Wu, Haitao
2010-12-01
To solve the problems of applying the novel wide-area differential method based on satellite clock and ephemeris relative correction (CERC) to non-geostationary orbit satellite constellations, a virtual reference satellite (VRS) differential principle using relative correction of satellite ephemeris errors is proposed. It is referred to as the VRS differential principle, and the elaboration focuses on the construction of the pseudo-range errors of the VRS. Qualitative analysis shows that the impact of the satellites' clock and ephemeris errors on positioning can essentially be removed, and users' positioning errors are near zero. Simulation analysis of the differential performance verifies that the method is universal across satellite navigation systems with geostationary orbit (GEO), medium Earth orbit (MEO) or hybrid orbit constellations, and that it is insensitive to abnormalities in a satellite's ephemeris and clock. Moreover, the real-time positioning accuracy of differential users can be maintained within several decimeters once the pseudo-range measurement noise is effectively weakened or eliminated.
Predictor-based error correction method in short-term climate prediction
(No author listed)
2008-01-01
Following the basic idea of combining dynamical and statistical methods in short-term climate prediction, a new method of predictor-based error correction (PREC) is put forward in order to make effective use of statistical experience in dynamical prediction. Analyses show that the PREC can reasonably utilize the significant correlations between predictors and model prediction errors, and correct prediction errors by establishing a statistical prediction model. The PREC is further applied to cross-validation experiments of dynamical seasonal prediction on the operational atmosphere-ocean coupled general circulation model of the China Meteorological Administration/National Climate Center, selecting the sea surface temperature index in the Niño3 region as the physical predictor representing the prevailing ENSO-cycle mode of interannual variability in the climate system. The prediction results for summer mean circulation and total precipitation show that the PREC can improve predictive skill to some extent. Thus the PREC provides a new approach for improving short-term climate prediction.
Experimental repetitive quantum error correction.
Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer
2011-05-27
The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
A method for multiplex gene synthesis employing error correction based on expression.
Hsiau, Timothy H-C; Sukovich, David; Elms, Phillip; Prince, Robin N; Strittmatter, Tobias; Ruan, Paul; Curry, Bo; Anderson, Paige; Sampson, Jeff; Anderson, J Christopher
2015-01-01
Our ability to engineer organisms with new biosynthetic pathways and genetic circuits is limited by the availability of protein characterization data and the cost of synthetic DNA. With new tools for reading and writing DNA, there are opportunities for scalable assays that more efficiently and cost effectively mine for biochemical protein characteristics. To that end, we have developed the Multiplex Library Synthesis and Expression Correction (MuLSEC) method for rapid assembly, error correction, and expression characterization of many genes as a pooled library. This methodology enables gene synthesis from microarray-synthesized oligonucleotide pools with a one-pot technique, eliminating the need for robotic liquid handling. Post assembly, the gene library is subjected to an ampicillin based quality control selection, which serves as both an error correction step and a selection for proteins that are properly expressed and folded in E. coli. Next generation sequencing of post selection DNA enables quantitative analysis of gene expression characteristics. We demonstrate the feasibility of this approach by building and testing over 90 genes for empirical evidence of soluble expression. This technique reduces the problem of part characterization to multiplex oligonucleotide synthesis and deep sequencing, two technologies under extensive development with projected cost reduction.
Catalytic quantum error correction
Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu
2006-01-01
We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.
(No author listed)
2001-01-01
This paper presents a method for non-linear correction of a broadband LFMCW signal utilizing its relative non-linear error. The derivation procedure and the results, simulated by computer and tested on a practical system, are also introduced. The method has two obvious advantages compared with previous methods: (1) the correction has no relation to the delay time td or the sweep bandwidth B; (2) the inherent non-linear error of the VCO has no influence on the correction and its final results.
Experimental demonstration of topological error correction
2012-01-01
Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...
Feature Referenced Error Correction Apparatus.
A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)
Zhou, Hui; Kunz, Thomas; Schwartz, Howard
2011-01-01
Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.
Video Error Correction Using Steganography
Robie, David L.
2002-01-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Correction for quadrature errors
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...
Experimental demonstration of topological error correction.
Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei
2012-02-22
Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.
Fade-resistant forward error correction method for free-space optical communications systems
Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.
2007-10-02
Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.
Method for detection and correction of errors in speech pitch period estimates
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal, for use in a speech coder or the like. An average is calculated over the nonzero values of the pitch period estimates received since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
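The detect-and-correct rule described in this abstract can be sketched directly; the class name, the substitution of the running average for a rejected estimate, and the reset threshold of three are illustrative choices of mine, not details from the patent.

```python
class PitchTracker:
    """Sketch of the abstract's rule: accept an estimate if it lies within
    0.75-1.25 times the running average of nonzero estimates since the
    last reset; otherwise correct it, and reset after too many rejections."""

    def __init__(self, max_bad=3):
        self.values = []        # accepted nonzero estimates since last reset
        self.max_bad = max_bad  # successive corrections that trigger a reset
        self.bad_run = 0

    def process(self, estimate):
        if estimate == 0:                 # unvoiced frame: pass through
            return estimate
        if not self.values:               # first estimate after a reset
            self.values.append(estimate)
            return estimate
        avg = sum(self.values) / len(self.values)
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self.bad_run = 0
            self.values.append(estimate)  # accepted as correct
            return estimate
        self.bad_run += 1
        if self.bad_run > self.max_bad:   # likely a new speaker:
            self.values = [estimate]      # discard history, restart average
            self.bad_run = 0
            return estimate
        return avg                        # substitute a corrected value
```

For example, the stream 100, 102, 250, 98 keeps 100 and 102, rejects the outlier 250 in favor of the running average, and then accepts 98 again.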
Binary Error Correcting Network Codes
Wang, Qiwen; Li, Shuo-Yen Robert
2011-01-01
We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.
Error correcting coding for OTN
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems.
Quantum error correction for beginners.
Devitt, Simon J; Munro, William J; Nemoto, Kae
2013-07-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
Abbas, M. M.; Shapiro, G. L.; Conrath, B. J.; Kunde, V. G.; Maguire, W. C.
1984-01-01
Thermal emission measurements of the Earth's stratospheric limb from space platforms require an accurate knowledge of the observation angles for retrieval of temperature and constituent distributions. Without the use of expensive stabilizing systems, however, most observational instruments do not meet the required pointing accuracies, leading to large errors in the retrieval of atmospheric data. This paper describes a self-consistent method of correcting errors in pointing angles by using information contained in the observed spectrum. Numerical results based on temperature inversions of synthetic thermal emission spectra, with assumed random errors in pointing angles, are presented.
Saber Moghaddam Ranjbar AK
2013-10-01
Objectives: This study was designed to determine the level of awareness of and attitudes toward refractive correction methods in a randomly selected population in Mashhad, Iran. Materials and Methods: A random cluster sampling method was applied to choose 193 subjects aged 12 years and above from the Mashhad population. A structured questionnaire with open-ended and closed-ended questions was designed to gather the participants' demographic data (gender, age, educational status and occupation) as well as their awareness of and attitudes toward refractive correction methods (spectacles, contact lenses and refractive surgery). Results: Overall, 39% of the participants had a clear perception of the terms 'ophthalmologist' and 'optometrist'. 80.3%, 87% and 71% of respondents had no information about contact lenses used in place of spectacles, cosmetic contact lenses, and contact lenses with both refractive correction and cosmetic properties, respectively. 82.5% of participants were not aware of the possibility of refractive surgery for improving their eyesight and decreasing their dependency on spectacles. Awareness of the adverse effects of contact lenses and of refractive surgery was only 16% and 8%, respectively. Conclusion: Awareness and perception of refractive correction methods were low among the participants of this study. Although ophthalmologists were the first source of consultation on sight impairments among respondents, a predominant percentage of subjects were not even aware of the obvious differences between an ophthalmologist and an optometrist. These findings emphasize the necessity of proper public education on ophthalmic care and the available services, especially the new correction methods, for improvement of quality of life.
A method for correcting aspect solution errors in ROSAT HRI observations of compact sources
Morse, Jon A.
1994-01-01
X-ray point sources observed with the ROSAT High Resolution Imager (HRI) often appear elongated over scales of approximately 5 sec-10 sec from the image core. This elongation has been attributed to errors in the attitude correction as the satellite is wobbled during the observations, and affects sources with both soft and hard X-ray spectra. In this paper, I report the results of an attempt to rid a high signal-to-noise observation of the soft X-ray point source HZ 43 of its characteristic elongation. I divided the observation into 181 separate images, each containing photons from only a small region on the detector through which the source passed during the satellite's wobble. By measuring the positions of the individual image centroids, I found clear evidence for systematic offsets from a common mean by up to approximately +/- 3 sec in both right ascension and declination as a function of phase in the satellite wobble. Shifting the subimages to a common center and then restacking them into a single image measurably improved the symmetry of the point-spread function. HRI observations are wobbled primarily to smooth out variations in the pixel-to-pixel sensitivity of the detector and also to extend the lifetime of the microchannel plates in the detector since these decay at a given location as a function of the number of photons detected. However, the elongations introduced by the aspect errors inhibit the identification of possible extended X-ray emission associated with sources such as pulsars and active galactic nuclei. In light of these results, I suggest that until the aspect errors are understood, observations of compact sources, where this effect may be important, should not be wobbled.
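The centroid-and-restack procedure this abstract describes can be sketched in a few lines. This is a simplified illustration, not the author's pipeline: it assumes the subimages have already been binned by wobble phase, and uses integer-pixel shifts via `np.roll` rather than sub-pixel interpolation.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of an image."""
    ys, xs = np.indices(img.shape)
    tot = img.sum()
    return (ys * img).sum() / tot, (xs * img).sum() / tot

def restack(subimages):
    """Shift each per-phase subimage so its centroid lands on the mean
    centroid, then sum, removing the wobble-induced elongation."""
    cents = [centroid(im) for im in subimages]
    cy = np.mean([c[0] for c in cents])
    cx = np.mean([c[1] for c in cents])
    out = np.zeros_like(subimages[0], dtype=float)
    for im, (y, x) in zip(subimages, cents):
        out += np.roll(im, (int(round(cy - y)), int(round(cx - x))),
                       axis=(0, 1))
    return out

# Toy demo: the same point source detected at wobble-dependent offsets
imgs = []
for dy, dx in [(0, 0), (3, -2), (-3, 2)]:
    im = np.zeros((32, 32))
    im[16 + dy, 16 + dx] = 1.0
    imgs.append(im)
stacked = restack(imgs)   # all three detections pile onto one pixel
```

The paper's version additionally measures the offsets as a function of wobble phase, which is what exposes the systematic approximately ±3 arcsec aspect error.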
ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL
1994-01-01
Introduction: Errors are unavoidable in language learning; however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method, which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students' ability to cope with difficult subjects and materials, i.e., to develop the students' minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)
Quantum Steganography and Quantum Error-Correction
Shaw, Bilal A
2010-01-01
In this thesis we first discuss the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) code and the Steane code. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error-correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit e...
ERROR CORRECTION IN HIGH SPEED ARITHMETIC,
The errors due to a faulty high-speed multiplier are shown to be iterative in nature. These errors are analyzed in various aspects. The arithmetic coding technique is suggested for improving the reliability of high-speed multipliers. Through a number-theoretic investigation, a large class of arithmetic codes for single iterative error correction is developed. The codes are shown to have near-optimal rates and to admit a simple decoding method. The implementation of these codes appears highly practical. (Author)
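The divisibility check at the heart of arithmetic coding can be illustrated with an AN code, the classic member of this family (a sketch, not the specific codes developed in the paper): a value n is transmitted as the product A*n, and any received word not divisible by A signals an arithmetic error. With A = 3, every single bit flip is caught because 2^i mod 3 is never zero.

```python
A = 3  # check modulus; 2**i % 3 is never 0, so any single bit flip is caught

def encode(n):
    """AN code: transmit the product A*n instead of n."""
    return A * n

def check(m):
    """A valid codeword is always divisible by A."""
    return m % A == 0

def decode(m):
    """Recover n from a valid codeword; reject anything else."""
    if not check(m):
        raise ValueError("arithmetic error detected")
    return m // A
```

The same residue check survives addition, which is why AN-type codes suit arithmetic units such as multipliers and adders.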
5 CFR 1601.34 - Error correction.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...
Mu, Y.; Sheng, G. M.; Sun, P. N.
2017-05-01
Real-time fault diagnosis technology for nuclear power plants (NPPs) is of great significance for improving reactor safety and economy. Failure samples from nuclear power plants are difficult to obtain, and the support vector machine (SVM) is an effective algorithm for such small-sample problems. An NPP is a very complex system, so in practice many types of failures may occur. The ECOC matrix is constructed from a Hadamard error-correcting code, and decoding uses the minimum-Hamming-distance method. The base models are established with the lib-SVM algorithm. The results show that this method can diagnose NPP faults effectively.
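The Hadamard-based ECOC decoding step can be sketched as follows. This is an illustration of the coding idea only, not the paper's NPP models: the binary base classifiers (SVMs in the paper) are assumed to have already produced a bit vector, and decoding picks the class whose codeword is nearest in Hamming distance.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Each class gets a row of the Hadamard matrix as a 0/1 codeword;
# each column defines one binary base classifier.
H01 = (hadamard(8) > 0).astype(int)
codebook = H01[1:, 1:]   # drop the all-ones row/column: 7 classes x 7 bits

def ecoc_decode(bits, codebook):
    """Return the class whose codeword is nearest in Hamming distance."""
    dists = (codebook != np.asarray(bits)).sum(axis=1)
    return int(np.argmin(dists))
```

Rows of the order-8 Hadamard matrix are pairwise Hamming distance 4 apart, so one misbehaving base classifier still leaves the correct class as the unique nearest codeword.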
Probabilistic error correction for RNA sequencing.
Le, Hai-Son; Schulz, Marcel H; McCauley, Brenna M; Hinman, Veronica F; Bar-Joseph, Ziv
2013-05-01
Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing. Here we present SEquencing Error CorrEction in Rna-seq data (SEECER), a hidden Markov Model (HMM)-based method, which is the first to successfully address these problems. SEECER efficiently learns hundreds of thousands of HMMs and uses these to correct sequencing errors. Using human RNA-Seq data, we show that SEECER greatly improves on previous methods in terms of quality of read alignment to the genome and assembly accuracy. To illustrate the usefulness of SEECER for de novo transcriptome studies, we generated new RNA-Seq data to study the development of the sea cucumber Parastichopus parvimensis. Our corrected assembled transcripts shed new light on two important stages in sea cucumber development. Comparison of the assembled transcripts to known transcripts in other species has also revealed novel transcripts that are unique to sea cucumber, some of which we have experimentally validated. Supporting website: http://sb.cs.cmu.edu/seecer/.
Raz, Ofir; Ben Yehezkel, Tuval
2015-01-01
The field of synthetic biology is fueled by steady advances in our ability to produce designer genetic material on demand. This relatively new technological capability stems from advancements in DNA construction biochemistry as well as from supporting computational technologies, such as tools for specifying large DNA libraries and for planning and optimizing their actual physical construction. In particular, the design, planning, and construction of user-specified, combinatorial DNA libraries are of increasing interest. Here we present some of the computational tools we have built over the past decade to support the multidisciplinary task of constructing DNA molecules and their libraries. These technologies encompass computational methods for (1) planning and optimizing the construction of DNA molecules and libraries, (2) the utilization of existing natural or synthetic fragments, (3) identification of shared fragments, (4) planning primers and overlaps, (5) minimizing the number of assembly steps required, and (6) correcting erroneous constructs. Other computational technologies that are important in the overall process of DNA construction, such as (1) computational tools for efficient specification and intuitive visualization of large DNA libraries (which aid in debugging library design pre-construction) and (2) automated liquid-handling robotic programming [Linshiz et al., Mol Syst Biol 4:191, 2008; Shabi et al., Syst Synth Biol 4:227-236, 2010], which aid in the construction process itself, have been omitted due to length limitations.
Philip D. RABINOWITZ; Zhiqiang ZHOU
2007-01-01
In recent years more and more multi-array logging tools, such as the array induction and the array laterolog, are applied in place of conventional logging tools, resulting in increased resolution, better radial and vertical sounding capability, and other features. Multi-array logging tools acquire several times more individual measurements than conventional logging tools. In addition to the new information contained in these data, there is a certain redundancy among the measurements. Taken together, the measurements compose a large matrix. Provided the measurements are error-free, the elements of this matrix show certain consistencies. Taking advantage of these consistencies, an innovative method is developed to detect and correct errors in the raw measurements of the array resistivity logging tool and to evaluate the quality of the data. The method can be described in several steps. First, data consistency patterns are identified based on the physics of the measurements. Second, the measurements are compared against the consistency patterns for error and bad-data detection. Third, the erroneous data are eliminated and the measurements are re-constructed according to the consistency patterns. Finally, the data quality is evaluated by comparing the raw measurements with the re-constructed measurements. The method can be applied to all array-type logging tools, such as the array induction tool and the array resistivity tool. This paper describes the method and illustrates its application with the High Definition Lateral Log (HDLL, Baker Atlas) instrument. To demonstrate the efficiency of the method, several field examples are shown and discussed.
Norrozila Sulaiman
2014-10-01
Transmission of video over ad hoc networks has become one of the most important and interesting subjects of study for researchers and programmers because of the strong relationship between video applications and frequent users of various mobile devices, such as laptops, PDAs, and mobile phones, in all aspects of life. However, many challenges exist in transferring video over ad hoc networks, such as packet loss and congestion (i.e., impairments at the network layer), multipath fading (i.e., impairments at the physical layer) [1], and link failure; these challenges negatively affect the quality of the perceived video [2]. This study has investigated video transfer over ad hoc networks. The main challenges of transferring video over ad hoc networks, the types of errors that may occur during video transmission, various types of video mechanisms, error correction methods, and the different Quality of Service (QoS) parameters that affect the quality of the received video are also investigated.
Immediate error correction process following sleep deprivation
HSIEH, SHULAN; CHENG, I‐CHEN; TSAI, LING‐LING
2007-01-01
...) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event‐related potentials (ERPs...
Jerzy Roj
2016-08-01
The paper presents two methods of dynamic error correction applied to transducers used for the measurement of gas concentration. One of them is based on a parametric model of the transducer dynamics, and the second uses the artificial neural network (ANN) technique. The article describes research on the dynamic properties of a gas concentration measuring transducer with a typical tin-dioxide-based sensor. Its response time is about 8 min, which may not be acceptable in many applications. On the basis of these studies, a parametric model of the transducer dynamics and an adequate correction algorithm have been developed. The results obtained in the research on the transducer were also used for training and testing the ANN, which was implemented for the dynamic correction task. Despite the simplicity of the models used, both methods allowed a significant reduction of the transducer's response time. For the algorithm based on the parametric model the response time was shorter by approximately a factor of eight (reduced to 40–80 s, i.e., about 2–4 sample periods), whereas with the use of the ANN the output signal was practically settled after a time equal to one sampling period, i.e., 20 s. In addition, the use of the ANN reduced the impact of the transducer's dynamic non-linearity on the correction effectiveness.
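The parametric approach can be illustrated with a toy first-order sensor model; this is an assumption for illustration, as the paper's actual transducer model is not given here. If the sensor obeys tau*dy/dt + y = u, inverting the discretized model reconstructs the input from the output and its discrete derivative, so a slow sensor reading settles in a single sample period.

```python
def simulate_first_order(u, tau, dt, y0=0.0):
    """Euler-discretized first-order sensor model: tau*dy/dt + y = u."""
    y, prev = [], y0
    for uk in u:
        prev = prev + (dt / tau) * (uk - prev)
        y.append(prev)
    return y

def correct_first_order(y, tau, dt, y0=0.0):
    """Invert the discretized model above:
    u[k] = y[k-1] + (tau/dt) * (y[k] - y[k-1])."""
    u, prev = [], y0
    for yk in y:
        u.append(prev + (tau / dt) * (yk - prev))
        prev = yk
    return u
```

With tau = 480 s (an 8-minute response time) and dt = 20 s, the raw output of a step input is still far from its final value after several samples, while the corrected signal settles immediately; this mirrors the order-of-magnitude speedup reported in the abstract.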
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
Rank Modulation for Translocation Error Correction
Farnoud, Farzad; Milenkovic, Olgica
2012-01-01
We consider rank modulation codes for flash memories that allow for handling arbitrary charge drop errors. Unlike classical rank modulation codes used for correcting errors that manifest themselves as swaps of two adjacently ranked elements, the proposed translocation rank codes account for more general forms of errors that arise in storage systems. Translocations represent a natural extension of the notion of adjacent transpositions and as such may be analyzed using related concepts in combinatorics and rank modulation coding. Our results include tight bounds on the capacity of translocation rank codes, construction techniques for asymptotically good codes, as well as simple decoding methods for one class of structured codes. As part of our exposition, we also highlight the close connections between the new code family and permutations with short common subsequences, deletion and insertion error-correcting codes for permutations and permutation arrays.
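The distance measure underlying translocation codes is the Ulam distance between permutations: n minus the length of their longest common subsequence. Under that standard definition (a sketch of the metric only, not the paper's codes or decoders), it can be computed in O(n log n) by relabeling one permutation in the other's order and running a patience-sorting longest-increasing-subsequence pass.

```python
from bisect import bisect_left

def ulam_distance(p, q):
    """Translocation (Ulam) distance between two permutations of the same
    set: n minus the length of their longest common subsequence, computed
    as n minus the LIS of p relabeled by q's order."""
    pos = {v: i for i, v in enumerate(q)}
    seq = [pos[v] for v in p]
    tails = []                       # patience-sorting LIS
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(p) - len(tails)
```

A single translocation (removing an element and reinserting it elsewhere) changes the distance by exactly 1, whereas an adjacent transposition is a special case, which is why translocations strictly generalize the classical rank modulation error model.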
Superdense Coding Interleaved with Forward Error Correction
Sadlier, Ronald J.
2016-01-01
Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated by interleaving the FEC codewords prior to transmission. We conclude that classical FEC with interleaving is a useful method to improve the performance of near-term demonstrations of superdense coding.
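The interleaving idea can be sketched with a plain block interleaver (an illustrative choice; the abstract does not specify the interleaver structure used): codeword symbols are written into a matrix row by row and transmitted column by column, so a burst of consecutive channel errors is spread across many codewords, each of which the FEC code can then repair on its own.

```python
def interleave(symbols, rows):
    """Write `rows` codewords row-by-row, transmit column-by-column."""
    cols = len(symbols) // rows
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows):
    """Undo interleave(): read the column-major stream back into rows."""
    cols = len(symbols) // rows
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]
```

With 3 codewords of 4 symbols each, a burst of 3 consecutive channel errors lands on at most one symbol per codeword after deinterleaving, which is exactly the regime where a single-error-correcting FEC code recovers everything.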
Correction of errors in power measurements
Pedersen, Knud Ole Helgesen
1998-01-01
Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.
Robust Quantum Error Correction via Convex Optimization
Kosut, R L; Lidar, D A
2007-01-01
Quantum error correction procedures have traditionally been developed for specific error models and are not robust against uncertainty in the errors. Using a semidefinite program optimization approach, we find high-fidelity quantum error correction procedures that provide robust encoding and recovery effective against significant uncertainty in the error system. We present numerical examples for 3-, 5-, and 7-qubit codes. Our approach requires as input a description of the error channel, which can be provided via quantum process tomography.
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book's chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error correcting codes (Polar codes, non-binary LDPC, product codes, etc.). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.
Algorithmic Error Correction of Impedance Measuring Sensors
Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira
2009-01-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177
Algorithmic Error Correction of Impedance Measuring Sensors
Vira Tyrsa
2009-12-01
This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance.
5 CFR 1604.6 - Error correction.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1604.6 Section 1604.6 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD UNIFORMED SERVICES ACCOUNTS § 1604.6 Error correction. (a) General rule. A service member's employing agency must correct the service member's...
Using Online Annotations to Support Error Correction and Corrective Feedback
Yeh, Shiou-Wen; Lo, Jia-Jiunn
2009-01-01
Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…
Improved Error Thresholds for Measurement-Free Error Correction
Crow, Daniel; Joynt, Robert; Saffman, M.
2016-09-01
Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10^-3 to 10^-4, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
Allodji, Rodrigue S; Thiébaut, Anne C M; Leuraud, Klervi; Rage, Estelle; Henry, Stéphane; Laurier, Dominique; Bénichou, Jacques
2012-12-30
A broad variety of methods for measurement error (ME) correction have been developed, but these methods have rarely been applied possibly because their ability to correct ME is poorly understood. We carried out a simulation study to assess the performance of three error-correction methods: two variants of regression calibration (the substitution method and the estimation calibration method) and the simulation extrapolation (SIMEX) method. Features of the simulated cohorts were borrowed from the French Uranium Miners' Cohort in which exposure to radon had been documented from 1946 to 1999. In the absence of ME correction, we observed a severe attenuation of the true effect of radon exposure, with a negative relative bias of the order of 60% on the excess relative risk of lung cancer death. In the main scenario considered, that is, when ME characteristics previously determined as most plausible from the French Uranium Miners' Cohort were used both to generate exposure data and to correct for ME at the analysis stage, all three error-correction methods showed a noticeable but partial reduction of the attenuation bias, with a slight advantage for the SIMEX method. However, the performance of the three correction methods highly depended on the accurate determination of the characteristics of ME. In particular, we encountered severe overestimation in some scenarios with the SIMEX method, and we observed lack of correction with the three methods in some other scenarios. For illustration, we also applied and compared the proposed methods on the real data set from the French Uranium Miners' Cohort study.
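The SIMEX idea evaluated in the study can be sketched for a much simpler setting than the cohort analysis: simple linear regression with classical additive measurement error (an illustration only; the paper works with excess-relative-risk models, not ordinary least squares). Extra noise of variance lambda*sigma_u^2 is deliberately added to the observed exposures, the attenuated slope is re-estimated at each lambda, and the trend is extrapolated back to lambda = -1, the hypothetical error-free case.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0),
                reps=200, seed=0):
    """SIMEX for simple linear regression with classical measurement error:
    re-fit the slope after adding noise of variance lambda*sigma_u**2,
    then extrapolate the slope trend to lambda = -1 with a quadratic fit."""
    rng = np.random.default_rng(seed)
    x_obs = np.asarray(x_obs, float)
    y = np.asarray(y, float)

    lams, slopes = [0.0], [np.polyfit(x_obs, y, 1)[0]]
    for lam in lambdas:
        fits = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u,
                                              x_obs.size), y, 1)[0]
                for _ in range(reps)]
        lams.append(lam)
        slopes.append(np.mean(fits))
    coef = np.polyfit(lams, slopes, 2)     # quadratic in lambda
    return float(np.polyval(coef, -1.0))   # extrapolate to lambda = -1
```

The correction is only partial, consistent with the attenuation-bias reduction (rather than elimination) reported in the simulation study, and it relies on sigma_u being accurately known, echoing the paper's finding that performance hinges on correctly characterizing the measurement error.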
Optimal correction of independent and correlated errors
Jacobsen, Sol H.; Mintert, Florian
2013-01-01
We identify optimal quantum error correction codes for situations that do not admit perfect correction. We provide analytic n-qubit results for standard cases with correlated errors on multiple qubits and demonstrate significant improvements to the fidelity bounds and optimal entanglement decay profiles.
A Hybrid Approach for Correcting Grammatical Errors
Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…
Quantum Error Correction Beyond Completely Positive Maps
Shabani, A.; Lidar, D. A.
2006-01-01
By introducing an operator sum representation for arbitrary linear maps, we develop a generalized theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This theory of "linear quantum error correction" is applicable in cases where the standard and restrictive assumption of a factorized initial system-bath state does not apply.
MEI Jidan; ZHAI Chunpin; WANG Yilin; HUI Junying
2011-01-01
The technology of underwater acoustic image measurement is a passive locating method with high precision in the near field. To improve the precision of underwater acoustic image measurement, the influence of the depth scan error was analyzed and the correcti
Godart, J.; Korevaar, E. W.; Visser, R.; Wauben, D. J. L.; van t Veld, Aart
2011-01-01
The COMPASS system (IBA Dosimetry) is a quality assurance (QA) tool which reconstructs 3D doses inside a phantom or a patient CT. The dose is predicted according to the RT plan with a correction derived from 2D measurements of a matrix detector. This correction method is necessary since a direct recon
Robot learning and error correction
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.
The NASTRAN Error Correction Information System (ECIS)
Rosser, D. C., Jr.; Rogers, J. L., Jr.
1975-01-01
A data management procedure, called Error Correction Information System (ECIS), is described. The purpose of this system is to implement the rapid transmittal of error information between the NASTRAN Systems Management Office (NSMO) and the NASTRAN user community. The features of ECIS and its operational status are summarized. The mode of operation for ECIS is compared to the previous error correction procedures. It is shown how the user community can have access to error information much more rapidly when using ECIS. Flow charts and time tables characterize the convenience and time saving features of ECIS.
Galois Field Based Very Fast and Compact Error Correcting Technique
Alin Sindhu, A.
2014-01-01
As technology improves, memory devices are becoming larger, so powerful error correction codes are needed. Error correction codes are commonly used to protect memories from soft errors, which change the logical value of memory cells without damaging the circuit. These codes can correct a large number of errors, but generally require complex decoders. To avoid this decoding complexity, this work uses Euclidean geometry LDPC codes with a one-step majority-logic decoding technique. This method detects words having errors in the first iteration of the majority-logic decoding process and reduces the decoding time by stopping the decoding process when no errors are detected, which also reduces the memory access time. The results obtained with this technique show that it is an effective and compact error correcting technique.
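One-step majority-logic decoding can be sketched generically (an illustration with a trivial length-3 code, not the Euclidean geometry LDPC codes of the paper): each bit has a set of check sums orthogonal on it, and the bit is flipped when a majority of those checks fail. If all checks pass on the first pass, decoding stops immediately, which is the early-exit behavior the abstract exploits.

```python
def majority_logic_decode(word, checks_per_bit):
    """One-step majority-logic decoding: for each bit, evaluate the parity
    of the check sums orthogonal on that bit and flip the bit if a strict
    majority of its checks fail.  checks_per_bit[i] lists the index sets
    of the checks orthogonal on bit i."""
    decoded = list(word)
    for i, checks in enumerate(checks_per_bit):
        failed = sum(1 for chk in checks
                     if sum(word[j] for j in chk) % 2 == 1)
        if 2 * failed > len(checks):
            decoded[i] ^= 1
    return decoded
```

For the toy length-3 repetition code below, each bit has two orthogonal checks, so any single error fails both checks on the corrupted bit but at most one check on every other bit.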
Increasing sensing resolution with error correction.
Arrad, G; Vinkler, Y; Aharonov, D; Retzker, A
2014-04-18
The signal-to-noise ratio of quantum sensing protocols scales with the square root of the coherence time. Thus, increasing this time is a key goal in the field. By utilizing quantum error correction, we present a novel way of prolonging such coherence times beyond the fundamental limits of current techniques. We develop an implementable sensing protocol that incorporates error correction, and discuss the characteristics of these protocols in different noise and measurement scenarios. We examine the use of entangled versus unentangled states, and whether error correction can reach the Heisenberg limit. The effects of error correction on coherence times are calculated and we show that measurement precision can be enhanced for both one-directional and general noise.
Quantum Error Correction in the Zeno Regime
Erez, N; Reznik, B; Vaidman, L; Erez, Noam; Aharonov, Yakir; Reznik, Benni; Vaidman, Lev
2003-01-01
In order to reduce errors, error correction codes (ECCs) need to be implemented fast. They can correct the errors corresponding to the first few orders in the Taylor expansion of the Hamiltonian of the interaction with the environment. If implemented fast enough, the zeroth-order error predominates and the dominant effect is error prevention by measurement (the Zeno effect) rather than correction. In this "Zeno regime", codes with less redundancy are sufficient for protection. We describe such a simple scheme, which uses two "noiseless" qubits to protect a large number, n, of information qubits from noise from the environment. The "noiseless" qubits can be realized by treating them as logical qubits to be encoded by one of the previously introduced encoding schemes.
Target Uncertainty Mediates Sensorimotor Error Correction.
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
Target Uncertainty Mediates Sensorimotor Error Correction
Vijayakumar, Sethu; Wolpert, Daniel M.
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323
Efficient Image Transmission Through Analog Error Correction
Liu, Yang; Li; Xie, Kai
2011-01-01
This paper presents a new paradigm for image transmission through analog error correction codes. Conventional schemes rely on digitizing images through quantization (which inevitably causes significant bandwidth expansion) and transmitting binary bit-streams through digital error correction codes (which do not automatically differentiate the different levels of significance among the bits). To achieve better overall performance in terms of transmission efficiency and quality, we propose to use a single analog error correction code in lieu of digital quantization, digital coding and digital modulation. The key is to get analog coding right. We show that this can be achieved by cleverly exploiting an elegant "butterfly" property of chaotic systems. Specifically, we demonstrate a tail-biting triple-branch baker's map code and its maximum-likelihood decoding algorithm. Simulations show that the proposed analog code can actually outperform digital turbo code, one of the best codes known to date. The results and fin...
Backtracking and error correction in DNA transcription
Voliotis, Margaritis; Cohen, Netta; Molina-Paris, Carmen; Liverpool, Tanniemola
2008-03-01
Genetic information is encoded in the nucleotide sequence of the DNA. This sequence contains the instruction code of the cell - determining protein structure and function, and hence cell function and fate. The viability and endurance of organisms crucially depend on the fidelity with which genetic information is transcribed/translated (during mRNA and protein production) and replicated (during DNA replication). However, thermodynamics introduces significant fluctuations which would incur massive error rates if efficient proofreading mechanisms were not in place. Here, we examine a putative mechanism for error correction during DNA transcription, which relies on backtracking of the RNA polymerase (RNAP). We develop an error correction model that incorporates RNAP translocation, backtracking pauses and mRNA cleavage. We calculate the error rate as a function of the relevant rates (translocation, cleavage, backtracking and polymerization) and show that its theoretical limit is equivalent to that accomplished by a multiple-step kinetic proofreading mechanism.
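The scaling this abstract alludes to, a multi-step proofreading cascade driving the error fraction down exponentially in the number of steps, can be illustrated with a toy calculation. The discrimination energy and step counts below are assumptions for illustration, not the paper's fitted kinetic rates:

```python
# Toy illustration of multi-step kinetic proofreading: each independent
# proofreading step multiplies the error fraction by the single-step
# discrimination factor f = exp(-dDG/kT). Values here are assumed.
import math

dDG_over_kT = 4.0           # free-energy discrimination per step (assumed)
f = math.exp(-dDG_over_kT)  # single-step error fraction, ~0.018

def proofread_error(n_steps):
    """Error fraction after n independent proofreading steps."""
    return f ** (n_steps + 1)

for n in (0, 1, 2):
    print(n, proofread_error(n))
```

Each added step (here, a backtracking-and-cleavage cycle) multiplies the error rate by another factor of f, which is why backtracking can in principle match a multi-step proofreading limit.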
Quality score based identification and correction of pyrosequencing errors.
Iyer, Shyamala; Bouzek, Heather; Deng, Wenjie; Larsen, Brendan; Casey, Eleanor; Mullins, James I
2013-01-01
Massively-parallel DNA sequencing using the 454/pyrosequencing platform allows in-depth probing of diverse sequence populations, such as within an HIV-1 infected individual. Analysis of this sequence data, however, remains challenging due to the shorter read lengths relative to that obtained by Sanger sequencing as well as errors introduced during DNA template amplification and during pyrosequencing. The ability to distinguish real variation from pyrosequencing errors with high sensitivity and specificity is crucial to interpreting sequence data. We introduce a new algorithm, CorQ (Correction through Quality), which utilizes the inherent base quality in a sequence-specific context to correct for homopolymer and non-homopolymer insertion and deletion (indel) errors. CorQ also takes uneven read mapping into account for correcting pyrosequencing miscall errors and it identifies and corrects carry forward errors. We tested the ability of CorQ to correctly call SNPs on a set of pyrosequences derived from ten viral genomes from an HIV-1 infected individual, as well as on six simulated pyrosequencing datasets generated using non-zero error rates to emulate errors introduced by PCR. When combined with the AmpliconNoise error correction method developed to remove ambiguities in signal intensities, we attained a 97% reduction in indel errors, a 98% reduction in carry forward errors, and >97% specificity of SNP detection. When compared to four other error correction methods, AmpliconNoise+CorQ performed at equal or higher SNP identification specificity, but the sensitivity of SNP detection was consistently higher (>98%) than other methods tested. This combined procedure will therefore permit examination of complex genetic populations with improved accuracy.
Errors and Correction of Precipitation Measurements in China
REN Zhihua; LI Mingqin
2007-01-01
In order to discover the range of various errors in Chinese precipitation measurements and to seek a correction method, 30 precipitation evaluation stations, all reference stations in China, were set up countrywide before 1993. To develop a correction for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations of some 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison results, and the distributions of random and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge, with a correlation coefficient of 0.99. For operational observations, precipitation correction can therefore be carried out simply by parallel observation with a horizontal precipitation gauge; the precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
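The power-function correlation described above can be recovered from paired gauge data by a log-log linear least-squares fit. The data below is synthetic and the coefficients are assumptions for illustration; the paper itself reports r = 0.99 from field intercomparison data:

```python
# Recover a power law y = a * x**b by ordinary least squares in log space.
# a_true, b_true and the noise level are illustrative assumptions.
import math
import random

random.seed(1)
a_true, b_true = 0.8, 1.3                          # assumed power-law parameters
horizontal = [0.5 + 0.25 * k for k in range(40)]   # horizontal-gauge catch (mm)
diff = [a_true * h ** b_true * math.exp(random.gauss(0, 0.02))
        for h in horizontal]                        # gauge-vs-pit difference (mm)

# Fit log(diff) = log(a) + b * log(h).
xs = [math.log(h) for h in horizontal]
ys = [math.log(d) for d in diff]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b_hat = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs)
a_hat = math.exp(ybar - b_hat * xbar)
print(a_hat, b_hat)  # close to the assumed 0.8 and 1.3
```

In operational use the fitted (a, b) would then convert a horizontal-gauge reading into a wind-induced-loss correction for the elevated gauge.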
Reflection error correction of gas turbine blade temperature
Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan
2016-03-01
Accurate measurement of gas turbine blade temperature is one of the greatest challenges encountered in gas turbine temperature measurement. Within an enclosed gas turbine environment containing surfaces of varying temperature and low emissivity, radiation thermometers face the additional problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations considered targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed that an error of less than 1% is achievable, and the experiment reduced the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
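With known emissivities, the correction amounts to removing the reflected background component from the measured radiance. A minimal sketch of this idea, with all radiance values assumed and in arbitrary units (the paper works with full radiative exchange between surfaces, which this one-background model simplifies):

```python
# Reflection-error correction sketch: a radiation thermometer reading is
# emitted radiance plus reflected background radiance. With emissivity
# known, the target's own radiance can be solved for. Values are assumed.
eps_target = 0.3                     # low-emissivity case from the abstract

def corrected_radiance(measured, background):
    """Remove the (1 - eps) reflected component from the reading."""
    return (measured - (1 - eps_target) * background) / eps_target

true_target = 100.0                  # target's own radiance (assumed units)
background = 250.0                   # hot surroundings radiance (assumed)
measured = eps_target * true_target + (1 - eps_target) * background
print(corrected_radiance(measured, background))  # recovers 100.0
```

The low-emissivity case (0.3) is exactly where the reflected term dominates the reading, which is why the abstract singles it out.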
The dynamic correction of collimation errors of CT slicing pictures
LIU Ya-xiong; Sekou Sing-are; LI Di-chen; LU Bing-heng
2006-01-01
To eliminate the motion artifacts of CT images caused by patient motion and other related errors, two kinds of correctors (A type and U type) are proposed to monitor the scanning process and correct the motion artifacts of the original images via reverse geometrical transformations such as reverse scaling, translation, rotation and offsetting. The results confirm that the correction method with either corrector improves the accuracy and reliability of CT images, helping to eliminate or reduce motion artifacts and to correct other static and image-processing errors. This provides a foundation for 3D reconstruction and accurate fabrication of customized implants.
Survey of Radar Refraction Error Corrections
2016-11-01
Fragmentary record; only reference fragments survive. Cited sources include the White Sands Missile Range "Data Systems Manual, Meteorology and Timing" (contract DAAD07-76-0007, September 1979) and the U.S. Standard Reference Atmosphere (U.S. Dept. of Commerce, National Bureau of Standards, 1959). The survey notes that refraction correction models use an Earth model and atmospheric modeling parameters reflecting the different meteorological layers within the troposphere (Survey of Radar Refraction Error Corrections, RCC 266).
Quantum Steganography and Quantum Error-Correction
Shaw, Bilal A.
2010-01-01
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be…
Consciousness-Raising, Error Correction and Proofreading
O'Brien, Josephine
2015-01-01
The paper discusses the impact of developing a consciousness-raising approach in error correction at the sentence level to improve students' proofreading ability. Learners of English in a foreign language environment often rely on translation as a composing tool and while this may act as a scaffold and provide some support, it frequently leads to…
Error Correction Methods for Atmospheric Refraction of Electro-optical Theodolite
韩先平
2013-01-01
High-accuracy optical measurement is mainly limited by the error introduced by atmospheric refraction. Two correction methods are proposed, one for real-time processing and one for post-mission data processing, and a refraction-error correction model and a residual-error model are built. Experimental validation and error-comparison analysis show that both methods are computationally simple and the models accurate, and that both improve the accuracy of real-time and post-mission data processing.
Corte, Stefano; Cavedon, Valentina; Milanese, Chiara
2015-12-01
The aim of this study was to gain a better understanding of how the run pattern varies as a consequence of main error correction versus secondary error correction. Twenty-two university students were randomly assigned to one of two training conditions: 'main error' (ME) and 'secondary error' (SE) correction. The rear-foot strike at touchdown was hypothesized as the 'main error', whereas an incorrect shoulder position (i.e., behind the base of support) as the 'secondary error'. In order to evaluate any changes in run pattern at the foot touchdown instant, the ankle, knee and hip joint angles, the height of toe and heel (with respect to the ground), and the horizontal distance from the heel to the projected center of mass on the ground were measured. After the training intervention, the ME group showed a significant improvement in the run pattern at the foot touchdown instant in all kinematic parameters, whereas no significant changes were found in the SE group. The results support the hypothesis that the main error can have a greater influence on the movement patterns than a secondary error. Furthermore, the findings highlight that a correct diagnosis and the correction of the 'main error' are fundamental for greater run pattern improvement.
Error correction in adders using systematic subcodes.
Rao, T. R. N.
1972-01-01
A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that the error control properties of the AN code are preserved in the new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of the 29N code is sketched in some detail.
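The flavor of an AN code can be sketched as follows. The generator A = 29 matches the "29N code" mentioned above, but the word width and the syndrome-table decoding here are illustrative assumptions rather than the paper's adder organization:

```python
# Minimal AN arithmetic error-correcting code with A = 29 (assumed to match
# the abstract's "29N code"). Valid codewords are multiples of 29; a single
# arithmetic error of the form +/-2**i produces a unique nonzero residue
# mod 29, which a lookup table inverts.
A = 29
WORD_BITS = 10  # toy codeword width (an assumption)

# Precompute syndrome -> error magnitude for single-bit arithmetic errors.
syndrome_table = {}
for i in range(WORD_BITS):
    for sign in (1, -1):
        e = sign * (1 << i)
        syndrome_table[e % A] = e

def encode(n):
    return A * n

def decode(r):
    s = r % A
    if s == 0:
        return r // A          # no error detected
    e = syndrome_table.get(s)
    if e is None:
        raise ValueError("uncorrectable error")
    return (r - e) // A        # subtract the inferred arithmetic error

codeword = encode(13)          # 377
corrupted = codeword + 8       # a single error of +2**3
print(decode(corrupted))       # recovers 13
```

The point of AN codes in adders is that the check survives addition: encode(x) + encode(y) = encode(x + y), so arithmetic errors can be caught after the operation itself.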
Error Correction, Control Systems and Fuzzy Logic
Smith, Earl B.
2004-01-01
This paper is a discussion of dealing with errors. While error correction and communication are important when dealing with spacecraft, the issue of control system design is also important. There will be certain commands that one wants a motion device to execute, and an adequate control system is necessary to make sure that the instruments and devices receive the necessary commands. As discussed later, the actual value will not always equal the intended or desired value; hence, an adequate controller is necessary to close the gap between the two values.
Langner, Andy Sven; Rossbach, Jörg; Tomás, Rogelio
2017-02-17
The Large Hadron Collider (LHC) is currently the world's largest particle accelerator with the highest center of mass energy in particle collision experiments. The control of the particle beam focusing is essential for the performance reach of such an accelerator. For the characterization of the focusing properties at the LHC, turn-by-turn beam position data is simultaneously recorded at numerous measurement devices (BPMs) along the accelerator, while an oscillation is excited on the beam. A novel analysis method for these measurements ($N$-BPM method) is developed here, which is based on a detailed analysis of systematic and statistical error sources and their correlations. It has been applied during the commissioning of the LHC for operation at an unprecedented energy of 6.5 TeV. In this process a stronger focusing than its design specifications has been achieved. This results in smaller transverse beam sizes at the collision points and allows for a higher rate of particle collisions. For the derivation of ...
Black Holes, Holography, and Quantum Error Correction
CERN. Geneva
2017-01-01
How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions? How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator? Why do such things happen only in gravitational theories? In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence. No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.
Quantum Secret Sharing with Error Correction
Aziz Mouzali; Fatiha Merazka; Damian Markham
2012-01-01
We investigate in this work quantum error correction on a five-qubit graph state used for secret sharing through five noisy channels. We describe the procedure for the five-, seven- and nine-qubit codes. It is known that all three codes allow error recovery if only one of the sent qubits is disturbed in the transmitting channel. However, if two or more qubits are disturbed, the correction depends on the code used. We compare the three codes in this paper by computing the average fidelity between the sent secret and that measured by the receivers, treating the case where at most two qubits are affected in each of five depolarizing channels.
Landauer's erasure, error correction and entanglement
Vedral, V.
1999-01-01
Classical and quantum error correction are presented in the form of Maxwell's demon and their efficiency analyzed from the thermodynamic point of view. We explain how Landauer's principle of information erasure applies to both cases. By then extending this principle to entanglement manipulations we rederive upper bounds on purification procedures, thereby linking the "no local increase of entanglement" principle to the Second Law of thermodynamics.
Software for Correcting the Dynamic Error of Force Transducers
Naoki Miyashita
2014-07-01
Software has been developed that corrects the dynamic error of force transducers in impact force measurements using their own output signal. The software corrects the output waveform of a transducer using the waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic errors of three transducers of the same model were evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of an aerostatic linear bearing. The parameters for correcting the dynamic error were determined from one set of impact measurements of one transducer, and the validity of the obtained parameters was then evaluated using the remaining sets of measurements from all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method and provide the software with the corresponding parameters, users can obtain the waveform corrected against dynamic error together with its uncertainty. The present status and future prospects of the developed software are discussed in this paper.
On the Design of Error-Correcting Ciphers
Mathur Chetan Nanjunda
2006-01-01
Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, and (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as Rijndael. We also show that any chosen-plaintext attack that can be performed on the HD-cipher can be transformed into a chosen-plaintext attack on Rijndael. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, while those using convolutional codes require more data expansion to achieve error correction similar to the HD-cipher. The original contributions of this work are (1) the design of a new joint error-correction-encryption system, (2) the design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) a mathematical derivation of the bound on the resistance of the HD-cipher to linear and differential cryptanalysis, and (7) experimental comparison
GNSS Multi-Frequency Correction Methods of Ionospheric Refraction Error
赵彦珍; 苏建峰
2016-01-01
Ionospheric refraction error is relatively large. The single-frequency refraction errors of GPS (Global Positioning System) and BDS (BeiDou Navigation Satellite System) are analyzed, and linear combinations of different GPS and BDS frequencies are then studied. Three multi-frequency correction methods are discussed: first-order dual-frequency, first-order triple-frequency and second-order triple-frequency. Because ionospheric delay correction also enlarges the observation noise, the observation noise of each correction method is analyzed and compared. In addition, a theoretical approach to selecting the best frequency combinations is proposed.
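The first-order dual-frequency method mentioned above exploits the 1/f² scaling of ionospheric delay. A sketch using the GPS L1/L2 frequencies (the frequencies are standard; the range and delay values are illustrative assumptions) shows both the cancellation and the noise amplification the abstract notes:

```python
# First-order "ionosphere-free" dual-frequency combination. The iono delay
# scales as 1/f**2, so a weighted difference of two pseudoranges removes the
# first-order term while amplifying measurement noise. Values are assumed.
import math

f1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
f2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def ionosphere_free(p1, p2):
    """First-order ionosphere-free pseudorange combination."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

rho = 22_000_000.0        # true geometric range, metres (assumed)
I1 = 5.0                  # first-order iono delay at L1, metres (assumed)
I2 = I1 * f1**2 / f2**2   # same TEC observed at L2

p1 = rho + I1
p2 = rho + I2
print(ionosphere_free(p1, p2) - rho)   # ~0: first-order term removed

# Noise amplification of the combination (per unit of pseudorange noise):
k1 = f1**2 / (f1**2 - f2**2)
k2 = f2**2 / (f1**2 - f2**2)
print(math.hypot(k1, k2))              # ~3x, the enlarged noise the abstract notes
```

The triple-frequency first- and second-order combinations follow the same pattern with three weights chosen to cancel additional 1/f² and 1/f³ terms, at the cost of still larger noise factors.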
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics
Sarovar, Mohan; Young, Kevin C.
2013-12-01
While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.
Performance of multi level error correction in binary holographic memory
Hanan, Jay C.; Chao, Tien-Hsin; Reyes, George F.
2004-01-01
At the Optical Computing Lab in the Jet Propulsion Laboratory (JPL) a binary holographic data storage system was designed and tested with methods of recording and retrieving the binary information. Levels of error correction were introduced to the system including pixel averaging, thresholding, and parity checks. Errors were artificially introduced into the binary holographic data storage system and were monitored as a function of the defect area fraction, which showed a strong influence on data integrity. Average area fractions exceeding one quarter of the bit area caused unrecoverable errors. Efficient use of the available data density was discussed.
Burst error correction extensions for large Reed Solomon codes
Owsley, P.
1990-01-01
Reed Solomon codes are powerful error correcting codes that include some of the best random and burst correcting codes currently known. It is well known that an (n,k) Reed Solomon code can correct up to (n - k)/2 errors. Many applications utilizing Reed Solomon codes require corrections of errors consisting primarily of bursts. In this paper, it is shown that the burst correcting ability of Reed Solomon codes can be increased beyond (n - k)/2 with an acceptable probability of miscorrection.
Error Correction for Index Coding with Side Information
Dau, Son Hoang; Chee, Yeow Meng
2011-01-01
A problem of index coding with side information was first considered by Y. Birk and T. Kol (IEEE INFOCOM, 1998). In the present work, a generalization of index coding scheme, where transmitted symbols are subject to errors, is studied. Error-correcting methods for such a scheme, and their parameters, are investigated. In particular, the following question is discussed: given the side information hypergraph of index coding scheme and the maximal number of erroneous symbols $\delta$, what is the shortest length of a linear index code, such that every receiver is able to recover the required information? This question turns out to be a generalization of the problem of finding a shortest-length error-correcting code with a prescribed error-correcting capability in the classical coding theory. The Singleton bound and two other bounds, referred to as the $\alpha$-bound and the $\kappa$-bound, for the optimal length of a linear error-correcting index code (ECIC) are established. For large alphabets, a construction b...
How to Correct a Task Error: Task-Switch Effects Following Different Types of Error Correction
Steinhauser, Marco
2010-01-01
It has been proposed that switch costs in task switching reflect the strengthening of task-related associations and that strengthening is triggered by response execution. The present study tested the hypothesis that only task-related responses are able to trigger strengthening. Effects of task strengthening caused by error corrections were…
Error-correcting codes and phase transitions
Manin, Yuri I
2009-01-01
The theory of error-correcting codes is concerned with constructing codes that optimize simultaneously transmission rate and relative minimum distance. These conflicting requirements determine an asymptotic bound, which is a continuous curve in the space of parameters. The main goal of this paper is to relate the asymptotic bound to phase diagrams of quantum statistical mechanical systems. We first identify the code parameters with Hausdorff and von Neumann dimensions, by considering fractals consisting of infinite sequences of code words. We then construct operator algebras associated to individual codes. These are Toeplitz algebras with a time evolution for which the KMS state at critical temperature gives the Hausdorff measure on the corresponding fractal. We extend this construction to algebras associated to limit points of codes, with non-uniform multi-fractal measures, and to tensor products over varying parameters.
Uncertainty relations and approximate quantum error correction
Renes, Joseph M.
2016-09-01
The uncertainty principle can be understood as constraining the probability of winning a game in which Alice measures one of two conjugate observables, such as position or momentum, on a system provided by Bob, and he is to guess the outcome. Two variants are possible: either Alice tells Bob which observable she measured, or he has to furnish guesses for both cases. Here I derive uncertainty relations for both, formulated directly in terms of Bob's guessing probabilities. For the former these relate to the entanglement that can be recovered by action on Bob's system alone. This gives an explicit quantum circuit for approximate quantum error correction using the guessing measurements for "amplitude" and "phase" information, implicitly used in the recent construction of efficient quantum polar codes. I also find a relation on the guessing probabilities for the latter game, which has application to wave-particle duality relations.
Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure
2013-09-01
High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
Error Analysis and Correction Methods of Dual Galvanometer Scanning
韩万鹏; 蒙文; 李云霞; 李大为; 周嘉
2011-01-01
Two-dimensional galvanometer laser scanning is widely used for its high speed, high precision and good controllability. However, the scanning field suffers intrinsic geometric distortion, and the system also exhibits linear, nonlinear and other errors, which seriously affect the scanning quality of galvanometer systems in different application fields. Starting from theoretical derivation, the geometric distortion and the various errors of the pre-objective and post-objective scanning configurations are analyzed in detail, and targeted compensation methods are proposed. On the basis of this theoretical error analysis, correction approaches are outlined and several views and suggestions are put forward.
Error correction maintains post-error adjustments after one night of total sleep deprivation.
Hsieh, Shulan; Tsai, Cheng-Yin; Tsai, Ling-Ling
2009-06-01
Previous behavioral and electrophysiologic evidence indicates that one night of total sleep deprivation (TSD) impairs error monitoring, including error detection, error correction, and post-error adjustments (PEAs). This study examined the hypothesis that error correction, manifesting as an overtly expressed self-generated performance feedback to errors, can effectively prevent TSD-induced impairment in the PEAs. Sixteen healthy right-handed adults (seven women and nine men) aged 19-23 years were instructed to respond to a target arrow flanked by four distractor arrows and to correct their errors immediately after committing them. Task performance and electroencephalogram (EEG) data were collected after normal sleep (NS) and after one night of TSD in a counterbalanced repeated-measures design. With the demand of error correction, the participants maintained the same level of PEAs in reducing the error rate for trial N + 1 after TSD as after NS. Corrective behavior further affected the PEAs for trial N + 1 in the omission rate and response speed, which decreased and speeded up following corrected errors, particularly after TSD. These results show that error correction effectively maintains post-error reduction in both committed and omitted errors after TSD. A cerebral mechanism might be involved in the effect of error correction, as EEG beta (17-24 Hz) activity was increased after erroneous responses compared to correct responses. The practical application of error correction to increasing work safety, which can be jeopardized by repeated errors, is suggested for workers who are involved in monotonous but attention-demanding monitoring tasks.
Error Correction in Oral Classroom English Teaching
Jing, Huang; Xiaodong, Hao; Yu, Liu
2016-01-01
As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? As with many other questions, different people hold different opinions. All teachers agree that the errors students make in written English are not to be ignored. As for the errors students make in oral…
Matrix error correction for digital data
Dotson, Ronald S. (Inventor)
1992-01-01
A technique for digital data error detection and correction is disclosed which adds alignment and checksum bytes to three sides of a matrix (24) of digital data to be protected. This technique is particularly suited to the recording and storage (16,18) of digital data on video tape medium (14). The digital data is treated as a matrix block (24). Checksum and alignment bytes are added (20) to the digital data before tape storage and stripped (22) therefrom after successful alignment checks and data validation. In particular, the first column may be used to provide alignment bytes of a predetermined value for each row. The last column provides row checksum bytes for the data in each row. The last row provides column checksum bytes for each column, excluding the column of alignment bytes. The data location at the intersection of the row of column checksum bytes and the column of row checksum bytes may be used as a checksum byte for either the row or column checksum bytes.
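The row/column checksum layout described above can be sketched in a few lines. This is a minimal sketch, assuming byte-wise sums modulo 256 and an alignment value of 0xA5 (both our choices; the patent leaves such details to the implementation):

```python
ALIGN = 0xA5  # assumed predetermined alignment value

def protect(matrix):
    """Add an alignment column, a row-checksum column and a column-checksum row."""
    rows = [[ALIGN] + row + [sum(row) % 256] for row in matrix]
    # Column checksums cover the data and row-checksum columns, not alignment.
    col_sums = [sum(r[c] for r in rows) % 256 for c in range(1, len(rows[0]))]
    return rows + [[ALIGN] + col_sums]

def validate(block):
    """Return the original data if alignment and checksums hold, else None."""
    *rows, check_row = block
    for r in rows:
        if r[0] != ALIGN or sum(r[1:-1]) % 256 != r[-1]:
            return None
    for c in range(1, len(rows[0])):
        if sum(r[c] for r in rows) % 256 != check_row[c]:
            return None
    return [r[1:-1] for r in rows]

data = [[1, 2, 3], [4, 5, 6]]
block = protect(data)
assert validate(block) == data
block[1][2] ^= 0xFF             # corrupt one byte
assert validate(block) is None  # corruption detected
```

Note that this layout detects errors (and, via the intersecting row/column checksums, can localize a single corrupted byte); the patent uses it as a validation gate before the protected data is accepted.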
Polynomial theory of error correcting codes
Cancellieri, Giovanni
2015-01-01
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
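The polynomial view of block codes that the book builds on can be illustrated with the smallest interesting cyclic code. A sketch, assuming the systematic (7,4) cyclic code with generator g(x) = x^3 + x + 1 (our choice of example, not taken from the book):

```python
G = 0b1011  # g(x) = x^3 + x + 1

def mod_g(v, bits):
    """Remainder of the polynomial v(x) divided by g(x) over GF(2)."""
    for i in range(bits - 1, 2, -1):      # degrees down to deg(g) = 3
        if v >> i & 1:
            v ^= G << (i - 3)
    return v

def encode(msg4):
    """Systematic encoding: codeword = msg * x^3 + (msg * x^3 mod g)."""
    shifted = msg4 << 3
    return shifted | mod_g(shifted, 7)

def correct(word7):
    """Correct up to one bit flip via syndrome lookup."""
    s = mod_g(word7, 7)
    if s == 0:
        return word7
    for i in range(7):                    # find which flip yields syndrome s
        if mod_g(1 << i, 7) == s:
            return word7 ^ (1 << i)
    return word7

c = encode(0b1011)
assert mod_g(c, 7) == 0                   # every codeword is divisible by g(x)
assert correct(c ^ 0b0010000) == c        # a single bit error is corrected
```

Because every single-bit error polynomial x^i has a distinct nonzero remainder modulo g(x), the syndrome identifies the error position uniquely; this is the (7,4) Hamming code in cyclic form.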
Joint Schemes for Physical Layer Security and Error Correction
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…
Surgical options for correction of refractive error following cataract surgery.
Abdelghany, Ahmed A; Alio, Jorge L
2014-01-01
Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2017-07-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error-correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
A Matroidal Framework for Network-Error Correcting Codes
Prasad, K
2012-01-01
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is matroidal associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. The current work attempts to establish a connection between matroid theory and network-error correcting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of network-error correcting codes to arrive at the definition of a matroidal error correcting network. An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error correcting network code if and only if it is a matroidal error correcting network associated with a representable matroid. Therefore, constructing such network-error correcting codes implies ...
Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt
2015-12-01
Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized, algorithm-aided decision about whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction. AVEC is more suitable for dog HR and HR variability than the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.
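The algorithm-aided outlier decision can be sketched as follows. The local-median rule and the 25% tolerance below are our assumptions for illustration; the published method pairs such a rule with visual inspection:

```python
def flag_outliers(ibi_ms, window=5, tol=0.25):
    """Flag inter-beat intervals deviating more than tol from the local median."""
    flags = []
    for i, v in enumerate(ibi_ms):
        lo = max(0, i - window // 2)
        nbrs = sorted(ibi_ms[lo:lo + window])
        med = nbrs[len(nbrs) // 2]
        flags.append(abs(v - med) > tol * med)
    return flags

def remove_errors(ibi_ms, flags):
    """Drop flagged values rather than interpolating, preserving variability."""
    return [v for v, f in zip(ibi_ms, flags) if not f]

ibi = [620, 630, 615, 1240, 625, 618, 310, 622]   # two artifacts (ms)
flags = flag_outliers(ibi)
clean = remove_errors(ibi, flags)
```

Removing (instead of interpolating) flagged beats mirrors the paper's emphasis on preserving the natural HR variability rather than smoothing it away.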
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, including tests of symmetric non-linear error correction. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
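As a stylized illustration of asymmetric error correction (the data-generating process and the threshold at zero are our assumptions, not the estimators of the paper), the change in y can respond to the lagged equilibrium error z = y - x with a different adjustment speed depending on the sign of z:

```python
import random

random.seed(1)
x, y = 0.0, 0.0
xs, ys = [x], [y]
for _ in range(5000):
    z = y - x                                # lagged equilibrium error
    alpha = -0.5 if z > 0 else -0.1          # asymmetric correction speeds
    dx = random.gauss(0, 1)                  # common stochastic trend
    x += dx
    y += dx + alpha * z + random.gauss(0, 0.2)
    xs.append(x)
    ys.append(y)

def ols_slope(pairs):
    """Slope of a no-intercept OLS regression of d on z."""
    sxx = sum(z * z for z, _ in pairs)
    return sum(z * d for z, d in pairs) / sxx

# Regress the change in y on the lagged error, split by the sign of z.
obs = [(ys[t - 1] - xs[t - 1], ys[t] - ys[t - 1]) for t in range(1, len(ys))]
a_pos = ols_slope([p for p in obs if p[0] > 0])
a_neg = ols_slope([p for p in obs if p[0] <= 0])
```

The two regime-specific slope estimates recover the two adjustment speeds, which is the basic object the paper's QML estimators and tests are concerned with in far greater generality.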
Unitary Application of the Quantum Error Correction Codes
游波; 许可; 吴小华
2012-01-01
To apply the perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: the encoding, the noise channel, the error-correction operation, and the decoding. In the present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.
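The idea that the correction step can be absorbed into a single decoding unitary is easy to demonstrate with the 3-qubit bit-flip code (our toy substitute for the five-qubit perfect code): the decoding circuit CNOT, CNOT, Toffoli is one unitary that simultaneously decodes and corrects any single bit flip, leaving the syndrome in the ancilla qubits.

```python
def apply(perm, state):
    """Apply a permutation of the 3-qubit basis states (a classical-reversible
    unitary) to an 8-amplitude state vector."""
    out = [0j] * 8
    for b, amp in enumerate(state):
        out[perm(b)] = amp
    return out

# Qubit 0 is the leftmost bit of the basis index b = q0 q1 q2.
def flip(q):
    return lambda b: b ^ (4 >> q)

def cnot(c, t):
    return lambda b: b ^ (4 >> t) if b & (4 >> c) else b

def toffoli(c1, c2, t):
    return lambda b: b ^ (4 >> t) if (b & (4 >> c1)) and (b & (4 >> c2)) else b

a, c = 0.6, 0.8j                       # arbitrary qubit a|0> + c|1>
encoded = [0j] * 8
encoded[0b000], encoded[0b111] = a, c  # a|000> + c|111>

for bad_qubit in range(3):
    s = apply(flip(bad_qubit), encoded)      # one bit-flip error
    s = apply(cnot(0, 1), s)                 # decoding unitary...
    s = apply(cnot(0, 2), s)
    s = apply(toffoli(1, 2, 0), s)           # ...absorbs the correction
    # Qubit 0 now carries a|0> + c|1>; qubits 1 and 2 hold the syndrome.
    assert abs(sum(abs(s[b]) ** 2 for b in range(8) if not b & 4) - 0.36) < 1e-12
```

No intermediate syndrome measurement or conditional correction gate is needed: the Toffoli plays that role coherently, which is the simplification the abstract describes.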
Quantum Error Correction and Higher-Rank Numerical Range
Choi, M D; Zyczkowski, K; Choi, Man-Duen; Kribs, David W.; Zyczkowski, Karol
2005-01-01
We solve the fundamental quantum error correction problem for bi-unitary channels on two-qubit Hilbert space. We construct qubit codes for such channels on arbitrary dimension Hilbert space, and identify correctable codes for Pauli-error models not detected by the stabilizer formalism. This is accomplished through an application of a new tool for error correction in quantum computing called the ``higher-rank numerical range''. We describe its basic properties and discuss possible further applications.
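For reference, the higher-rank numerical range of an operator A, as defined in this line of work, is the set of scalars to which A compresses on some k-dimensional subspace:

```latex
\Lambda_k(A) \;=\; \bigl\{\, \lambda \in \mathbb{C} \;:\; PAP = \lambda P
\ \text{for some rank-}k\ \text{orthogonal projection } P \,\bigr\}.
```

For k = 1 this reduces to the classical numerical range. The connection to error correction comes from the Knill-Laflamme conditions: a k-dimensional code with projection P is correctable for error operators E_i precisely when each product E_i^\dagger E_j compresses to a scalar on P, a joint higher-rank numerical range condition.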
Joint Scheme for Physical Layer Error Correction and Security
Oluwayomi Adamo; Varanasi, M. R.
2011-01-01
We present a joint scheme that combines both error correction and security at the physical layer. In conventional communication systems, error correction is carried out at the physical layer while data security is performed at an upper layer, so the two are done as separate steps. However, there has been a lot of interest in providing security at the physical layer. Accordingly, as opposed to the conventional system, we present a scheme that combines error correction and data security.
Human Reliability Analysis Method Based on Human Error Correcting Ability
陈炉云; 张裕芳
2011-01-01
Based on the time-sequence and error-correcting-ability characteristics of human operator behaviors in man-machine systems, and combining key performance shaping factor analysis, the human reliability of the vessel chamber is investigated. Using the time sequence parameter and the error correcting parameter in human error analysis, an operator behavior shaping model of the man-machine system and a human error event tree are proposed. Through analysis of the human error correcting ability, a quantitative model and allowance theory for human reliability analysis are discussed. Finally, taking the monitoring task at the operation desk in the vessel chamber as an example, a human reliability analysis was conducted to quantitatively assess the mission reliability of the operator.
Model Correction Factor Method
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. The strength of the model correction factor method, however, is that in its simpler form, not using gradient information on the original limit state function or using this information only once, a drastic reduction in the number of limit state evaluations is obtained together with good approximations of the reliability.
Coordinating sentence composition with error correction: A multilevel analysis
Van Waes, L.
2011-01-01
Error analysis involves detecting and correcting discrepancies between the 'text produced so far' (TPSF) and the writer's mental representation of what the text should be. While many factors determine the choice of strategy, cognitive effort is a major contributor to this choice. This research shows how cognitive effort during error analysis affects strategy choice and success as measured by a series of online text production measures. We hypothesize that error correction with speech recognition software differs from error correction with keyboard for two reasons: speech produces auditory commands and, consequently, different error types. The study reported on here measured the effects of (1) mode of presentation (auditory or visual-tactile), (2) error span, i.e. whether the error spans more or fewer than two characters, and (3) lexicality, i.e. whether the text error comprises an existing word. A multilevel analysis was conducted to take into account the hierarchical nature of these data. For each variable (interference reaction time, preparation time, production time, immediacy of error correction, and accuracy of error correction), multilevel regression models are presented. As such, we take into account possible disturbing person characteristics while testing the effect of the different conditions and error types at the sentence level. The results show that writers delay error correction more often when the TPSF is read out aloud first. The auditory property of speech seems to free resources for the primary task of writing, i.e. text production. Moreover, the results show that large errors in the TPSF require more cognitive effort and are solved with higher accuracy than small errors. The latter also holds for the correction of small errors that result in non-existing words.
New method for atmospheric refraction error correction in the range
韩先平
2016-01-01
Atmospheric refraction errors significantly affect the accuracy of exterior trajectory measurement in the range, especially for low-elevation-angle, distant targets. Based on the characteristics of marine meteorology in the range, statistics from years of real test data, and the empirical formula for atmospheric refractive error correction, we derive a specific refractive error correction model. Validation against numerous real test missions shows that the range correction accuracy is better than 0.5 m and the elevation-angle correction accuracy better than 5 arc seconds. The model is simple, directly applicable, and easy to compute; the corrected output meets the accuracy requirements of exterior trajectory measurement in the range and can be applied to real-time data processing.
Coordinated joint motion control system with position error correction
Danko, George (Reno, NV)
2011-11-22
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
Coordinated joint motion control system with position error correction
Danko, George L.
2016-04-05
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
Second Language Learners' Beliefs about Grammar Instruction and Error Correction
Loewen, Shawn; Li, Shaofeng; Fei, Fei; Thompson, Amy; Nakatsukasa, Kimi; Ahn, Seongmee; Chen, Xiaoqing
2009-01-01
Learner beliefs are an important individual difference in second language (L2) learning. Furthermore, an ongoing debate surrounds the role of grammar instruction and error correction in the L2 classroom. Therefore, this study investigated the beliefs of L2 learners regarding the controversial role of grammar instruction and error correction. A…
A Classroom Research Study on Oral Error Correction
Coskun, Abdullah
2010-01-01
This study has the main objective to present the findings of a small-scale classroom research carried out to collect data about my spoken error correction behaviors by means of self-observation. With this study, I aimed to analyze how and which spoken errors I corrected during a specific activity in a beginner's class. I used Lyster and Ranta's…
A Support System for Error Correction Questions in Programming Education
Hachisu, Yoshinari; Yoshida, Atsushi
2014-01-01
For supporting the education of debugging skills, we propose a system for generating error correction questions of programs and checking the correctness. The system generates HTML files for answering questions and CGI programs for checking answers. Learners read and answer questions on Web browsers. For management of error injection, we have…
Raptor Codes for Use in Opportunistic Error Correction
Zijnge, T.; Schiphorst, R.; Shao, X.; Slump, C.H.; Goseling, Jasper; Weber, Jos H.
2010-01-01
In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for coded OFDM systems. Opportunistic error correction [3] tries to recover information, when it is available, with the least effort. This is achieved by using Fountain codes in a COFDM system, which…
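The Fountain-code mechanism that OEC relies on can be sketched with a tiny peeling decoder. The packet subsets below are hand-picked so that decoding succeeds; a real Raptor code draws them from a degree distribution and adds a precode so that peeling succeeds with high probability:

```python
def peel(packets, k):
    """packets: list of (indices, xor_of_those_source_symbols). Returns the
    k source symbols recovered by iterative peeling (None where unrecovered)."""
    pkts = [[set(s), v] for s, v in packets]
    src = [None] * k
    changed = True
    while changed and any(x is None for x in src):
        changed = False
        for p in pkts:
            for i in [i for i in p[0] if src[i] is not None]:
                p[0].discard(i)              # substitute known symbols
                p[1] ^= src[i]
            if len(p[0]) == 1:               # degree-1 packet reveals a symbol
                (i,) = p[0]
                src[i] = p[1]
                p[0] = set()
                changed = True
    return src

data = [5, 9, 12, 7]
enc = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12), ({2, 3}, 12 ^ 7)]
assert peel(enc, 4) == data
```

Each received packet either reveals a symbol directly (degree 1) or becomes degree 1 after known symbols are XORed out, which is why a receiver can stop as soon as "enough" packets have arrived, whichever packets they are.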
Fault-tolerant error correction with the gauge color code
Brown, Benjamin J.; Nickerson, Naomi H.; Browne, Dan E.
2016-07-01
The constituent parts of a quantum computer are inherently vulnerable to errors. To this end, we have developed quantum error-correcting codes to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations with the minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures by using a new method, known as gauge fixing, for performing the non-Clifford operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide the first steps of a comparative study between the gauge color code and other promising computational architectures.
Fault-tolerant error correction with the gauge color code.
Brown, Benjamin J; Nickerson, Naomi H; Browne, Dan E
2016-07-29
The constituent parts of a quantum computer are inherently vulnerable to errors. To this end, we have developed quantum error-correcting codes to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations with the minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures by using a new method, known as gauge fixing, for performing the non-Clifford operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide the first steps of a comparative study between the gauge color code and other promising computational architectures.
Long Burst Error Correcting Codes Project
National Aeronautics and Space Administration — Long burst error mitigation is an enabling technology for the use of Ka band for high rate commercial and government users. Multiple NASA, government, and commercial...
Environmental boundaries as an error correction mechanism for grid cells.
Hardcastle, Kiah; Ganguli, Surya; Giocomo, Lisa M
2015-05-06
Medial entorhinal grid cells fire in periodic, hexagonally patterned locations and are proposed to support path-integration-based navigation. The recursive nature of path integration results in accumulating error and, without a corrective mechanism, a breakdown in the calculation of location. The observed long-term stability of grid patterns necessitates that the system either performs highly precise internal path integration or implements an external landmark-based error correction mechanism. To distinguish these possibilities, we examined grid cells in behaving rodents as they made long trajectories across an open arena. We found that error accumulates relative to time and distance traveled since the animal last encountered a boundary. This error reflects coherent drift in the grid pattern. Further, interactions with boundaries yield direction-dependent error correction, suggesting that border cells serve as a neural substrate for error correction. These observations, combined with simulations of an attractor network grid cell model, demonstrate that landmarks are crucial to grid stability.
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to negate the error during subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction lookup table incorporated into a waveform phase generator.
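The predistortion step can be sketched as follows, with an assumed exponential droop-phase model standing in for the measured error (the patent's characterization of the error, and all parameter values below, are our illustrative assumptions):

```python
import cmath
import math

N, fs = 1000, 1.0e6                       # samples, sample rate (assumed)
t = [n / fs for n in range(N)]

def droop_phase(tn):
    """Assumed amplifier-droop phase error in radians at time tn."""
    return 0.4 * (1.0 - math.exp(-tn / 2.0e-4))

lut = [droop_phase(tn) for tn in t]       # phase-correction lookup table

chirp = [cmath.exp(1j * math.pi * 1.0e9 * tn * tn) for tn in t]  # ideal LFM
predist = [s * cmath.exp(-1j * p) for s, p in zip(chirp, lut)]   # pre-distort
received = [s * cmath.exp(1j * droop_phase(tn))                  # droop applied
            for s, tn in zip(predist, t)]

# After the droop acts, the pre-distortion has cancelled the phase error.
err = max(abs(cmath.phase(r / c)) for r, c in zip(received, chirp))
assert err < 1e-9
```

The complementary distortion is simply the negated error phase; storing it as a sampled lookup table lets the waveform phase generator apply the correction in the time domain at generation time.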
A Morphographemic Model for Error Correction in Nonconcatenative Strings
Bowden, T; Bowden, Tanya; Kiraz, George Anton
1995-01-01
This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic error problems is illustrated with reference to Arabic and Syriac examples. The model handles errors in vocalisation, diacritics, phonetic syncopation and morphographemic idiosyncrasies, in addition to Damerau errors. A complementary correction strategy for morphologically sound but morphosyntactically ill-formed words is outlined.
Error Correction Using Long Context Match for Smartphone Speech Recognition
Liang, Yuan; Iwano, Koji; Shinoda, Koichi
2015-01-01
Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and choose the correct word from a candidate list. We propose a simple multimodal interface to make the process more efficient. We develop Long Context Match (LCM) to get candidates that complement the conventional word confusion network (WCN). Assuming that not only the preceding words but also the succeeding words of the error region are validated by users, we u...
Comments on "A New Random-Error-Correction Code"
Paaske, Erik
1979-01-01
This correspondence investigates the error propagation properties of six different systems using a (12, 6) systematic double-error-correcting convolutional encoder and a one-step majority-logic feedback decoder. For the generally accepted assumption that channel errors are much more likely to occur...
Energy efficiency of error correction on wireless systems
Havinga, Paul J.M.
1999-01-01
Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.
A Comparison of Error Correction Procedures on Word Reading
Syrek, Andrea L.; Hixson, Micheal D.; Jacob, Susan; Morgan, Sandra
2007-01-01
The effectiveness and efficiency of two error correction procedures on word reading were compared. Three students with below average reading skills and one student with average reading skills were provided with weekly instruction on sets of 20 unknown words. Students' errors during instruction were followed by either word supply error correction…
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
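The multiple-alignment idea can be reduced to a bare-bones sketch: given reads already aligned to the same region (Karect computes these alignments itself and, unlike this substitution-only sketch, also handles insertions and deletions), each base is corrected by majority vote over its column:

```python
from collections import Counter

def consensus_correct(read, aligned):
    """Replace each base of `read` with the majority base of its column."""
    out = []
    for i, base in enumerate(read):
        col = Counter(r[i] for r in aligned if i < len(r))
        col[base] += 1                    # the read votes for itself too
        out.append(col.most_common(1)[0][0])
    return "".join(out)

aligned = ["ACGTACGT", "ACGAACGT", "ACGTACGT", "ACGTACGG"]
noisy = "ACCTACGT"                        # substitution error at position 2
corrected = consensus_correct(noisy, aligned)
```

High coverage makes the vote reliable; the hard parts that Karect actually solves are building accurate alignments in the presence of indels and weighting votes when coverage is non-uniform.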
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Development of a Drosophila cell-based error correction assay
Jeffrey D. Salemi
2013-07-01
Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due to the activity of kinesin-14 (Ncd) when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and
Development of a Drosophila cell-based error correction assay.
Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J
2013-01-01
Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.
Error Correction: Is It Necessary or Not?
周一书
2012-01-01
One of the most time-consuming and frustrating things for language teachers is that they correct students' errors time and again, yet this seems to have no immediate effect on the students' language learning. In this article, I first review the prevailing theories about error treatment, and then, by discussing the significance of language errors, I recommend some policies for teachers to adopt toward errors, hoping that they will be of some help to language teaching.
Reed Solomon error correction for the space telescope
Whitaker, S.; Cameron, K.; Canaris, J.; Vincent, P.; Liu, N.; Owsley, P.
1990-01-01
This paper reports a single 8.2mm by 8.4mm, 200,000 transistor CMOS chip implementation of the Reed Solomon code required by the Space Telescope. The chip features a 10 MHz sustained byte rate independent of error pattern. The 1.6 micron CMOS integrated circuit has complete decoder and encoder functions and uses a single data/system clock. Block lengths up to 255 bytes as well as shortened codes are supported with no external buffering. Erasure corrections as well as random error corrections are supported with programmable corrections of up to 10 symbol errors. Correction time is independent of error pattern and the number of errors.
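The chip implements encoding and syndrome-based decoding in hardware. As a software illustration of the same arithmetic, a minimal sketch of systematic Reed-Solomon encoding over GF(256) and syndrome computation (the first stage of decoding) might look as follows; the field polynomial (0x11D) and the small message/parity sizes are illustrative choices, not the parameters of the Space Telescope chip set.

```python
# Minimal Reed-Solomon sketch: GF(2^8) tables, systematic encoding,
# and syndrome computation. All-zero syndromes mean "no detected error".
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D           # reduce by the field polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]    # duplicate so products need no modulo

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(nparity):
    g = [1]
    for i in range(nparity):
        g = poly_mul(g, [1, EXP[i]])   # roots at alpha^0..alpha^(nparity-1)
    return g

def rs_encode(msg, nparity):
    """Systematic encoding: append the remainder of msg*x^nparity mod g."""
    g = rs_generator(nparity)
    rem = list(msg) + [0] * nparity
    for i in range(len(msg)):
        c = rem[i]
        if c:
            for j in range(1, len(g)):
                rem[i + j] ^= gf_mul(g[j], c)
    return list(msg) + rem[len(msg):]

def syndromes(codeword, nparity):
    """Evaluate the received polynomial at alpha^0..alpha^(nparity-1)."""
    out = []
    for i in range(nparity):
        s = 0
        for c in codeword:
            s = gf_mul(s, EXP[i]) ^ c
        out.append(s)
    return out
```

With 4 parity symbols this toy code corrects up to 2 symbol errors (or 4 erasures), mirroring the 2t-parity trade-off behind the chip's programmable 10-error correction.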
Topological quantum error correction in the Kitaev honeycomb model
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Detecting and Correcting Speech Rhythm Errors
Yurtbasi, Metin
2015-01-01
Every language has its own rhythm. Unlike many other languages in the world, English depends on the correct pronunciation of stressed and unstressed or weakened syllables recurring in the same phrase or sentence. Mastering the rhythm of English makes speaking more effective. Experiments have shown that we tend to hear speech as more rhythmical…
Quantum Error-Correcting Codes over Mixed Alphabets
Wang, Zhuo; Fan, Heng; Oh, C H
2012-01-01
Errors are inevitable during all kinds of quantum information tasks, and quantum error-correcting codes (QECCs) are powerful tools to combat various quantum noises. In standard QECCs the physical systems all have the same number of energy levels. Here we propose QECCs over mixed alphabets, i.e., physical systems of different dimensions, and investigate their constructions as well as their quantum Singleton bound. We propose two kinds of constructions: a graphical construction based on a graph-theoretical object, the composite coding clique, and a projection-based construction. We illustrate our ideas using two alphabets by finding 1-error-correcting or -detecting codes over mixed alphabets, e.g., the optimal $((6,8,3))_{4^52^1}$, $((6,4,3))_{4^42^2}$ and $((5,16,2))_{4^32^2}$ codes and the suboptimal $((5,9,2))_{3^42^1}$ code. Our methods also shed light on the construction of standard QECCs, e.g., the construction of the optimal $((6,16,3))_4$ code as well as the optimal $((2n+3,p^{2n+1},2))_{p}$ codes with $p=4k$.
VLSI architectures for modern error-correcting codes
Zhang, Xinmiao
2015-01-01
Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI
Diab, Rula L.
2006-01-01
This article discusses what EFL instructors and their students like and dislike about error correction and paper marking, and what this means for classroom teaching. The article lists the benefits and drawbacks of error correction for students' writing and argues for the need to look at the methods preferred by both teachers and students. It…
Opportunistic error correction for mimo-ofdm: from theory to practice
Shao, Xiaoying; Slump, Cornelis H.
2013-01-01
Opportunistic error correction based on fountain codes is designed specifically for the MIMO-OFDM system. The key point of this new method is the trade-off between the code rate of the error-correcting codes and the number of sub-carriers in the channel vector to be discarded. By transmitting one fountain-e
Dense Error Correction via L1-Minimization
Wright, John
2008-01-01
This paper studies the problem of recovering a non-negative sparse signal $x \in \mathbb{R}^n$ from highly corrupted linear measurements $y = Ax + e \in \mathbb{R}^m$, where $e$ is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries $A$, any non-negative, sufficiently sparse signal $x$ can be recovered by solving an $\ell^1$-minimization problem: $\min \|x\|_1 + \|e\|_1$ subject to $y = Ax + e$. More precisely, if the fraction $\rho$ of errors is bounded away from one and the support of $x$ grows sublinearly in the dimension $m$ of the observation, then as $m$ goes to infinity, the above $\ell^1$-minimization succeeds for all signals $x$ and almost all sign-and-support patterns of $e$. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corr...
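The minimization above is a linear program once the error vector is split into nonnegative parts. A small sketch using `scipy.optimize.linprog` could look as follows; scipy is an assumed dependency, and the dimensions, sparsity levels, and Gaussian dictionary are invented for illustration (the paper's setting uses highly correlated dictionaries).

```python
import numpy as np
from scipy.optimize import linprog

def dense_error_correction(A, y):
    """Solve  min ||x||_1 + ||e||_1  s.t.  y = A x + e,  x >= 0,
    as an LP with e split into nonnegative parts e = ep - em."""
    m, n = A.shape
    c = np.ones(n + 2 * m)                       # sum(x) + sum(ep) + sum(em)
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x = res.x[:n]
    e = res.x[n:n + m] - res.x[n + m:]
    return x, e

rng = np.random.default_rng(0)
m, n = 80, 30
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                   # unit-norm columns
x_true = np.zeros(n)
x_true[[3, 11, 25]] = [1.0, 2.0, 0.5]            # sparse non-negative signal
e_true = np.zeros(m)
e_true[rng.choice(m, 8, replace=False)] = 5.0 * rng.standard_normal(8)
y = A @ x_true + e_true                          # grossly corrupted measurements
x_hat, e_hat = dense_error_correction(A, y)
```

For this regime (3-sparse signal, 10% gross errors, 80 measurements) the LP typically recovers both the signal and the error pattern exactly.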
Error Correction for Tandem Data-Transmission Paths
Posner, E. C.; Rubin, A. L.
1985-01-01
Mathematical analysis for digital data transmission calculates optimum number of binary error-correcting repeaters to install in given number of wideband channel links. Asymptotic results compared to computed numerical results.
New orbit correction method uniting global and local orbit corrections
Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.
2006-01-01
A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions, such as light source points, while simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both the Super-SOR light source and the Advanced Light Source (ALS), which have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
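The core idea (exact zeroing at selected monitors plus global least-squares reduction) can be imitated numerically with a particular solution and a nullspace search. The following is a simplified stand-in for illustration only, not the paper's eigenvector formulation; the response matrix, distortion vector, and monitor indices are invented.

```python
import numpy as np

def constrained_orbit_correction(R, d, constrained):
    """Find corrector strengths theta minimizing ||d + R theta|| (rms orbit
    distortion) while exactly zeroing the orbit at 'constrained' BPM indices."""
    C = R[constrained]                         # response rows of constrained BPMs
    dc = d[constrained]
    # particular solution satisfying the exact constraints C theta = -dc
    theta_p, *_ = np.linalg.lstsq(C, -dc, rcond=None)
    # nullspace of C: corrector combinations invisible to constrained BPMs
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T
    # minimize the remaining global distortion within that nullspace
    z, *_ = np.linalg.lstsq(R @ N, -(d + R @ theta_p), rcond=None)
    return theta_p + N @ z

rng = np.random.default_rng(1)
R = rng.standard_normal((40, 12))              # 40 BPMs, 12 correctors
d = rng.standard_normal(40)                    # measured closed-orbit distortion
theta = constrained_orbit_correction(R, d, [5, 20])
corrected = d + R @ theta                      # orbit after correction
```

The corrected orbit is exactly zero at the two selected "source point" monitors while the remaining distortion is reduced in the least-squares sense.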
Motor control: correcting errors and learning from mistakes.
Miall, Chris
2010-07-27
How do we learn from errors during complex movement tasks with redundancy? A new study shows that ambiguous mistakes in bimanual movements are corrected by the non-dominant hand, and responsibility for the error is assumed to fall to the effector with a recent history of poor performance.
Comparing Error Correction Procedures for Children Diagnosed with Autism
Townley-Cochran, Donna; Leaf, Justin B.; Leaf, Ronald; Taubman, Mitchell; McEachin, John
2017-01-01
The purpose of this study was to examine the effectiveness of two error correction (EC) procedures: modeling alone and the use of an error statement plus modeling. Utilizing an alternating treatments design nested into a multiple baseline design across participants, we sought to evaluate and compare the effects of these two EC procedures used to…
Testing Error Correcting Codes by Multicanonical Sampling of Rare Events
Iba, Yukito; Hukushima, Koji
2007-01-01
The idea of rare event sampling is applied to the estimation of the performance of error-correcting codes. The essence of the idea is importance sampling of the pattern of noises in the channel by Multicanonical Monte Carlo, which enables efficient estimation of tails of the distribution of bit error rate. The idea is successfully tested with a convolutional code.
Iterative Phase Optimization of Elementary Quantum Error Correcting Codes
Müller, M.; Rivas, A.; Martínez, E. A.; Nigg, D.; Schindler, P.; Monz, T.; Blatt, R.; Martin-Delgado, M. A.
2016-07-01
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Treelet Probabilities for HPSG Parsing and Error Correction
Ivanova, Angelina; van Noord, Gerardus; Calzolari, Nicoletta; al, et
2014-01-01
Most state-of-the-art parsers are designed to produce an analysis for any input despite errors. However, small grammatical mistakes in a sentence often cause the parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly inte
Correct Me to Tears: The Importance of Knowing the Learner before Correcting Errors.
Parrino, Angela
An advocate of the Natural Approach to second language instruction discusses problems associated with error correction, highlighted in her personal experience when she visited Italy to explore her heritage and use her Italian language skills. After requesting to have her errors corrected by a native-speaking friend, the visitor experienced great…
Homography-Based Correction of Positional Errors in MRT Survey
Nayak, Arvind; Shankar, N Udaya
2009-01-01
The Mauritius Radio Telescope (MRT) images show systematics in the positional errors of sources when compared to source positions in the Molonglo Reference Catalogue (MRC). We have applied a two-dimensional homography to correct positional errors in the image domain and avoid re-processing the visibility data. Positions of bright (above 15σ) sources, common to the MRT and MRC catalogues, are used to set up an over-determined system to solve for the 2-D homography matrix. After correction, the errors are found to be within 10% of the beamwidth for these bright sources and the systematics are eliminated from the images.
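A 2-D homography can be fitted to point correspondences by the standard direct linear transform (DLT), solving the over-determined system by SVD. The sketch below uses synthetic positions and a made-up distortion matrix purely for illustration; it is not the MRT pipeline.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (homogeneous
    coordinates) from point correspondences via the DLT least-squares method."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)     # solution = right singular vector
    H = Vt[-1].reshape(3, 3)        # with the smallest singular value
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# a known small distortion applied to catalogue positions, then recovered
H_true = np.array([[1.001, 0.002, 0.5],
                   [-0.001, 0.999, -0.3],
                   [1e-6, 2e-6, 1.0]])
rng = np.random.default_rng(2)
src = rng.uniform(0, 1000, size=(20, 2))   # stand-in for catalogue positions
dst = apply_homography(H_true, src)        # stand-in for observed positions
H_est = fit_homography(src, dst)
```

With more than four correspondences the system is over-determined, so in practice the fit averages down the per-source position noise, as with the bright-source sample in the abstract.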
Matsumoto, Ryutaroh; Uyematsu, Tomohiko
1999-01-01
Comment: 10 pages, LaTeX2e. To appear in IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences (ISSN 0916-8508), vol. E83-A, no. 10, Oct. 2000. Revision on Dec. 14, 1999: Added a note on a systematic construction of quantum codes with efficient decoding algorithms. Revision on June 26, 2000: Corrected lots of errors, and added a review on the overall error correction process. No original materials were added
Ising Spin-Based Error Correcting Private-Key Cryptosystems
ZHENG Dong; ZHENG Yan-fei; FAN Wu-ying
2006-01-01
The Ising spin system has been shown to provide a new class of error-correcting code and can be used to construct public-key cryptosystems by making use of statistical mechanics. Here the relation between Ising spin systems and private-key cryptosystems is investigated. Two private-key systems are proposed, each based on two predetermined, randomly constructed sparse matrices and exploiting physical properties of MacKay-Neal (MN) low-density parity-check (LDPC) error-correcting codes. One is an error-correcting private-key system, which is powerful for combating ciphertext errors in communications and computer systems. The other is a private-key system with authentication.
Three-Dimensional Turbulent RANS Adjoint-Based Error Correction
Park, Michael A.
2003-01-01
Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine-grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropic adapted and uniformly refined grids.
Error Correction for the JLEIC Ion Collider Ring
Wei, Guohui [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Morozov, Vasiliy [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Lin, Fanglei [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Zhang, Yuhong [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Pilat, Fulvia C. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Nosochkov, Yuri [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wang, Min-Huey [SLAC National Accelerator Lab., Menlo Park, CA (United States)
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. These errors, including horizontal, vertical, and longitudinal displacement, roll error in the transverse plane, strength error of the main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, and chromaticity problems. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Based on real commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in this study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.
An investigation of error correcting techniques for OMV and AXAF
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distributions between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
Combining Trigram and Automatic Weight Distribution in Chinese Spelling Error Correction
李建华; 王晓龙
2002-01-01
Research on spelling correction aimed at detecting errors in texts tends to focus on context-sensitive spelling error correction, which is more difficult than traditional isolated-word error correction. A novel and efficient algorithm for Chinese spelling error correction, CInsunSpell, is presented. In this system, correction consists of two parts: a checking phase and a correcting phase. In the first phase, a trigram algorithm within one fixed-size window is designed to locate potential errors in a local area. The second phase employs a new method of automatically and dynamically distributing weights among the characters in the confusion set, together with a Bayesian language model. These tactics exhibit good performance.
Detecting and correcting hard errors in a memory array
Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.
2015-11-19
Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
An Analysis of English Learners' Errors at Vocational Colleges and How to Correct Them
李梅; 余莲君
2012-01-01
College students make errors in their English learning even though they have studied the language for more than six years. How do students make errors, and what kinds of errors do they make? It is important to understand these questions so that teachers can take appropriate measures for different kinds of errors. In this way we can stimulate students to learn English better and apply it correctly in daily life.
Variations on a theme: Songbirds, variability, and sensorimotor error correction.
Kuebrich, B D; Sober, S J
2015-06-18
Songbirds provide a powerful animal model for investigating how the brain uses sensory feedback to correct behavioral errors. Here, we review a recent study in which we used online manipulations of auditory feedback to quantify the relationship between sensory error size, motor variability, and vocal plasticity. We found that although inducing small auditory errors evoked relatively large compensatory changes in behavior, as error size increased the magnitude of error correction declined. Furthermore, when we induced large errors such that auditory signals no longer overlapped with the baseline distribution of feedback, the magnitude of error correction approached zero. This pattern suggests a simple and robust strategy for the brain to maintain the accuracy of learned behaviors by evaluating sensory signals relative to the previously experienced distribution of feedback. Drawing from recent studies of auditory neurophysiology and song discrimination, we then speculate as to the mechanistic underpinnings of the results obtained in our behavioral experiments. Finally, we review how our own and other studies exploit the strengths of the songbird system, both in the specific context of vocal systems and more generally as a model of the neural control of complex behavior.
DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION
E. B. Romanova
2016-03-01
Subject of Research. We have developed and present a method of design error correction for printed circuit boards (PCB) in electronic design automation (EDA). Control of the process parameters of a PCB in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of the conductors and gaps, the parameters of pads and via-holes, the parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge, and other design errors. The result of running the DRC program is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem of repeatability in DRC-error correction was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. For its solution, an analysis of DRC errors was carried out and the methods of their correction were studied. We propose clustering DRC errors: groups of errors include the types of errors whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence of DRC errors has been determined. The algorithm has been tested in the following EDA: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC-error correction time with the algorithm has been compared with the time without it. It has been shown that the time saved in DRC-error correction increases with the number of error types, up to 3.7 times. Practical Relevance. Application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different
CRC Look-up Table Optimization for Single-Bit Error Correction
[Anonymous]
2007-01-01
Many communication systems use the cyclic redundancy code (CRC) technique for protecting key data fields from transmission errors by enabling both single-bit error correction and multi-bit error detection. The look-up table design is very important for the error-correction implementation. This paper presents a CRC look-up table optimization method for single-bit error correction. The optimization method minimizes the address length of the pre-designed look-up table while satisfying certain restrictions. The circuit implementation is also presented to show the feasibility of the method in the application specific integrated circuit design. An application of the optimization method in the generic framing procedure protocol is implemented using field programmable gate arrays. The result shows that the memory address length has been minimized, while keeping a very simple circuit implementation.
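The basis of such a look-up table is that, for a suitable generator polynomial, each single-bit error position produces a distinct CRC syndrome, so the syndrome can index directly to the bit to flip. A minimal sketch follows, using CRC-8 with the primitive polynomial x^8+x^4+x^3+x^2+1 (0x11D), for which single-bit syndromes are unique for codeword lengths below 255 bits; the paper's actual CRC and its address-length optimization are not reproduced here.

```python
# Single-bit error correction via a CRC syndrome look-up table (sketch).
POLY, DEG = 0x11D, 8

def crc_remainder(bits):
    """Polynomial division over GF(2); bits is a 0/1 list, MSB first."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> DEG:
            reg ^= POLY
    return reg

def encode(msg_bits):
    """Append the 8 CRC bits so the whole codeword has zero remainder."""
    r = crc_remainder(msg_bits + [0] * DEG)
    return msg_bits + [(r >> (DEG - 1 - i)) & 1 for i in range(DEG)]

def build_table(length):
    """Map the syndrome of each single-bit error to its bit position."""
    table = {}
    for i in range(length):
        e = [0] * length
        e[i] = 1
        table[crc_remainder(e)] = i
    return table

def correct(received, table):
    syn = crc_remainder(received)
    if syn == 0:
        return received            # no detected error
    fixed = list(received)
    fixed[table[syn]] ^= 1         # flip the implicated bit
    return fixed
```

Because CRC is linear over GF(2), the syndrome of the received word equals the syndrome of the error pattern alone, which is what makes the table construction valid.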
The role of error correction in communicative second language teaching
H. Ludolph Botha
2013-02-01
According to recent research, correction of errors in both oral and written communication does little to aid language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis is on the acquisition of informal communication. Because the message and the understanding of the message remain of utmost importance, error correction is avoided. In Suggestopedia, where the focus is also on communication, error correction is avoided as it inhibits the pupil.
Quantum error correction assisted by two-way noisy communication.
Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C H
2014-11-26
Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1
Correcting systematic errors in high-sensitivity deuteron polarization measurements
Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata,' Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata,' Rome (Italy); Moricciani, D. [INFN-Sez. 'Roma Tor Vergata,' Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Experimental Realization of Continuous-Variable Quantum Error Correction Codes
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Andersen, Ulrik Lund
Quantum information processing relies on the robust and faithful transmission, storage and manipulation of quantum information. However, since different decoherence processes are inherent to any realistic implementation, the future of quantum information systems strongly relies on the ability to detect and perform error code correction and noise filtration. We present two different schemes to eliminate erasure errors and channel excess noise in continuous-variable quantum channels.
Error Correction and Long Run Equilibrium in Continuous Time
1988-01-01
This paper deals with error correction models (ECM's) and cointegrated systems that are formulated in continuous time. Problems of representation, identification, estimation and time aggregation are discussed. It is shown that every ECM in continuous time has a discrete time equivalent model in ECM format. Moreover, both models may be written as triangular systems with stationary errors. This formulation simplifies both the continuous and the discrete time ECM representations and it helps to ...
Error suppression and error correction in adiabatic quantum computation I: techniques and challenges
Young, Kevin C.; Sarovar, Mohan; Blume-Kohout, Robin
2013-01-01
Adiabatic quantum computation (AQC) is known to possess some intrinsic robustness, though it is likely that some form of error correction will be necessary for large scale computations. Error handling routines developed for circuit-model quantum computation do not transfer easily to the AQC model since these routines typically require high-quality quantum gates, a resource not generally allowed in AQC. There are two main techniques known to suppress errors during an AQC implementation: energy...
Biomechanical Analysis and Correction of Movement Errors in Javelin Throwing
Mu, Feng
2012-01-01
Javelin throwing gains momentum from the approach run and the throwing strides while the javelin is carried at the shoulder, then applies an explosive final effort along the javelin's longitudinal axis to release it over the shoulder. Because this sequence is completed while the athlete's body is moving at high speed, movement errors are quite likely. Using biomechanical principles, this paper analyzes the errors commonly made during the throw, traces them to their root causes, and proposes effective corrective measures, offering guidance for future javelin teaching and training.
Quantum Metrology Enhanced by Repetitive Quantum Error Correction
Unden, Thomas; Balasubramanian, Priya; Louzon, Daniel; Vinkler, Yuval; Plenio, Martin B.; Markham, Matthew; Twitchen, Daniel; Stacey, Alastair; Lovchinsky, Igor; Sushkov, Alexander O.; Lukin, Mikhail D.; Retzker, Alex; Naydenov, Boris; McGuinness, Liam P.; Jelezko, Fedor
2016-06-01
We experimentally demonstrate the protection of a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the time scales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multiqubit systems provide for quantum sensing and offers an important complement to quantum control techniques.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
Guth, Larry; Lubotzky, Alexander
2014-08-01
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low-density parity-check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259-273], who asked whether homological codes with such parameters could exist at all.
Quantum Information Processing and Quantum Error Correction An Engineering Approach
Djordjevic, Ivan
2012-01-01
Quantum Information Processing and Quantum Error Correction is a self-contained, tutorial-based introduction to quantum information, quantum computation, and quantum error-correction. Assuming no knowledge of quantum mechanics and written at an intuitive level suitable for the engineer, the book gives all the essential principles needed to design and implement quantum electronic and photonic circuits. Numerous examples from a wide area of application are given to show how the principles can be implemented in practice. This book is ideal for the electronics, photonics and computer engineer
Dailin Wang
2015-02-01
In this paper, we attempt to reduce the discrepancies between the modeled and observed tsunami arrival times. We treat the ocean as a homogeneous fluid, ignoring stratification due to compressibility and variations of temperature and salinity. The phase speed of surface gravity waves is reduced for a compressible fluid compared to that of an incompressible fluid. At the shallow water limit, the reduction in speed is about 0.86% at a water depth of 4000 m. We propose a simple ocean depth-correction method to implement the reduction in wave speed in the framework of the shallow water equations of an incompressible fluid: (1) we define an effective ocean depth such that the reduction of the phase speed due to the compressibility of seawater is exactly matched by the decrease in water depth (about a 2.5% reduction at an ocean depth of 6000 m and less than 0.1% at 200 m); (2) this effective depth is treated as if it were the real ocean depth. Implementation of the method only requires replacing the ocean bathymetry with the effective bathymetry, so there is no need to modify existing tsunami codes and thus there is no additional computational cost. We interpret the depth-correction method as a bulk parameterization of the combined effects of physical dispersion, compressibility, stratification, and the elasticity of the earth on wave speed. We applied this method to the 2010 Chile and 2011 Tohoku basin-crossing tsunamis. For the 2010 Chile tsunami, this approach resulted in very good agreement between the observed and modeled tsunami arrival times. For the 2011 Tohoku tsunami, we found good agreement between the modeled and the observed tsunami arrival times for most of the DARTs except those farthest from the source region, where discrepancies of as much as 3-4 min still remain.
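The effective-depth idea above follows directly from the shallow-water dispersion relation c = sqrt(g·h): a fractional speed reduction r is reproduced by an incompressible model run on a depth scaled by (1 − r)². A minimal sketch (the 0.86% figure is the value quoted for 4000 m; the function names are illustrative, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2


def shallow_water_speed(depth_m):
    """Long-wave (shallow-water) phase speed for an incompressible ocean."""
    return math.sqrt(G * depth_m)


def effective_depth(depth_m, speed_reduction):
    """Depth at which an incompressible model reproduces the reduced speed.

    speed_reduction is the fractional slowdown due to compressibility
    (e.g. 0.0086 for the ~0.86% reduction quoted at 4000 m depth).
    Since c = sqrt(g*h), matching (1 - r)*c requires h_eff = (1 - r)**2 * h.
    """
    return (1.0 - speed_reduction) ** 2 * depth_m


# ~0.86% speed reduction at 4000 m implies an effective depth ~1.7% shallower
h_eff = effective_depth(4000.0, 0.0086)
```

Existing shallow-water codes then simply run on `h_eff` instead of the true bathymetry, which is why the method adds no computational cost.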
Formalization of Error-correcting Codes using SSReflect
Affeldt, Reynald
2015-01-01
By adding redundant information to transmitted data, error-correcting codes (ECCs) make it possible to communicate reliably over noisy channels. Minimizing redundancy and coding/decoding time has driven much research, culminating with Low-Density Parity-Check (LDPC) codes. Hard-disk storage, wifi communications, mobile phones, etc.: most modern devices now rely on ECCs and in particular LDPC codes. Yet, correctness guarantees are only provided by research papers of ever-growing complexity. On...
A Review On Numerical Error Correction Using Various Techniques
Iqra Ahmed
2015-07-01
For decades, the role of symbolic computation in real-time calculation has been impossible to ignore. In any automated machine that maps inputs to estimated outputs, some error is inevitable, but it can be minimized by using suitable algorithms. This study focuses on techniques used for error correction in numeric and symbolic computations. After reviewing the techniques discussed previously, we present an analysis based on several parameters. The experimental results show that these algorithms offer better performance in terms of accuracy, performance, cost, validity, safety, security, reliability, and power consumption.
Leila Hajian
2014-09-01
Written error correction may be the most widely used method for responding to student writing. Although there are various studies investigating error correction, little research has considered teachers' and students' preferences regarding written error correction. The present study investigates students' and teachers' preferences and attitudes towards the correction of classroom written errors in the Iranian EFL context, using a questionnaire. In this study, 80 students and 12 teachers were asked to answer the questionnaire; the data were then collected and analyzed by descriptive methods. The findings show that both teachers and students hold positive attitudes towards written error correction. Although the results demonstrate that teachers and students share some common preferences related to written error correction, there are some important discrepancies. For example, students prefer all errors to be corrected, whereas teachers prefer to select some; students also prefer teacher correction over peer or self-correction. The study considers a number of difficulties that students and teachers face in written error correction processes and offers some suggestions. It shows that many teachers believe written error correction takes a lot of time and effort, while many students have no problem rewriting their papers after getting feedback, which may be a key positive point for improving their writing and building self-confidence. Keywords: Error correction, teacher feedback, preferences.
Balancing the Lifetime and Storage Overhead on Error Correction for Phase Change Memory.
An, Ning; Wang, Rui; Gao, Yuan; Yang, Hailong; Qian, Depei
2015-01-01
As DRAM faces scaling difficulties in terms of energy cost and reliability, some nonvolatile storage materials have been proposed as substitutes for, or supplements to, main memory. Phase Change Memory (PCM) is one of the most promising nonvolatile memories that could be put into use in the near future. However, before becoming a qualified main memory technology, PCM should be designed reliably so that it can keep the computer system running stably even when errors occur. The typical wear-out errors in PCM have been well studied, but the transient errors, caused by high-energy particles striking the complementary metal-oxide semiconductor (CMOS) circuitry of PCM chips or by resistance drift in multi-level cell PCM, have attracted little attention. In this paper, we propose an innovative mechanism, Local-ECC-Global-ECPs (LEGE), which addresses both soft errors and hard errors (wear-out errors) in PCM memory systems. Our idea is to deploy a local error correction code (ECC) section for every data line, which can detect and correct one-bit errors immediately, and a global error correction pointers (ECPs) buffer for the whole memory chip, which can be reloaded to correct additional hard error bits. The local ECC is used to detect and correct the unknown one-bit errors, and the global ECPs buffer is used to store the corrected values of hard errors. In comparison to ECP-6, our method provides almost identical lifetimes but reduces storage overhead by approximately 50%. Moreover, our structure reduces access latency overhead by approximately 3.55% at the cost of 1.61% additional storage compared to PAYG, a hard-error-only solution.
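The "local ECC" in a scheme like LEGE needs only single-bit detect-and-correct per data line. The paper does not specify which code it uses; as a stand-in, here is how a textbook Hamming(7,4) code locates and flips a single erroneous bit from its syndrome:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.

    Parity bits sit at positions 1, 2, 4 (1-based), each covering the
    positions whose index has the corresponding binary digit set.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]


def hamming74_correct(cw):
    """Recompute the parities; the syndrome is the 1-based error position."""
    c = list(cw)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the single erroneous bit
    return c
```

A nonzero syndrome both detects the error and names its position, which is why such a code can correct "immediately" in hardware, while multi-bit hard errors are escalated to the global ECP buffer.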
Introduction to error correcting codes in quantum computers
Salas, P J
2006-01-01
The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, gate implementation, and quantum error correction are considered error free. Finally, we relax this unrealistic assumption and introduce the concept of quantum fault tolerance. The existence of an error threshold permits the conclusion that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage (memory) error threshold: below 5.2 × 10^-5. The encoding is made by means of the [[7,1,3]] code.
Correcting biased observation model error in data assimilation
Harlim, John
2016-01-01
While the formulation of most data assimilation schemes assumes an unbiased observation model error, in real applications model error with nontrivial biases is unavoidable. A practical example is the error in the radiative transfer model (which is used to assimilate satellite measurements) in the presence of clouds. As a consequence, many (in fact 99%) of the cloudy observed measurements are not being used although they may contain useful information. This paper presents a novel nonparametric Bayesian scheme which is able to learn the observation model error distribution and correct the bias in incoming observations. This scheme can be used in tandem with any data assimilation forecasting system. The proposed model error estimator uses nonparametric likelihood functions constructed with data-driven basis functions based on the theory of kernel embeddings of conditional distributions developed in the machine learning community. Numerically, we show positive results with two examples. The first example is des...
Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua
2016-01-01
High-precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile and pressure profile. Surface reflectance is affected by many factors, so it is difficult to obtain its precise value, and its uncertainty will introduce large errors into the retrieval result. The uncertainties of the temperature, humidity and pressure profiles are also important sources of retrieval error, and they cause an unavoidable systematic error that is hard to eliminate using the CH4 band alone. In this paper, a ratio spectrometry method and a CO2 band correction method are proposed to reduce the error caused by these factors. The ratio spectrometry method decreases the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectrometry into ratio spectrometry. The CO2 band correction method converts column amounts of CH4 into a column-averaged mixing ratio by using the CO2 1.61 μm band, correcting the systematic error caused by the temperature, humidity and pressure profiles. The combination of these two correction methods decreases the effects of surface reflectance, temperature profile, humidity profile and pressure profile at the same time and reduces the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that the CH4 column-averaged mixing ratio retrieved after correction was close to the GOSAT Level 2 product and the retrieval precision was up to -0.24%. The studies suggest that the error of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced, and the retrieval precision highly improved, by using the ratio spectrometry method and the CO2 band correction method.
Analysis of ionospheric refraction error corrections for GRARR systems
Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.
1971-01-01
A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.
An Efficient Blind Signature Scheme Based on Error Correcting Codes
Junyao Ye
Cryptography based on the theory of error correcting codes and lattices has received wide attention in recent years. Shor's algorithm showed that in a world where quantum computers are assumed to exist, number-theoretic cryptosystems are insecure. The ...
Energy efficient error-correcting coding for wireless systems
Shao, Xiaoying
2010-01-01
The wireless channel is a hostile environment. The transmitted signal suffers not only multi-path fading but also noise and interference from other users of the wireless channel, which causes unreliable communication. To achieve high-quality communication, error-correcting coding is required t...
Capacities of Quantum Error Correcting Codes under adaptive concatenation
Fern, J
2007-01-01
We look at the effects of a quantum channel after each level of quantum error correcting codes (QECC) under recovery operators that are optimally adapted at each level. We use the entropy of the channel to estimate the capacities of QECCs. Considerable improvements in capacities are found under adaptive concatenation.
Common Persistence and Error-Correction Mode in Conditional Variance
LI Han-dong; ZHANG Shi-ying
2001-01-01
We first define the persistence and common persistence of a vector GARCH process from the point of view of integration, and then discuss the sufficient and necessary condition for co-persistence in variance. At the end of the paper, we give the properties and the error correction model of a vector GARCH process under the condition of co-persistence.
Phase error correction in wavefront curvature sensing via phase retrieval
Almoro, Percival; Hanson, Steen Grüner
2008-01-01
Wavefront curvature sensing with a phase error correction system is carried out using phase retrieval based on a partially developed volume speckle field. Various wavefronts are reconstructed: planar, spherical, cylindrical, and a wavefront passing through the side of a bare optical fiber. Spurious...
75 FR 15371 - Time Error Correction Reliability Standard
2010-03-29
Pursuant to section 215 of the Federal Power Act, the Commission proposes to remand the proposed revised Time Error Correction Reliability Standard developed by the North American Electric Reliability Corporation (NERC) in order for NERC to develop several modifications to the proposed Reliability Standard. The proposed action ensures that any modifications to Reliability Standards will be...
Long distance quantum communication using quantum error correction
Gingrich, R. M.; Lee, H.; Dowling, J. P.
2004-01-01
We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.
Direct cointegration testing in error-correction models
F.R. Kleibergen (Frank); H.K. van Dijk (Herman)
1994-01-01
An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The con...
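For readers unfamiliar with the ECM format discussed in the last two abstracts, the standard discrete-time vector error correction representation is:

```latex
\Delta y_t \;=\; \Pi\, y_{t-1} \;+\; \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i} \;+\; \varepsilon_t,
\qquad \Pi = \alpha\,\beta'
```

where the columns of β are the cointegrating vectors (the long-run equilibrium relations), α contains the adjustment speeds back toward that equilibrium, and the ε_t are stationary errors, consistent with the triangular-system formulation with stationary errors mentioned above.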
Forecasting the Euro exchange rate using vector error correction models
Aarle, B. van; Bos, M.; Hlouskova, J.
2000-01-01
Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex
Communication Systems Simulator with Error Correcting Codes Using MATLAB
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
The Nature and Correction of Diabatic Errors in Anyon Braiding
Christina Knapp
2016-10-01
Topological phases of matter are a potential platform for the storage and processing of quantum information with intrinsic error rates that decrease exponentially with inverse temperature and with the length scales of the system, such as the distance between quasiparticles. However, it is less well understood how error rates depend on the speed with which non-Abelian quasiparticles are braided. In general, diabatic corrections to the holonomy or Berry’s matrix vanish at least inversely with the length of time for the braid, with faster decay occurring as the time dependence is made smoother. We show that such corrections will not affect quantum information encoded in topological degrees of freedom, unless they involve the creation of topologically nontrivial quasiparticles. Moreover, we show how measurements that detect unintentionally created quasiparticles can be used to control this source of error.
Error-Correcting Codes for Reliable Communications in Microgravity Platforms
Filho, Décio L Gazzoni; Tosin, Marcelo C; Granziera, Francisco
2012-01-01
The PAANDA experiment was conceived to characterize the acceleration environment of a rocket-launched microgravity platform, especially the microgravity phase. The recorded data was transmitted to ground stations, leading to loss of telemetry information sent during the reentry period. Traditionally, an error-correcting code for this channel consists of a block code with a very large block size to protect against long periods of data loss. Instead, we propose the use of digital fountain codes along with conventional Reed-Solomon block codes to protect against long and short burst error periods, respectively. Aiming to use this approach in a second version of PAANDA to prevent data corruption, we propose a model for the communication channel based on information extracted from Cumã II's telemetry data, and simulate the performance of our proposed error-correcting code under this channel model. Simulation results show that nearly all telemetry data can be recovered, including data from the reentry period.
Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction
Chi-Huang Shih
2012-01-01
With the wide deployment of Internet Protocol (IP) infrastructure and the rapid development of digital technologies, Internet Protocol Television (IPTV) has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC) to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure the secure and reliable transmission of IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC, including the FEC-encoded redundancies and the limitation of error correction capacity, to protect the multimedia packets against malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme obtains similar performance compared with the joint encryption and FEC scheme.
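Packet-level FEC of the kind used in IPTV can be illustrated with the simplest possible code: one XOR parity packet over a group of k source packets, which recovers any single lost packet. This is a generic sketch of the underlying FEC idea, not the secure FEC scheme the paper proposes:

```python
def xor_parity(packets):
    """Build one repair packet as the bytewise XOR of k equal-length packets."""
    repair = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            repair[i] ^= b
    return bytes(repair)


def recover(received, repair):
    """Recover the single missing packet (the None entry) from the repair packet.

    XORing the repair packet with every received packet cancels them out,
    leaving exactly the missing packet's bytes.
    """
    missing = bytearray(repair)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                missing[i] ^= b
    return bytes(missing)
```

Real IPTV deployments use stronger codes (e.g. Reed-Solomon over larger groups), but the erasure-recovery principle, and the redundancy that the secure-FEC scheme repurposes, is the same.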
Forward Error Correction Convolutional Codes for RTAs' Networks: An Overview
Salehe I. Mrutu
2014-06-01
For more than half a century, Forward Error Correction Convolutional Codes (FEC-CC) have been in use to provide reliable data communication over various communication networks. The recent sharp increase in mobile communication services that require both bandwidth-intensive and interactive Real Time Applications (RTAs) imposes an increased demand for fast and reliable wireless communication networks. Transmission burst errors, data decoding complexity, and jitter are identified as key factors influencing the quality of service of RTA implementations over wireless transmission media. This paper reviews FEC-CC, one of the most commonly used algorithms in forward error correction, with the purpose of improving its operational performance. Under this category, we analyze various previous works for their strengths and weaknesses in decoding FEC-CC. A comparison of various decoding algorithms is made based on their decoding computational complexity.
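A minimal example of the encoder side of FEC-CC: a rate-1/2, constraint-length-3 convolutional encoder with the common (7, 5) octal generators. The review above concerns decoding; this sketch shows only how the two output streams are produced from the shift-register state:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, constraint length 3.

    g1, g2 are the generator polynomials (7 and 5 in octal); each output
    bit is the parity of the register bits selected by a generator's taps.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # shift the new bit into a 3-bit window
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out
```

Each input bit yields two output bits (hence rate 1/2); a Viterbi or sequential decoder then exploits this redundancy, at the decoding-complexity cost the review compares.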
Combined Error Correction Techniques for Quantum Computing Architectures
Byrd, M S; Byrd, Mark S.; Lidar, Daniel A.
2003-01-01
Proposals for quantum computing devices are many and varied. They each have unique noise processes that make none of them fully reliable at this time. There are several error correction/avoidance techniques which are valuable for reducing or eliminating errors, but not one, alone, will serve as a panacea. One must therefore take advantage of the strength of each of these techniques so that we may extend the coherence times of the quantum systems and create more reliable computing devices. To this end we give a general strategy for using dynamical decoupling operations on encoded subspaces. These encodings may be of any form; of particular importance are decoherence-free subspaces and quantum error correction codes. We then give means for empirically determining an appropriate set of dynamical decoupling operations for a given experiment. Using these techniques, we then propose a comprehensive encoding solution to many of the problems of quantum computing proposals which use exchange-type interactions. This us...
Comparison of Topographic Correction Methods
Rudolf Richter
2009-07-01
A comparison of topographic correction methods is conducted for Landsat-5 TM, Landsat-7 ETM+, and SPOT-5 imagery from different geographic areas and seasons. Three established methods are compared: the semi-empirical C correction, the Gamma correction depending on the incidence and exitance angles, and a modified Minnaert approach. In the majority of cases the modified Minnaert approach performed best, but no method is superior in all cases.
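Of the three methods compared, the semi-empirical C correction is the simplest to state: pixel radiance is scaled by (cos θ_z + c) / (cos i + c), where θ_z is the solar zenith angle, i is the local solar incidence angle on the slope, and c is a band-specific constant obtained by regressing radiance on cos i. A sketch (function name and argument conventions are illustrative, not from the paper):

```python
import math


def c_correction(radiance, sun_zenith_deg, incidence_deg, c):
    """Semi-empirical C topographic correction for one pixel and one band.

    Scales observed radiance by (cos(theta_z) + c) / (cos(i) + c), so that
    slopes tilted away from the sun (large i) are brightened and slopes
    facing the sun are darkened toward the flat-terrain value.
    """
    cos_sz = math.cos(math.radians(sun_zenith_deg))
    cos_i = math.cos(math.radians(incidence_deg))
    return radiance * (cos_sz + c) / (cos_i + c)
```

On flat terrain the incidence angle equals the solar zenith angle and the correction leaves the radiance unchanged; the additive c moderates overcorrection in weakly illuminated areas, which is the weakness the modified Minnaert approach also targets.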
Correction of Discretization Errors Simulated at Supply Wells.
MacMillan, Gordon J; Schumacher, Jens
2015-01-01
Many hydrogeology problems require predictions of hydraulic heads in a supply well. In most cases, the regional hydraulic response to groundwater withdrawal is best approximated using a numerical model; however, simulated hydraulic heads at supply wells are subject to errors associated with model discretization and well loss. An approach for correcting the simulated head at a pumping node is described here. The approach corrects for errors associated with model discretization and can incorporate the user's knowledge of well loss. The approach is model independent, can be applied to finite difference or finite element models, and allows the numerical model to remain somewhat coarsely discretized and therefore numerically efficient. Because the correction is implemented external to the numerical model, one important benefit of this approach is that a response matrix, reduced model approach can be supported even when nonlinear well loss is considered.
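One common form of the discretization correction described here relates the simulated node head to the head at the well radius via the Thiem equation, using Peaceman's effective radius (r_eq ≈ 0.208 Δx for a square, isotropic finite-difference cell). A hedged sketch, assuming confined, steady radial flow and ignoring well loss; the paper's own correction is more general:

```python
import math


def corrected_well_head(h_node, Q, T, dx, r_well):
    """Correct a simulated node head to the head at the actual well radius.

    Assumes a square finite-difference cell of size dx, transmissivity T,
    and extraction rate Q > 0. Peaceman's result is that the node head
    corresponds to the head at radius r_eq ~ 0.208*dx, so the Thiem
    equation carries it down to the well radius r_well.
    """
    r_eq = 0.208 * dx
    return h_node - (Q / (2.0 * math.pi * T)) * math.log(r_eq / r_well)
```

Because this adjustment is applied outside the numerical model, the grid can stay coarse; a nonlinear well-loss term could be subtracted in the same post-processing step.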
Farahani, Ali Akbar; Salajegheh, Soory
2015-01-01
Although the provision of error correction is common in education, there are controversies regarding "when" correction is most effective and why it is effective. This study investigated the differences between Iranian English as a foreign language (EFL) teachers and learners regarding their perspectives towards the timeline of error…
The Pedagogy of Error Correction: Surviving the Written Corrective Feedback Challenge
Guenette, Danielle
2012-01-01
Should we correct our students' language errors? Most ESL teachers would answer this question with a resounding Yes while at the same time wondering how to meet the challenge. The collaborative project reported below was designed to provide ESL teacher trainees with an opportunity to experience the ups and downs of providing corrective feedback on…
Pluribus - Exploring the Limits of Error Correction Using a Suffix Tree.
Savel, Daniel; LaFramboise, Thomas; Grama, Ananth; Koyuturk, Mehmet
2016-06-29
Next generation sequencing technologies enable efficient and cost-effective genome sequencing. However, sequencing errors increase the complexity of the de novo assembly process, and reduce the quality of the assembled sequences. Many error correction techniques utilizing substring frequencies have been developed to mitigate this effect. In this paper, we present a novel and effective method called PLURIBUS, for correcting sequencing errors using a generalized suffix trie. PLURIBUS utilizes multiple manifestations of an error in the trie to accurately identify errors and suggest corrections. We show that PLURIBUS produces the least number of false positives across a diverse set of real sequencing datasets when compared to other methods. Furthermore, PLURIBUS can be used in conjunction with other contemporary error correction methods to achieve higher levels of accuracy than either tool alone. These increases in error correction accuracy are also realized in the quality of the contigs that are generated during assembly. We explore, in-depth, the behavior of PLURIBUS, to explain the observed improvement in accuracy and assembly performance. PLURIBUS is freely available at http://compbio.
UNICON Laser Memory: Interlaced Codes for Multi-burst-Error Correction
Lim, R. S.; Korpi, J. E.
1977-01-01
Interlaced binary BCH codes are described for multiple-burst-error correction for the UNICON 690 laser memory. Other multiple-burst-error-correcting codes, such as Reed-Solomon codes and product codes, are also briefly mentioned. In particular, an interlaced (31, 21) t = 2 BCH code is selected as an outer code for UNICON double-burst-error correction. This code is shortened to (26, 16) and interlaced to degree X = 16. Decoding is implemented by table lookup. This method not only avoids all computations in GF(2^5), it also offers a decoding time of less than 1 μs. The inner code is an existing (80, 64) Fire code capable of correcting a single-burst error of length b ≤ 6.
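Interlacing (interleaving) to degree λ writes λ codewords as rows and transmits column-wise, so a burst of up to λ consecutive channel errors is spread across λ codewords, each seeing at most one symbol error. A generic block-interleaver sketch (with the UNICON parameters, depth would be 16 over (26, 16) codewords):

```python
def interleave(data, depth):
    """Block interleaver: write `depth` codewords as rows, read column-wise.

    A channel burst of up to `depth` consecutive errors then lands in
    `depth` different codewords, each within the per-codeword correction
    capacity of the outer code.
    """
    assert len(data) % depth == 0
    width = len(data) // depth
    return [data[r * width + c] for c in range(width) for r in range(depth)]


def deinterleave(data, depth):
    """Inverse permutation: reassemble the original row-major codewords."""
    width = len(data) // depth
    return [data[c * depth + r] for r in range(depth) for c in range(width)]
```

After deinterleaving, each codeword is decoded independently (here by table lookup), which is what lets a modest t = 2 code handle long bursts.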
Chen, Lan; Lai, Cheng
2014-01-01
To reduce the measurement error caused by temperature in large-size, high-precision measurement, two error correction methods are described: a formula-based method and a constant-temperature method. For the constant-temperature method, it is shown how MATLAB and Pro/M software simulations are used to determine the constant-temperature (soak) time of the workpiece, displayed as a dynamic temperature, in order to eliminate the impact of temperature changes on the measurement results. The effectiveness of the formula method and the simulation method is then compared.
Entanglement and Quantum Error Correction with Superconducting Qubits
Reed, Matthew
2015-03-01
Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These "transmon" qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
Engineering autonomous error correction in stabilizer codes at finite temperature
Freeman, C. Daniel; Herdman, C. M.; Whaley, K. B.
2017-07-01
We present an error-correcting protocol that enhances the lifetime of stabilizer code-based qubits which are susceptible to the creation of pairs of localized defects (due to stringlike error operators) at finite temperature, such as the toric code. The primary tool employed is periodic application of a local, unitary operator, which exchanges defects and thereby translates localized excitations. Crucially, the protocol does not require any measurements of stabilizer operators and therefore can be used to enhance the lifetime of a qubit in the absence of such experimental resources.
Anthropometric data error detecting and correction with a computer
Chesak, D. D.
1981-01-01
Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least-squares curve-fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that these error signals were impossible to eliminate from the initial data produced by the television cameras, and that removing them was primarily a software problem requiring a digital computer to refine the data off line. The specific data of interest were related to the arm reach envelope of a human being.
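The least-squares screening described above can be sketched as follows: fit a smooth curve to the series, flag points whose residuals are improbably large, and substitute the fitted values for them. The polynomial degree and the threshold below are illustrative choices, not values from the report.

```python
import numpy as np

def replace_outliers(t, y, degree=2, n_sigma=3.0):
    """Fit a least-squares polynomial; replace points whose residuals
    exceed n_sigma residual standard deviations with the fitted values."""
    fitted = np.polyval(np.polyfit(t, y, degree), t)
    resid = y - fitted
    bad = np.abs(resid) > n_sigma * resid.std()
    return np.where(bad, fitted, y), bad
```

On a clean quadratic signal with one spurious spike, only the spike is flagged and replaced by the fitted value.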
An Opportunistic Error Correction Layer for OFDM Systems
Shao Xiaoying
2009-01-01
We propose a novel cross-layer scheme to reduce the power consumption of ADCs in OFDM systems. The ADCs in a receiver can consume up to 50% of the total baseband energy. Our scheme is based on resolution-adaptive ADCs and Fountain codes. In a wireless frequency-selective channel, some subcarriers have good channel conditions and others are attenuated. The key part of the proposed system is that the dynamic range of the ADCs can be reduced by discarding subcarriers that are attenuated by the channel; correspondingly, the power consumption of the ADCs can be decreased. In our approach, each subcarrier carries a Fountain-encoded packet. To protect Fountain-encoded packets against bit errors, an LDPC code is used. The receiver decodes only the subcarriers (i.e., Fountain-encoded packets) with the highest SNR; the others are discarded. For that reason, an LDPC code with a relatively high code rate can be used. The new error correction layer does not require perfect channel knowledge, so it can be used in a realistic system where the channel is estimated. With our approach, more than 70% of the energy consumption in the ADCs can be saved compared with the conventional IEEE 802.11a WLAN system under the same channel conditions and throughput. In addition, it requires 7.5 dB less SNR than the 802.11a system. To reduce the overhead of Fountain codes, we apply message passing and Gaussian elimination in the decoder; in this way, the overhead is 3% for a small block size (i.e., 500 packets). Using both methods results in an efficient system with low delay.
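The subcarrier-selection step at the heart of this scheme can be sketched in isolation: rank subcarriers by estimated SNR and keep only the best fraction, letting the Fountain code make up for the discarded packets. The function name and the keep fraction are illustrative, not part of the original system.

```python
def select_subcarriers(snrs, keep_fraction=0.7):
    """Return the indices of the highest-SNR subcarriers, in index order;
    the remaining (attenuated) subcarriers are simply discarded."""
    k = max(1, int(len(snrs) * keep_fraction))
    best = sorted(range(len(snrs)), key=lambda i: snrs[i], reverse=True)[:k]
    return sorted(best)

# Five subcarriers, keep the top 60%: indices 1, 3 and 4 survive.
kept = select_subcarriers([3, 10, 1, 7, 5], keep_fraction=0.6)
```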
Atmospheric Error Correction of the Laser Beam Ranging
J. Saydi
2014-01-01
Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. The atmospheric correction was calculated for the 0.532, 1.3, and 10.6 micron wavelengths under the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, using monthly-mean meteorological data received from meteorological stations in those cities. The correction was calculated for laser beam propagation over 11, 100, and 200 kilometers at rising angles of 30°, 60°, and 90° for each propagation distance. The results show that, for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and that the laser ranging error decreased as the laser emission angle increased. The atmospheric corrections computed with the Marini-Murray and Mendes-Pavlis models for the 0.532 micron wavelength were also compared.
Generalized subspace correction methods
Kolm, P. [Royal Institute of Technology, Stockholm (Sweden); Arbenz, P.; Gander, W. [Eidgenoessiche Technische Hochschule, Zuerich (Switzerland)
1996-12-31
A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide range of applications. Due to the rapid development of parallel computers and the increasing demand for their large computing power, it has become important to design iterative methods specialized for these new architectures.
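A minimal example of the iterative methods alluded to is Jacobi iteration, which benefits from sparsity because each sweep only needs the nonzero entries of A (shown dense here for brevity):

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration: x <- D^{-1} (b - (A - D) x), with D = diag(A).
    Converges when A is strictly diagonally dominant."""
    d = np.diag(A).astype(float)
    R = A - np.diagflat(d)  # off-diagonal part of A
    x = np.zeros_like(d)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)  # close to np.linalg.solve(A, b)
```

Subspace correction methods such as those in the paper generalize this picture: each sweep solves a small correction problem on a subspace rather than a single diagonal entry.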
Thermalization, Error Correction, and Memory Lifetime for Ising Anyon Systems
Brell, Courtney G.; Burton, Simon; Dauphinais, Guillaume; Flammia, Steven T.; Poulin, David
2014-07-01
We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath. We propose a phenomenological model for the resulting short-time dynamics that includes pair creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of system, we use our thermalization model to estimate the lifetime of the quantum information stored in the encoded spaces. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting. We perform large-scale numerical simulations of these two-dimensional Ising anyon systems and find that the thresholds of these models range from 13% to 25%. To our knowledge, these are the first numerical threshold estimates for quantum codes without explicit additive structure.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-10
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing problem, we use the so-called sup tests, which here require the development of new (uniform) weak convergence results. These results are potentially useful in general for the analysis of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...
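For the bivariate linear special case, the classical two-step Engle-Granger procedure, a much simpler relative of the QML estimators studied above, estimates the cointegrating relation first and then regresses the differenced series on the lagged error-correction term. The sketch below is illustrative only; all names are hypothetical.

```python
import numpy as np

def ecm_two_step(y, x):
    """Two-step estimation of dy_t = c + gamma*ect_{t-1} + phi*dx_t + e_t,
    where ect is the residual of the cointegrating regression y = a + b*x."""
    a, b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                           y, rcond=None)[0]
    ect = y - a - b * x  # error-correction term
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dy), ect[:-1], dx])
    c, gamma, phi = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return b, gamma  # long-run slope and speed of adjustment

# Simulated cointegrated pair: y_t = 2 x_t + u_t with white-noise u,
# for which the true speed of adjustment is gamma = -1.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2000))
y = 2.0 * x + rng.normal(size=2000)
b, gamma = ecm_two_step(y, x)
```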
Quantum error correction against photon loss using multicomponent cat states
Bergmann, Marcel; van Loock, Peter
2016-10-01
We analyze a generalized quantum error-correction code against photon loss where a logical qubit is encoded into a subspace of a single oscillator mode that is spanned by distinct multicomponent cat states (coherent-state superpositions). We present a systematic code construction that includes the extension of an existing one-photon-loss code to higher numbers of losses. When subject to a photon loss (amplitude damping) channel, the encoded qubits are shown to exhibit a cyclic behavior where the code and error spaces each correspond to certain multiples of losses, half of which can be corrected. As another generalization we also discuss how to protect logical qudits against photon losses, and as an application we consider a one-way quantum communication scheme in which the encoded qubits are periodically recovered while the coherent-state amplitudes are restored as well at regular intervals.
马楠; 周秀骥; 颜鹏; 赵春生
2015-01-01
The TSI 3563 integrating nephelometer is a high-quality instrument for in-situ measurement of the aerosol scattering coefficient and is widely used worldwide. Owing to limitations inherent in the instrument design, however, its measurements contain two systematic errors: the truncation error (the geometrical blockage of near-forward/backward-scattered light) and the non-Lambertian error (the slightly non-cosine-weighted intensity distribution of the illumination provided by the opal glass diffusor). These errors bias the measured scattering coefficient low, typically by about 10%, and must be corrected to obtain accurate values. Using aerosol properties measured in the North China Plain during the 2009 HaChi (Haze in China) field campaign, the existing correction method was tested against a Mie-model reference, which requires the aerosol number size distribution and refractive index as input. The traditional correction method, which requires only data from the nephelometer itself, was found to be unsuitable for heavily polluted regions such as the North China Plain. An improved correction method is therefore proposed that uses simultaneously measured PM1 and PM10 data to include the supermicron particle volume fraction as a parameter, applying different correction functions for different volume fractions. Tests against the observations show that the improved method performs considerably better than the traditional one.
Bond additivity corrections for quantum chemistry methods
C. F. Melius; M. D. Allendorf
1999-04-01
In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method depend only on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
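The bond-wise additive part of a BAC scheme is conceptually simple: the raw ab initio result is shifted by a sum of empirical per-bond terms. A toy sketch of that bookkeeping (the parameter values below are invented placeholders, not fitted BAC parameters):

```python
def bac_correct(raw_kcal, bonds, params):
    """Apply bond-wise additive corrections to a raw calculated energy.
    bonds: list of (atom, atom) pairs; params: per-bond-type corrections."""
    return raw_kcal + sum(params[frozenset(b)] for b in bonds)

# Invented per-bond corrections (kcal/mol), for illustration only.
params = {frozenset({"C", "H"}): -0.5, frozenset({"C", "O"}): -1.2}
corrected = bac_correct(-40.0, [("C", "H"), ("C", "H"), ("C", "O")], params)
```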
THE USE OF GROUP ERROR CORRECTION IN ENGLISH TEACHING TO INCREASE LEARNER INVOLVEMENT
2000-01-01
In view of the major defect of traditional teacher correction, this paper introduces a new approach to error correction, group error correction, in which the learners' role in learning the language is greatly increased. Group error correction can be used to correct errors in students' oral work, group work and written work, both in class and after class. Half a year's practice of group error correction shows that it helps increase learner involvement in the teaching and learning process, stimulate learner motivation in learning the foreign language, raise the learners' awareness of errors, facilitate learners' learning of the foreign language, and relieve the teacher's burden, and it helps the teacher make better teaching plans. Error correction is an enormously complex process (Ellis, 1994, p. 585). As for which is the most effective method of correcting errors, researchers have not reached an agreement; therefore more effort needs to be made in this field.
Cerebellar substrates for error correction in motor conditioning.
Gluck, M A; Allen, M T; Myers, C E; Thompson, R F
2001-11-01
The authors evaluate a mapping of Rescorla and Wagner's (1972) behavioral model of classical conditioning onto the cerebellar substrates for motor reflex learning and illustrate how the limitations of the Rescorla-Wagner model are just as useful as its successes for guiding the development of new psychobiological theories of learning. They postulate that the inhibitory pathway that returns conditioned response information from the cerebellar interpositus nucleus back to the inferior olive is the neural basis for the error correction learning proposed by Rescorla and Wagner (Gluck, Myers, & Thompson, 1994; Thompson, 1986). The authors' cerebellar model expects that behavioral processes described by the Rescorla-Wagner model will be localized within the cerebellum and related brain stem structures, whereas behavioral processes beyond the scope of the Rescorla-Wagner model will depend on extracerebellar structures such as the hippocampus and related cortical regions. Simulations presented here support both implications. Several novel implications of the authors' cerebellar error-correcting model are described including a recent empirical study by Kim, Krupa, and Thompson (1998), who verified that suppressing the putative error correction pathway should interfere with the Kamin (1969) blocking effect, a behavioral manifestation of error correction learning. The authors also discuss the model's implications for understanding the limits of cerebellar contributions to associative learning and how this informs our understanding of hippocampal function in conditioning. This leads to a more integrative view of the neural substrates of conditioning in which the authors' real-time circuit-level model of the cerebellum can be viewed as a generalization of the long-term memory module of Gluck and Myers' (1993) trial-level theory of cerebellar-hippocampal interaction in motor conditioning.
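The Rescorla-Wagner error-correction rule that the authors map onto the olivo-cerebellar loop is compact enough to state directly: every conditioned stimulus (CS) present on a trial is updated by a share of the common prediction error. The sketch below reproduces the blocking effect discussed above; the learning rate and trial counts are illustrative choices.

```python
def rescorla_wagner(trials, beta=0.3, lam=1.0):
    """Associative strengths V[cs] updated by the shared prediction error:
    V[cs] += beta * (lam - sum of V over the CSs present on the trial)."""
    V = {}
    for present in trials:  # each trial: set of CSs, US always delivered
        err = lam - sum(V.get(cs, 0.0) for cs in present)
        for cs in present:
            V[cs] = V.get(cs, 0.0) + beta * err
    return V

# Blocking (Kamin, 1969): pretraining on A alone drives the prediction
# error to zero, so B gains almost no strength during compound trials.
V = rescorla_wagner([{"A"}] * 50 + [{"A", "B"}] * 50)
# V["A"] is near 1; V["B"] stays near 0.
```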
Error-correcting pairs for a public-key cryptosystem
Pellikaan, Ruud; Márquez-Corbella, Irene
2017-06-01
Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based, multivariate, and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes have been proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having a high-performance t-bounded decoding algorithm, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS codes, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to efficiently retrieve an error-correcting pair. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.
Forecasting the price of gold: An error correction approach
Kausik Gangopadhyay
2016-03-01
Gold prices in the Indian market may be influenced by a multitude of factors, such as the value of gold in investment decisions, as an inflation hedge, and in consumption motives. We develop a model to explain and forecast gold prices in India using a vector error correction model. We identify the investment motive and the inflation hedge as prime movers of the data. We also present out-of-sample forecasts of our model and the related properties.
Systematic Error of Acoustic Particle Image Velocimetry and Its Correction
Mickiewicz Witold
2014-08-01
Particle Image Velocimetry (PIV) is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. PIV with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between the acquisition of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure based on the proposed model, applied to the measurement data, increases the accuracy of acoustic particle velocity field visualization and creates new possibilities for observing sound fields excited with multi-tonal or band-limited noise signals.
Error Correcting Coding for a Non-symmetric Ternary Channel
Bitouze, Nicolas; Rosnes, Eirik
2009-01-01
Ternary channels can be used to model the behavior of some memory devices, where information is stored in three different levels. In this paper, error correcting coding is considered for a ternary channel where some of the error transitions are not allowed. The resulting channel is non-symmetric; therefore, classical linear codes are not optimal for this channel. We define the maximum-likelihood (ML) decoding rule for ternary codes over this channel and show that it is complex to compute, since it depends on the channel error probability. A simpler alternative decoding rule which depends only on code properties, called $\da$-decoding, is then proposed. It is shown that $\da$-decoding and ML decoding are equivalent, i.e., $\da$-decoding is optimal, under certain conditions. Assuming $\da$-decoding, we characterize the error correcting capabilities of ternary codes over the non-symmetric ternary channel. We also derive an upper bound and a constructive lower bound on the size of codes, given the code length and...
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging.
Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu
2017-03-17
Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.
Piggybacking intraocular implants to correct pseudophakic refractive error.
Gayton, J L; Sanders, V; Van der Karr, M; Raanan, M G
1999-01-01
To determine the safety and efficacy of implanting a second intraocular lens (IOL) to correct pseudophakic refractive error. Noncomparative, prospective, consecutive case series. Eight eyes of eight normal pseudophakes and seven eyes of seven postpenetrating keratoplasty (PK) pseudophakes were included in the study. A second intraocular lens (IOL) was implanted anterior to the first in each eye in the study. Efficacy was determined based on the achieved refractive correction and Snellen uncorrected visual acuity measurements. Safety was determined based on loss of best-corrected visual acuity and operative and postoperative complications. Before surgery, spherical equivalents ranged from -5.12 diopters (D) to 7.5 D, with a mean absolute deviation from emmetropia of 3.38 D (1.62). After surgery, spherical equivalents ranged from -2.75 D to 0.5 D, with a mean absolute deviation from emmetropia of 1.21 D (0.90). Before surgery, only 7% of patients had 20/40 or better uncorrected vision, whereas after surgery, 50% had that level of vision. Implanting a second IOL is a viable option for correcting pseudophakic refractive error.
李岩; 王中原; 丁传炳
2012-01-01
To improve the attitude measurement precision of a gliding extended-range guided projectile (GERGP), the attitude-angle calculation method based on single-antenna GPS measurement was analyzed. To correct the errors of all system parameters, a Kalman filter was designed for a fully coupled GPS/INS configuration, using the position, velocity, and attitude errors as observations, with the estimated results fed back to the INS. Numerical calculation and simulation show that applying the Kalman filter with GPS attitude-angle measurements to correct the INS attitude-angle errors reduces the measurement errors of the three attitude angles, speeds up the convergence of the system errors, improves the precision of the gliding control system, and enhances the gliding extended-range performance of the guided projectile.
On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model
Prasad, K
2010-01-01
Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under an error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$ ($0 \leq p_e < 0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single-edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in the low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...
A sparsity-driven approach for joint SAR imaging and phase error correction.
Önhon, N Özben; Cetin, Müjdat
2012-04-01
Image formation algorithms in a variety of applications have explicit or implicit dependence on a mathematical model of the observation process. Inaccuracies in the observation model may cause various degradations and artifacts in the reconstructed images. The application of interest in this paper is synthetic aperture radar (SAR) imaging, which particularly suffers from motion-induced model errors. These types of errors result in phase errors in SAR data, which cause defocusing of the reconstructed images. Particularly focusing on imaging of fields that admit a sparse representation, we propose a sparsity-driven method for joint SAR imaging and phase error correction. Phase error correction is performed during the image formation process. The problem is set up as an optimization problem in a nonquadratic regularization-based framework. The method involves an iterative algorithm, each iteration of which consists of consecutive steps of image formation and model error correction. Experimental results show the effectiveness of the approach for various types of phase errors, as well as the improvements that it provides over existing techniques for model error compensation in SAR.
Surface code error correction on a defective lattice
Nagayama, Shota; Fowler, Austin G.; Horsman, Dominic; Devitt, Simon J.; Van Meter, Rodney
2017-02-01
The yield of physical qubits fabricated in the laboratory is much lower than that of classical transistors in production semiconductor fabrication. Actual implementations of quantum computers will be susceptible to loss in the form of physically faulty qubits. Though these physical faults must negatively affect the computation, we can deal with them by adapting error-correction schemes. In this paper we have simulated statically placed single-fault lattices and lattices with randomly placed faults at functional qubit yields of 80%, 90%, and 95%, showing practical performance of a defective surface code by employing actual circuit constructions and realistic errors on every gate, including identity gates. We extend the superplaquette solution of Stace et al. against dynamic losses for the surface code to handle static losses such as physically faulty qubits [1]. The single-fault analysis shows that a static loss at the periphery of the lattice has less negative effect than a static loss at the center. The randomly faulty analysis shows that 95% yield is good enough to build a large-scale quantum computer. The local gate error rate threshold is ∼0.3%, and a code distance of seven suppresses the residual error rate below the original error rate at p = 0.1%. 90% yield is also good enough when we discard badly fabricated quantum computation chips, while 80% yield does not show enough error suppression even when discarding 90% of the chips. We evaluated several metrics for predicting chip performance, and found that the average of the product of the number of data qubits and the cycle time of a stabilizer measurement gave the strongest correlation with logical error rates. Our analysis will help with selecting usable quantum computation chips from among the pool of all fabricated chips.
Seo, Bong-Chul; Krajewski, Witold F.
2015-12-01
This study offers a method to correct for the radar temporal sampling error when determining radar-rainfall accumulations. The authors evaluate the correction effect with respect to multiple factors associated with storm advection, rainfall characteristics, and different rainfall accumulation time scales. The advection method presented in this study uses linear interpolation of static rain storm locations observed at two intermittent radar sampling times to correct for the missed rainfall accumulations. The advection correction is applied to the high space (0.5 km) and time (5-min) resolution radar-rainfall products provided by the Iowa Flood Center. We use the ground reference data from a high quality and high density rain gauge network distributed over the Turkey River basin in Iowa to evaluate the advection corrected rain fields. We base our evaluation on six rain events and examine the correction performance/improvement with respect to the advection discretization, spatial grid aggregation, rainfall basin coverage, and conditional average rainfall intensity. The results show that the 1-min advection discretization is sufficient to represent the observed distribution of storm velocities for the presented cases. Grid aggregation that is motivated by the need to expedite the computation may induce errors in estimating advection vectors. The authors found that while the advection correction tends to enhance the QPE accuracy for intense rain storms over small or isolated areas, it has little impact on the improvement of light rain estimation.
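The linear-interpolation advection step described above can be sketched in one dimension: two scans observed one sampling interval apart are shifted along the storm motion and blended linearly in time to fill the missed accumulations. All names and the toy fields below are illustrative, not the Iowa Flood Center product:

```python
def shift1d(a, k):
    """Shift a 1-D rain field by k grid cells (zero-filled at the boundary)."""
    n = len(a)
    out = [0.0] * n
    for i in range(n):
        j = i - k
        if 0 <= j < n:
            out[i] = a[j]
    return out

def advected_accumulation(f0, f1, shift, substeps):
    """Estimate rainfall between two radar scans f0 and f1 whose storm
    moved `shift` cells over the interval: at each intermediate time
    fraction f, move f0 forward and f1 backward along the advection
    vector and blend them linearly, then average over the substeps.
    A minimal 1-D sketch; the real product works on 2-D 0.5 km grids."""
    acc = [0.0] * len(f0)
    for s in range(1, substeps + 1):
        f = s / (substeps + 1)                      # time fraction between scans
        a = shift1d(f0, round(f * shift))           # storm advected forward
        b = shift1d(f1, -round((1 - f) * shift))    # storm advected backward
        for i in range(len(acc)):
            acc[i] += (1 - f) * a[i] + f * b[i]
    return [v / substeps for v in acc]
```

With a storm at cell 1 in the first scan and cell 3 in the second, a single intermediate step places the rain at cell 2, which naive accumulation of the two static scans would miss entirely.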
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo
2016-01-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo...... than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients....... This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel...
Algorithm for correcting optimization convergence errors in Eclipse.
Zacarias, Albert S; Mills, Michael D
2009-10-14
IMRT plans generated in Eclipse use a fast algorithm to evaluate dose for optimization and a more accurate algorithm for a final dose calculation, the Analytical Anisotropic Algorithm. The use of a fast optimization algorithm introduces optimization convergence errors into an IMRT plan. Eclipse has a feature where optimization may be performed on top of an existing base plan. This feature allows for the possibility of arriving at a recursive solution to optimization that relies on the accuracy of the final dose calculation algorithm and not the optimizer algorithm. When an IMRT plan is used as a base plan for a second optimization, the second optimization can compensate for heterogeneity and modulator errors in the original base plan. Plans with the same field arrangement as the initial base plan may be added together by adding the initial plan optimal fluence to the dose correcting plan optimal fluence. A simple procedure to correct for optimization errors is presented that may be implemented in the Eclipse treatment planning system, along with an Excel spreadsheet to add optimized fluence maps together.
Error Analysis of Band Matrix Method
Taniguchi, Takeo; Soga, Akira
1984-01-01
Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of the above error analysis.
Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence.
Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos
2016-12-07
When quantum computing becomes a wide-spread commercial reality, Quantum Search Algorithms (QSA) and especially Grover's QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable as is the Gaussian noise of classic circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels' deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered.
The Ryu-Takayanagi Formula from Quantum Error Correction
Harlow, Daniel
2017-09-01
I argue that a version of the quantum-corrected Ryu-Takayanagi formula holds in any quantum error-correcting code. I present this result as a series of theorems of increasing generality, with the final statement expressed in the language of operator-algebra quantum error correction. In AdS/CFT this gives a "purely boundary" interpretation of the formula. I also extend a recent theorem, which established entanglement-wedge reconstruction in AdS/CFT, when interpreted as a subsystem code, to the more general, and I argue more physical, case of subalgebra codes. For completeness, I include a self-contained presentation of the theory of von Neumann algebras on finite-dimensional Hilbert spaces, as well as the algebraic definition of entropy. The results confirm a close relationship between bulk gauge transformations, edge-modes/soft-hair on black holes, and the Ryu-Takayanagi formula. They also suggest a new perspective on the homology constraint, which basically is to get rid of it in a way that preserves the validity of the formula, but which removes any tension with the linearity of quantum mechanics. Moreover, they suggest a boundary interpretation of the "bit threads" recently introduced by Freedman and Headrick.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…
A Method for Correction of Radio Wave Refraction Error in Real-time Data Processing
黄家贵; 赵华; 刘元
2011-01-01
Simplified methods are often used to compute radio wave refraction errors in the course of real-time trajectory data processing, as it is difficult to obtain real weather data acquired by sounding rockets. As a result, the accuracy of atmospheric refraction error computation is not high and the correction effect is not desirable. To solve this problem, an improved radial track method based on a dual-exponential model is presented in this paper. Trial computation shows that the method is fast enough for real-time data processing and that its accuracy is commensurate with that of the refraction correction method used in post-processing.
Tutorial on Reed-Solomon error correction coding
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
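In the spirit of the tutorial, here is a minimal systematic Reed-Solomon encoder over GF(256) with a syndrome self-check. The primitive polynomial 0x11d is a common choice for RS(255, k) codes but not the tutorial's (15, 9) example; this is a sketch of the general construction, not the tutorial's own worked code:

```python
# Build GF(256) log/antilog tables for the field generated by x^8+x^4+x^3+x^2+1.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):       # double the table so gmul needs no modulo
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    """Multiply in GF(256) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):
    """Multiply polynomials with GF(256) coefficients."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gmul(pi, qj)
    return r

def generator_poly(nsym):
    """g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1)), the roots that give
    the code its error-locating structure."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append the remainder of msg(x)*x^nsym mod g(x)."""
    g = generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):       # synthetic division by monic g
        coef = rem[i]
        if coef:
            for j in range(1, len(g)):
                rem[i + j] ^= gmul(g[j], coef)
    return list(msg) + rem[len(msg):]

def syndromes(codeword, nsym):
    """Evaluate the codeword polynomial at a^0..a^(nsym-1) by Horner's rule;
    an all-zero syndrome vector means the codeword is valid."""
    out = []
    for i in range(nsym):
        s = 0
        for c in codeword:
            s = gmul(s, EXP[i]) ^ c
        out.append(s)
    return out
```

A valid codeword yields all-zero syndromes, while any corrupted symbol produces nonzero ones; the tutorial's error-locator polynomial is then built from exactly these syndrome values.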
Random access to mobile networks with advanced error correction
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel and soft decision, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots being equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
Concurrent remote entanglement with quantum error correction against photon losses
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Error-correcting pairs for a public-key cryptosystem
Márquez-Corbella, Irene
2012-01-01
Code-based cryptography is an interesting alternative to classic number-theory PKC since it is conjectured to be secure against quantum computer attacks. Many families of codes have been proposed for these cryptosystems; one of the main requirements is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (t-ECP). In this article the class of codes with a t-ECP is proposed for the McEliece cryptosystem. The hardness of retrieving the t-ECP for a given code is considered. As a first step, distinguishers of several subclasses are given.
Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2013-01-01
In this paper we propose a novel scheme for on the fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaption based on the link state...... of the optical light path and the required amount of throughput going towards the destination node. The result is a dynamic FEC, which can be used to optimize the connections for throughput and/or energy efficiency, depending on the current demand....
Topological Quantum Computation and Error Correction by Biological Cells
Lofthouse, J T
2005-01-01
A Topological examination of phospholipid dynamics in the Far from Equilibrium state has demonstrated that metabolically active cells use waste heat to generate spatially patterned membrane flows by forced convection and shear. This paper explains the resemblance between this nonlinear membrane model and Witten Kitaev type Topological Quantum Computation systems, and demonstrates how this self-organising membrane enables biological cells to circumvent the decoherence problem, perform error correction procedures, and produce classical level output as shielded current flow through cytoskeletal protein conduit. Cellular outputs are shown to be Turing compatible as they are determined by computable in principle hydromagnetic fluid flows, and importantly, are Adaptive from an Evolutionary perspective.
Likelihood-Based Inference in Nonlinear Error-Correction Models
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...
Error correction and fast detectors implemented by ultrafast neuronal plasticity.
Vardi, Roni; Marmari, Hagar; Kanter, Ido
2014-04-01
We experimentally show that the neuron functions as a precise time integrator, where the accumulated changes in neuronal response latencies, under complex and random stimulation patterns, are solely a function of a global quantity, the average time lag between stimulations. In contrast, momentary leaps in the neuronal response latency follow trends of consecutive stimulations, indicating ultrafast neuronal plasticity. On a circuit level, this ultrafast neuronal plasticity phenomenon implements error-correction mechanisms and fast detectors for misplaced stimulations. Additionally, at moderate (high) stimulation rates this phenomenon destabilizes (stabilizes) a periodic neuronal activity disrupted by misplaced stimulations.
Quantum secret sharing based on quantum error-correcting codes
Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu
2011-01-01
Quantum secret sharing (QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k-1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k-1) threshold scheme. It also takes advantage of classical enhancement of the [2k-1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into the QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels.
An investigation of error correcting techniques for OMV data
Ingels, Frank; Fryer, John
1992-01-01
Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).
Linear Correction Of The Influence Of Thickness Errors During The Evaporation Process
van der Laan, C. J.; Frankena, H. J.
1983-11-01
During the production of dielectric thin film stacks for optical use, small thickness errors are unavoidable. These can be detrimental to the reflectance curve R as a function of the wavelength λ. If the thickness error for a certain layer is known, however, its influence on the reflectance can be reduced by correcting the thicknesses of the following layers. Starting from the matrix of derivatives ∂Rj/∂tk, where Rj is the reflectance of the j-th extremum and tk the thickness of the k-th layer, a method is developed which calculates these corrections during the production process of the stack. Examples will be given, using a quartz crystal monitoring system by which an error is easily detectable. Using this method, the deviations in the reflectance curve can be reduced by a factor of about five. This resulting reduction is strongly dependent on the error in the last layer of the stack for which no compensation is possible.
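To first order, the correction step described above amounts to solving the linearized system J·Δt = -ΔR for thickness corrections Δt, where J is the derivative matrix ∂Rj/∂tk and ΔR the measured reflectance deviations. A toy 2×2 sketch with made-up numbers (the real matrix depends on the stack design and is not given in the abstract):

```python
def thickness_corrections(J, dR):
    """Solve the 2x2 linear system J @ dt = -dR by Cramer's rule,
    giving the layer-thickness corrections dt that cancel the measured
    reflectance deviations dR to first order. J and dR are illustrative
    values, not data from the paper."""
    (a, b), (c, d) = J
    det = a * d - b * c
    dt0 = (-dR[0] * d + dR[1] * b) / det
    dt1 = (-dR[1] * a + dR[0] * c) / det
    return dt0, dt1

# Example: two remaining layers, two monitored reflectance extrema.
dt = thickness_corrections(((2.0, 1.0), (1.0, 3.0)), (0.1, -0.2))
```

For a full stack the same idea applies with a larger (generally non-square) derivative matrix, solved in the least-squares sense.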
Analysis and Correction of Systematic Height Model Errors
Jacobsen, K.
2016-06-01
The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available as also detailed satellite orientation information. There is a tendency of systematic deformation at a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the not satisfying leveling of height models, but also low frequency height deformations can be seen. A tilt of the DHM by theory can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution is not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to not optimal DHM leveling. Supported by reference height models better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are
Diagnosis of weaknesses in modern error correction codes: a physics approach.
Stepanov, M G; Chernyak, V; Chertkov, M; Vasic, B
2005-11-25
One of the main obstacles to the wider use of the modern error-correction codes is that, due to the complex behavior of their decoding algorithms, no systematic method which would allow characterization of the bit-error-rate (BER) is known. This is especially true at the weak noise where many systems operate and where coding performance is difficult to estimate because of the diminishingly small number of errors. We show how the instanton method of physics allows one to solve the problem of BER analysis in the weak noise range by recasting it as a computationally tractable minimization problem.
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
Discretization vs. Rounding Error in Euler's Method
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
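The tradeoff described above is easy to observe: for a first-order method like Euler's, halving the stepsize roughly halves the discretization error, until h becomes so small that accumulated rounding error dominates. A minimal sketch in the discretization-dominated regime, using y' = y, y(0) = 1, whose exact value at t = 1 is e:

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n steps for the IVP y' = f(t, y), y(t0) = y0."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Global error of a first-order method scales like h, so doubling n
# should roughly halve the error (rounding error is negligible here).
err = lambda n: abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - math.e)
```

Pushing n toward 10^15 or beyond would reverse the trend, as each of the many tiny steps contributes rounding error, which is precisely the phenomenon the article uses Euler's method to exhibit.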
The Learner as Researcher: Student Concordancing and Error Correction
Jaqueline Mull
2013-03-01
The idea of language learners using a concordancer to autonomously investigate vocabulary and structure in a target language was suggested over 30 years ago. Since then, some research has explored this idea further, but the potential benefit of concordancers in the hands of learners is still largely unexplored, especially with regard to structure. This study investigates what learners are able to accomplish when asked to investigate an English corpus with a concordancer in order to correct grammar errors in an essay. The study was conducted after only 30 minutes of training on a concordancer. Participants' reactions to the software and to analyzing the target language autonomously are also shared. While participants' reactions were mixed with regard to using a concordancer for error correction, all participants expressed an interest in using a concordancer during their writing process, something beyond the scope of this study, which suggests a potential value for learner exposure to concordancers for autonomous language investigation.
Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage
Juha Partala
2017-01-01
A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error-correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public-key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.
Novel multipath routing protocol integrated with forward error correction in MANET
AN Hui-yao; LU Xi-cheng; PENG Wei; WANG Yang-yuan
2006-01-01
In order to improve the data transmission reliability of mobile ad hoc networks, a routing scheme called the integrated forward error correction multipath routing protocol was proposed, which integrates the techniques of packet fragmenting and forward error correction encoding into multipath routing. The scheme works as follows: adding a certain redundancy to the original packets; fragmenting the resulting packets into exclusive blocks of the same size; encoding them with the forward error correction technique; and then sending them to the destination node. When the receiving end receives a certain number of information blocks, the original information can be recovered even with partial loss. The performance of the scheme was evaluated using the OPNET modeler. The experimental results show that with this method the average transmission delay is decreased by 20% and the transmission reliability is increased by 300%.
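The fragment-redundancy-recover pipeline described above can be sketched with the simplest possible erasure code, a single XOR parity block; this is an illustration only, as the protocol's actual FEC code is not specified in the abstract:

```python
# Hedged sketch: fragment a packet into equal-size blocks, add one XOR-parity
# block of redundancy, send the blocks over multiple paths, and reconstruct
# the original even if any single block is lost in transit.
def fragment(data: bytes, k: int) -> list[bytes]:
    size = -(-len(data) // k)                      # ceiling division
    data = data.ljust(size * k, b"\x00")           # pad to a multiple of k
    return [data[i * size:(i + 1) * size] for i in range(k)]

def xor_parity(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            parity[i] ^= byte
    return bytes(parity)

blocks = fragment(b"hello multipath FEC!", 4)
parity = xor_parity(blocks)                        # the added redundancy
lost = 2                                           # one block lost on its path
received = blocks[:lost] + blocks[lost + 1:]
recovered = xor_parity(received + [parity])        # XOR of survivors + parity
assert recovered == blocks[lost]
```

Real FEC schemes (e.g. Reed-Solomon) tolerate multiple losses; the single-parity case only shows the principle.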
Laser-error-correction control unit for machine tools
Burleson, R.R.
1978-05-23
An ultraprecision machining capability is needed for the laser fusion program. For this work, a precision air-bearing spindle has been mounted horizontally on a modified vertical column of a Moore Number 3 measuring machine base located in a development laboratory at the Oak Ridge Y-12 Plant. An open-loop control system previously installed on this machine was inadequate to meet the upcoming requirements since accuracy is limited to 0.5 μm by the errors in the machine's gears and leadscrew. A new controller was needed that could monitor the actual position of the machine and perform real-time error correction on the programmed tool path. It was necessary that this project: (1) attain an optimum tradeoff between hardware and software; (2) use a modular design for easy maintenance; (3) use a standard NC tape service; (4) drive the x and y axes with a positioning resolution of 5.08 nm and a feedback resolution of 10 nm; (5) drive the x and y axis motors at a velocity of 0.05 cm/sec in the contouring mode and 0.18 cm/sec in the positioning mode; (6) eliminate the possibility of tape-reader errors; and (7) allow editing of the part description data. The work that was done to develop and install the new machine controller is described.
Detection and correction of inconsistency-based errors in non-rigid registration
Gass, Tobias; Szekely, Gabor; Goksel, Orcun
2014-03-01
In this paper we present a novel post-processing technique to detect and correct inconsistency-based errors in non-rigid registration. While deformable registration is ubiquitous in medical image computing, assessing its quality has remained an open problem. We propose a method that predicts local registration errors of existing pairwise registrations between a set of images, while simultaneously estimating corrected registrations. In the solution, the error is constrained to be small in areas of high post-registration image similarity, while local registrations are constrained to be consistent between direct and indirect registration paths. The latter is a critical property of an ideal registration process, and has frequently been used to assess the performance of registration algorithms. In our work, the consistency is used as a target criterion, for which we efficiently find a solution using a linear least-squares model on a coarse grid of registration control points. We show experimentally that the local errors estimated by our algorithm correlate strongly with true registration errors in experiments with known, dense ground-truth deformations. Additionally, the estimated corrected registrations consistently improve over the initial registrations in terms of average deformation error or TRE for different registration algorithms on both simulated and clinical data, independent of modality (MRI/CT), dimensionality (2D/3D) and employed primary registration method (demons/Markov random field).
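The consistency criterion means registrations along indirect paths should compose with direct ones. A toy 1-D least-squares sketch of that idea (hypothetical translation values, far simpler than the paper's control-grid model) looks like:

```python
# Toy analogue of consistency-based correction: pairwise 1-D translations
# t_ij should satisfy t_ac = t_ab + t_bc.  We model t_ij = x_j - x_i and
# solve for positions x in a least-squares sense, which distributes the
# inconsistency and yields corrected, mutually consistent registrations.
import numpy as np

pairs = {("a", "b"): 2.1, ("b", "c"): 3.0, ("a", "c"): 4.6}  # noisy inputs
idx = {"a": 0, "b": 1, "c": 2}

A = np.zeros((len(pairs) + 1, len(idx)))
t = np.zeros(len(pairs) + 1)
for row, ((i, j), tij) in enumerate(pairs.items()):
    A[row, idx[j]], A[row, idx[i]], t[row] = 1.0, -1.0, tij
A[-1, 0] = 1.0                          # gauge fix: pin x_a = 0

x, *_ = np.linalg.lstsq(A, t, rcond=None)
corrected = x[idx["c"]] - x[idx["a"]]   # consistent a->c registration
```

The cycle inconsistency here is 2.1 + 3.0 - 4.6 = 0.5, and least squares spreads it evenly across the three pairwise terms.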
Ichikawa, Tamaki; Kitanosono, Takashi; Koizumi, Jun; Ogushi, Yoichi; Tanaka, Osamu; Endo, Jun; Hashimoto, Takeshi; Kawada, Shuichi; Saito, Midori; Kobayashi, Makiko; Imai, Yutaka
2007-12-20
We evaluated the usefulness of radiological reporting that combines continuous speech recognition (CSR) and error correction by transcriptionists. Four transcriptionists (two with more than 10 years' and two with less than 3 months' transcription experience) listened to the same 100 dictation files and created radiological reports using conventional transcription and a method that combined CSR with manual error correction by the transcriptionists. We compared the two groups using the two methods for accuracy and report creation time, and evaluated inter-transcriptionist variation in accuracy rate and report creation time. We used a CSR system that did not require training to recognize the user's voice. We observed no significant difference in accuracy between the two groups and two methods that we tested, though transcriptionists with greater experience transcribed faster than those with less experience using conventional transcription. Using the combined method, error correction speed was not significantly different between the two groups of transcriptionists with different levels of experience. Combining CSR and manual error correction by transcriptionists enabled convenient and accurate radiological reporting.
Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models
J.J.J. Groen (Jan); F.R. Kleibergen (Frank)
1999-01-01
We propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these ...
Inserting Mastered Targets during Error Correction When Teaching Skills to Children with Autism
Plaisance, Lauren; Lerman, Dorothea C.; Laudont, Courtney; Wu, Wai-Ling
2016-01-01
Research has identified a variety of effective approaches for responding to errors during discrete-trial training. In one commonly used method, the therapist delivers a prompt contingent on the occurrence of an incorrect response and then re-presents the trial so that the learner has an opportunity to perform the correct response independently.…
MBTI Personality Type and the Utility of Error Correction among English Majors in Taiwan
Jones, Nathan Brian; Wang, Shun Hwa
2004-01-01
The issue of whether or not to correct errors in students' writing is controversial. Some scholars argue that error correction is helpful, while others argue that it is ineffective, perhaps even harmful. What is missing from the literature are studies about how error correction might affect the performance of specific types of students. This…
Topics in quantum cryptography, quantum error correction, and channel simulation
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
The role of prior knowledge in error correction for younger and older adults.
Sitzman, Danielle M; Rhodes, Matthew G; Tauber, Sarah K; Liceralde, Van Rynald T
2015-01-01
Previous work has demonstrated that, when given feedback, younger adults are more likely to correct high-confidence errors compared with low-confidence errors, a finding termed the hypercorrection effect. Research examining the hypercorrection effect in both older and younger adults has demonstrated that the relationship between confidence and error correction was stronger for younger adults compared with older adults. However, recent work suggests that error correction is largely related to prior knowledge, while confidence may primarily serve as a proxy for prior knowledge. Prior knowledge generally remains stable or increases with age; thus, the current experiment explored how both confidence and prior knowledge contributed to error correction in younger and older adults. Participants answered general knowledge questions, rated how confident they were that their response was correct, received correct answer feedback, and rated their prior knowledge of the correct response. Overall, confidence was related to error correction for younger adults, but this relationship was much smaller for older adults. However, prior knowledge was strongly related to error correction for both younger and older adults. Confidence alone played little unique role in error correction after controlling for the role of prior knowledge. These data demonstrate that prior knowledge largely predicts error correction and suggests that both older and younger adults can use their prior knowledge to effectively correct errors in memory.
Disorder-assisted error correction in Majorana chains
Bravyi, Sergey
2011-01-01
It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions - the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. ...
Likelihood-Based Inference in Nonlinear Error-Correction Models
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will in general have both stochastic trends and a linear trend. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...
Electrostatic stiffness correction for quadrature error in decoupled dual-mass MEMS gyroscope
Li, Hongsheng; Cao, Huiliang; Ni, Yunfang
2014-07-01
This paper proposes an electrostatic stiffness correction method for the quadrature error (QUER) in a decoupled dual-mass gyroscope structure. The QUER is caused by imperfections during the structure manufacturing process, and the two masses usually have different QUERs. The harmful contribution to the Coriolis signal is analyzed and quantified. The generating forms of QUER motion in both masses are analyzed, the correction electrodes' working principle is introduced, and a single-mass individual correction method is proposed. The QUER stiffness correction system is designed based on a PI controller, and experiments are arranged to verify the theoretical analysis. The bias stability decreases from 2.06 to 0.64 deg/h after the QUER correction, and the scale factor parameters of nonlinearity, asymmetry, and repeatability reduce from 143, 557, and 210 ppm to 84, 242, and 175 ppm, respectively.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g., that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied to correct the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of the fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the ultrasound dilution technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice.
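The elliptical-area correction lends itself to a back-of-envelope check. The sketch below uses the reported average axes and assumes flow scales directly with cross-sectional area (an assumed geometry, not the paper's full correction model):

```python
# Assumed-geometry sketch: if the vessel cross-section is an ellipse but a
# circle with the semi-minor radius is assumed, the area (and hence the
# volume flow, for a given mean velocity) is underestimated by 1 - a/b.
import math

b = 10.2 / 2              # semi-major axis [mm], average fistula (reported)
a = (10.2 / 1.086) / 2    # semi-minor axis: major axis is 8.6% larger
circle = math.pi * a * a  # area under the circular assumption
ellipse = math.pi * a * b # true elliptical area
underestimate = 1 - circle / ellipse   # = 1 - a/b, about 8%
print(f"flow underestimated by {100 * underestimate:.1f}%")
```

This only covers the area term; the off-axis beam correction in the paper is a separate effect.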
A Study of Quantum Error Correction by Geometric Algebra and Liquid-State NMR Spectroscopy
Sharf, Y; Somaroo, S S; Havel, T F; Knill, E H; Laflamme, R; Sharf, Yehuda; Cory, David G.; Somaroo, Shyamal S.; Havel, Timothy F.; Knill, Emanuel; Laflamme, Raymond
2000-01-01
Quantum error correcting codes enable the information contained in a quantum state to be protected from decoherence due to external perturbations. Applied to NMR, quantum coding does not alter normal relaxation, but rather converts the state of a ``data'' spin into multiple quantum coherences involving additional ancilla spins. These multiple quantum coherences relax at differing rates, thus permitting the original state of the data to be approximately reconstructed by mixing them together in an appropriate fashion. This paper describes the operation of a simple, three-bit quantum code in the product operator formalism, and uses geometric algebra methods to obtain the error-corrected decay curve in the presence of arbitrary correlations in the external random fields. These predictions are confirmed in both the totally correlated and uncorrelated cases by liquid-state NMR experiments on 13C-labeled alanine, using gradient-diffusion methods to implement these idealized decoherence models. Quantum error correcti...
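The three-bit code's syndrome logic has a simple classical analogue that can be sketched directly (bit-flip errors only; the quantum coherences and relaxation physics studied in the paper are of course not captured):

```python
# Classical sketch of the three-bit code: the data bit is copied to two
# ancillas, and the parity checks (q1 xor q2, q2 xor q3) form a syndrome
# that locates a single flipped bit without revealing the data value.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def syndrome(q: list[int]) -> tuple[int, int]:
    return (q[0] ^ q[1], q[1] ^ q[2])         # the two parity checks

def correct(q: list[int]) -> list[int]:
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip is not None:
        q[flip] ^= 1                          # undo the located bit flip
    return q

for err in range(3):                          # any single error is corrected
    q = encode(1)
    q[err] ^= 1
    assert correct(q) == [1, 1, 1]
```

In the quantum setting the same parities are measured as multi-spin observables, so the correction works on superpositions as well.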
Quantum error-correcting codes need not completely reveal the error syndrome
Shor, P W; Shor, Peter W; Smolin, John A
1996-01-01
Quantum error-correcting codes so far proposed have not been able to work in the presence of noise levels which introduce greater than one bit of entropy per qubit sent through the quantum channel. This has been because all such codes either find the complete error syndrome of the noise or trivially map onto such codes. We describe a code which does not find complete information on the noise and can be used for reliable transmission of quantum information through channels which introduce more than one bit of entropy per transmitted bit. In the case of the depolarizing ``Werner'' channel our code can be used in a channel of fidelity 0.8096 while the best existing code worked only down to 0.8107.
Transition State Theory: Variational Formulation, Dynamical Corrections, and Error Estimates
vanden-Eijnden, Eric
2009-03-01
Transition state theory (TST) is discussed from an original viewpoint: it is shown how to compute exactly the mean frequency of transition between two predefined sets which either partition phase space (as in TST) or are taken to be well-separated metastable sets corresponding to long-lived conformation states (as necessary to obtain the actual transition rate constants between these states). Exact and approximate criteria for the optimal TST dividing surface with minimum recrossing rate are derived. Some issues about the definition and meaning of the free energy in the context of TST are also discussed. Finally, precise error estimates for the numerical procedure to evaluate the transmission coefficient κS of the TST dividing surface are given, and it is shown that the relative error on κS scales as 1/√κS when κS is small. This implies that dynamical corrections to the TST rate constant can be computed efficiently if and only if the TST dividing surface has a transmission coefficient κS which is not too small. In particular, the TST dividing surface must be optimized upon (for otherwise κS is generally very small), but this may not be sufficient to make the procedure numerically efficient (because the optimal dividing surface has maximum κS, but this coefficient may still be very small).
Error correction in short time steps during the application of quantum gates
Castro, L.A. de, E-mail: leonardo.castro@usp.br; Napolitano, R.D.J.
2016-04-15
We propose a modification of the standard quantum error-correction method to enable the correction of errors that occur due to interaction with a noisy environment during quantum gates, without modifying the encoding used for memory qubits. Using a perturbative treatment of the noise that allows us to separate it from the ideal evolution of the quantum gate, we demonstrate that in certain cases it is necessary to divide the logical operation into short time steps interleaved with correction procedures. A prescription for how these gates can be constructed is provided, as well as a proof that, even in cases where dividing the quantum gate into short time steps is not necessary, this method may be advantageous for reducing the total duration of the computation.
Bogner, K.; Pappenberger, F.
2011-07-01
River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.
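The autoregressive flavour of error correction compared above can be sketched with an assumed AR(1) error model; the paper's wavelet-VARX machinery and HUP post-processing are not reproduced here:

```python
# Minimal AR(1) error-correction sketch (assumed model, not the paper's):
# model the forecast error e_t = phi * e_{t-1} + noise, fit phi on past
# errors by least squares, and shift the next raw forecast by the
# predicted error.
import numpy as np

def ar1_corrected_forecast(obs, raw_forecasts, next_raw):
    e = np.asarray(obs) - np.asarray(raw_forecasts)       # past forecast errors
    phi = np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])  # least-squares AR(1) fit
    return next_raw + phi * e[-1]                         # add the predicted error

obs = [10.0, 11.0, 12.5, 13.0]          # observed discharge
raw = [9.0, 10.2, 11.9, 12.5]           # raw forecasts, persistently low
print(ar1_corrected_forecast(obs, raw, next_raw=13.5))    # shifted upward
```

Because the recent errors are positive and correlated, the correction nudges the next forecast up toward the observations.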
ECHO: a reference-free short-read error correction algorithm.
Kao, Wei-Chun; Chan, Andrew H; Song, Yun S
2011-07-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth.
Detecting Positioning Errors and Estimating Correct Positions by Moving Window.
Song, Ha Yoon; Lee, Jun Seok
2015-01-01
In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research.
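A minimal sketch of the moving-window detection described above, assuming a simple mean ± k·stddev significant interval and hypothetical parameter values (the paper also estimates corrected positions, which is omitted here):

```python
# Hedged sketch: keep the last N accepted speed values; flag a new speed as
# erroneous if it falls outside mean +/- k * stddev of the window.  Flagged
# values are excluded so they do not pollute the window statistics.
from collections import deque
from statistics import mean, stdev

class SpeedFilter:
    def __init__(self, window: int = 5, k: float = 3.0):
        self.window, self.k = deque(maxlen=window), k

    def check(self, speed: float) -> bool:
        """Return True if `speed` looks valid; only valid values enter the window."""
        if len(self.window) >= 2:
            m, s = mean(self.window), stdev(self.window)
            if abs(speed - m) > self.k * s + 1e-9:   # outside significant interval
                return False
        self.window.append(speed)
        return True

f = SpeedFilter()
readings = [1.0, 1.2, 1.1, 1.3, 25.0, 1.2]           # 25.0 mimics a GPS glitch
flags = [f.check(v) for v in readings]
assert flags == [True, True, True, True, False, True]
```

Excluding rejected values from the window is the same design choice the paper makes to keep the statistics accurate after an error.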
Information-theoretic approach to quantum error correction and reversible measurement
Nielsen, M A; Schumacher, B; Barnum, H N; Caves, Carlton M.; Schumacher, Benjamin; Barnum, Howard
1997-01-01
Quantum operations provide a general description of the state changes allowed by quantum mechanics. The reversal of quantum operations is important for quantum error-correcting codes, teleportation, and reversing quantum measurements. We derive information-theoretic conditions and equivalent algebraic conditions that are necessary and sufficient for a general quantum operation to be reversible. We analyze the thermodynamic cost of error correction and show that error correction can be regarded as a kind of ``Maxwell demon,'' for which there is an entropy cost associated with information obtained from measurements performed during error correction. A prescription for thermodynamically efficient error correction is given.
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study addresses long-standing questions in the field of writing instruction concerning the most effective ways to give feedback on errors in students' writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
Yehezkel, Tuval Ben; Linshiz, Gregory; Kaplan, Shai; Gronau, Ilan; Ravid, Sivan; Adar, Rivka; Shapiro, Ehud
2011-01-01
Making error-free, custom DNA assemblies from potentially faulty building blocks is a fundamental challenge in synthetic biology. Here, we show how recursion can be used to address this challenge using a recursive procedure that constructs error-free DNA molecules and their libraries from error-prone synthetic oligonucleotides and naturally existing DNA. Specifically, we describe how divide and conquer (D&C), the quintessential recursive problem-solving technique, is applied in silico to divide target DNA sequences into overlapping, albeit error prone, oligonucleotides, and how recursive construction is applied in vitro to combine them to form error-prone DNA molecules. To correct DNA sequence errors, error-free fragments of these molecules are then identified, extracted, and used as new, typically longer and more accurate, inputs to another iteration of the recursive construction procedure; the entire process repeats until an error-free target molecule is formed. The method allows combining synthetic and natural DNA fragments into error-free designer DNA libraries, thus providing a foundation for the design and construction of complex synthetic DNA assemblies.
Cohen, Avi; Lange, Falk; Ben-Zvi, Guy; Graitzer, Erez; Vladimir, Dmitriev
2012-11-01
The ITRS roadmap specifies wafer overlay control as one of the major tasks for the sub-40 nm nodes, in addition to CD control and defect control. Wafer overlay is strongly dependent on mask image placement error (registration, or Reg, errors). The specifications for registration or mask placement accuracy are significantly tighter in some of the double patterning techniques (DPT). This puts a heavy challenge on mask manufacturers (mask shops) to comply with advanced-node registration specifications. The conventional methods of feeding back the systematic registration error to the E-beam writer and re-writing the mask are becoming difficult, expensive and insufficient for the advanced nodes, especially for double patterning technologies. Six production masks were measured on a standard registration metrology tool and the registration errors were calculated and plotted. A specially developed algorithm, along with the RegC Wizard (dedicated software), was used to compute a corrective lateral strain field that would minimize the registration errors. This strain field was then implemented in the photomask bulk material using an ultra-short-pulse laser based system. Finally, the post-process registration error maps were measured and the resulting residual registration error field, with and without scale and orthogonal error removal, was calculated. In this paper we present a robust process flow in the mask shop which leads to up to 32% registration 3σ improvement, bringing some out-of-spec masks into spec, utilizing the RegC® process in the photomask periphery while leaving the exposure field optically unaffected.
Repeated quantum error correction on a continuously encoded qubit by real-time feedback.
Cramer, J; Kalb, N; Rol, M A; Hensen, B; Blok, M S; Markham, M; Twitchen, D J; Hanson, R; Taminiau, T H
2016-05-05
Reliable quantum information processing in the face of errors is a major fundamental and technological challenge. Quantum error correction protects quantum states by encoding a logical quantum bit (qubit) in multiple physical qubits. To be compatible with universal fault-tolerant computations, it is essential that states remain encoded at all times and that errors are actively corrected. Here we demonstrate such active error correction on a continuously protected logical qubit using a diamond quantum processor. We encode the logical qubit in three long-lived nuclear spins, repeatedly detect phase errors by non-destructive measurements, and apply corrections by real-time feedback. The actively error-corrected qubit is robust against errors and encoded quantum superposition states are preserved beyond the natural dephasing time of the best physical qubit in the encoding. These results establish a powerful platform to investigate error correction under different types of noise and mark an important step towards fault-tolerant quantum information processing.
Multi-bit upset aware hybrid error-correction for cache in embedded processors
Jiaqi, Dong; Keni, Qiu; Weigong, Zhang; Jing, Wang; Zhenzhen, Wang; Lihua, Ding
2015-11-01
For a processor working in the radiation environment of space, circuits tend to suffer single-event effects and system failures due to cosmic rays and high-energy particle radiation. The reliability of the processor has therefore become an increasingly serious issue. BCH-based error correction codes can correct multi-bit errors, but they introduce large latency overhead. This paper proposes a hybrid error correction approach that combines BCH and EDAC to correct both multi-bit and single-bit errors in caches at low cost. The proposed technique can correct up to four-bit errors, and corrects a single-bit error in one cycle. Evaluation results show that the proposed hybrid error-correction scheme can improve the performance of cache accesses by up to 20% compared to the pure BCH scheme.
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING
Ketut Santi Indriani
2015-05-01
The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will inevitably appear in the learning process; however, they can be used as a step to accelerate the process of understanding the language. Self-correction (with or without cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct, and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure, and pronunciation. L2 speakers tend to correct errors properly. Of the 78% of identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. It was also found that self-correction is able to improve L2 learning ability directly, as evidenced by the absence of repetition of the same error after the error had been corrected.
Correcting sequencing errors in DNA coding regions using a dynamic programming approach.
Xu, Y; Mural, R J; Uberbacher, E C
1995-04-01
This paper presents an algorithm for detecting and 'correcting' sequencing errors that occur in DNA coding regions. The types of sequencing errors addressed are insertions and deletions (indels) of DNA bases. The goal is to provide a capability which makes single-pass or low-redundancy sequence data more informative, reducing the need for high-redundancy sequencing for gene identification and characterization purposes. This would permit improved sequencing efficiency and reduce genome sequencing costs. The algorithm detects sequencing errors by discovering changes in the statistically preferred reading frame within a putative coding region and then inserts a number of 'neutral' bases at a perceived reading frame transition point to make the putative exon candidate frame consistent. We have implemented the algorithm as a front-end subsystem of the GRAIL DNA sequence analysis system to construct a version which is very error tolerant, and intend to use this as a testbed for further development of sequencing error-correction technology. Preliminary test results have shown the usefulness of this algorithm and also exhibited some of its weaknesses, providing possible directions for further improvement. On a test set consisting of 68 human DNA sequences with 1% randomly generated indels in coding regions, the algorithm detected and corrected 76% of the indels. The average distance between the position of an indel and the predicted one was 9.4 bases. With this subsystem in place, GRAIL correctly predicted 89% of the coding messages with 10% false messages on the 'corrected' sequences, compared to 69% correctly predicted coding messages and 11% falsely predicted messages on the 'corrupted' sequences using the standard GRAIL II method (version 1.2).
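The frame-transition repair step described above can be sketched as follows. The `frame_calls` input is a hypothetical stand-in for GRAIL's statistical reading-frame preference, supplied here directly rather than derived from coding-potential statistics:

```python
def correct_frameshift(seq, frame_calls):
    """Insert neutral 'N' bases at the detected reading-frame transition
    so the downstream region returns to frame. frame_calls[i] is the
    statistically preferred frame (0, 1 or 2) around position i."""
    for i in range(1, len(frame_calls)):
        if frame_calls[i] != frame_calls[i - 1]:
            # a deletion of d bases shifts the downstream frame by -d
            # (mod 3); inserting that many neutral bases cancels it
            shift = (frame_calls[i - 1] - frame_calls[i]) % 3
            return seq[:i] + "N" * shift + seq[i:]
    return seq  # no transition: candidate region is frame-consistent

# toy example: a single deleted base shifts the preferred frame 0 -> 2
print(correct_frameshift("ATGGCAGTTTCA", [0] * 6 + [2] * 6))  # ATGGCANGTTTCA
```

The inserted 'N' bases are deliberately neutral: they restore codon phase for downstream analysis without asserting any particular nucleotide.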
A. Lipponen
2013-04-01
In atmospheric models, owing to computational time or resource limitations, physical processes have to be simulated using reduced models. The use of a reduced model, however, induces errors in the simulation results. These errors are referred to as approximation errors. In this paper, we propose a novel approach to correct these approximation errors. We model the approximation error as an additive noise process in the simulation model and employ the Random Forest (RF) regression algorithm to construct a computationally low-cost predictor for the approximation error. In this way, the overall simulation problem is decomposed into two separate and computationally efficient problems: solution of the reduced model and prediction of the approximation error realization. The approach is tested for handling approximation errors due to a reduced coarse sectional representation of aerosol size distribution in a cloud droplet activation calculation. The results show a significant improvement in the accuracy of the simulation compared to the conventional simulation with a reduced model. The proposed approach is rather general, and extending it to different parameterizations or reduced process models coupled to geoscientific models is straightforward. Another major benefit of this method is that it can be applied to physical processes that depend on a large number of variables, making them difficult to parameterize by traditional methods.
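The decomposition into a reduced model plus a learned error predictor can be sketched as below. A 1-nearest-neighbour lookup stands in for the paper's Random Forest regressor so the sketch needs only the standard library, and the toy "full" and "reduced" models are invented for illustration:

```python
def train_error_predictor(calib_inputs, calib_errors):
    """Learn a predictor of the approximation error (full model minus
    reduced model) from calibration runs of both models. A 1-nearest-
    neighbour lookup stands in for the Random Forest regressor."""
    def predict(x):
        j = min(range(len(calib_inputs)),
                key=lambda i: abs(calib_inputs[i] - x))
        return calib_errors[j]
    return predict

def corrected_model(reduced, error_predictor):
    """Overall simulation = cheap reduced model + predicted error."""
    return lambda x: reduced(x) + error_predictor(x)

# toy models (illustrative): full(x) = x**2, reduced = its tangent at x = 1
full = lambda x: x * x
reduced = lambda x: 2.0 * x - 1.0
calib = [0.0, 0.5, 1.0, 1.5, 2.0]
predictor = train_error_predictor(calib, [full(c) - reduced(c) for c in calib])
model = corrected_model(reduced, predictor)
print(model(1.5), full(1.5))  # both 2.25 at a calibration point
```

The key design point survives the simplification: the expensive model is evaluated only during calibration, while production runs pay only for the reduced model plus a cheap regression lookup.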
Chung, Byunghoon; Lee, Hun; Choi, Bong Joon; Seo, Kyung Ryul; Kim, Eung Kwon; Kim, Dae Yune
2017-01-01
Purpose The purpose of this study was to investigate the clinical efficacy of an optimized prolate ablation procedure for correcting residual refractive errors following laser surgery. Methods We analyzed 24 eyes of 15 patients who underwent an optimized prolate ablation procedure for the correction of residual refractive errors following laser in situ keratomileusis, laser-assisted subepithelial keratectomy, or photorefractive keratectomy surgeries. Preoperative ophthalmic examinations were performed, and uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction values (sphere, cylinder, and spherical equivalent), point spread function, modulation transfer function, corneal asphericity (Q value), ocular aberrations, and corneal haze measurements were obtained postoperatively at 1, 3, and 6 months. Results Uncorrected distance visual acuity improved and refractive errors decreased significantly at 1, 3, and 6 months postoperatively. Total coma aberration increased at 3 and 6 months postoperatively, while changes in all other aberrations were not statistically significant. Similarly, no significant changes in point spread function were detected, but modulation transfer function increased significantly at the postoperative time points measured. Conclusions The optimized prolate ablation procedure was effective in terms of improving visual acuity and objective visual performance for the correction of persistent refractive errors following laser surgery. PMID:28243019
Lize, Yannick K; Christen, Louis; Nazarathy, Moshe; Nuccio, Scott; Wu, Xiaoxia; Willner, Alan E; Kashyap, Raman
2007-05-28
We present an optical multipath error correction technique for differentially encoded modulation formats such as differential-phase-shift-keying (DPSK) and differential polarization shift keying (DPolSK) for fiber-based and free-space communication. This multipath error correction method combines optical and electronic logic gates. The scheme can easily be implemented using commercially available interferometers and high-speed logic gates and does not require any data overhead, therefore it does not affect the effective bandwidth of the transmitted data. It is not merely compatible but also complementary to error correction codes commonly used in optical transmission systems such as forward-error-correction (FEC). The technique consists of separating the demodulation at the receiver into multiple paths. Each path consists of a Mach-Zehnder interferometer with a different integer bit delay. Some basic logic operations follow, and the three paths are compared using a simple majority vote algorithm. Experimental results show that the scheme improves receiver sensitivity by 1.5 dB at a BER of 10^-3 in back-to-back configuration. Numerical results indicate a 1.6 dB improvement in the presence of chromatic dispersion, with a 25% increase in tolerance for a 3 dB penalty, from +/-1220 ps/nm to +/-1520 ps/nm, and a 0.35 dB improvement for back-to-back operation.
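At the bit level, the comparison of the three demodulation paths reduces to a 2-of-3 majority vote. A minimal sketch, with the paths modelled as already-demodulated bit streams rather than interferometer outputs:

```python
def majority_vote(path_a, path_b, path_c):
    """Bitwise 2-of-3 majority over three demodulation paths. In the
    optical scheme each path comes from a Mach-Zehnder interferometer
    with a different integer bit delay; here the paths are modelled as
    already-demodulated bit streams."""
    return [(a & b) | (a & c) | (b & c)
            for a, b, c in zip(path_a, path_b, path_c)]

# a single-path error in the middle bit is voted out
print(majority_vote([1, 0, 1], [1, 1, 1], [1, 1, 1]))  # [1, 1, 1]
```

Because the vote needs no extra transmitted symbols, it adds no data overhead, which is the property that makes it complementary to FEC rather than a substitute for it.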
Practical retrace error correction in non-null aspheric testing: A comparison
Shi, Tu; Liu, Dong; Zhou, Yuhao; Yan, Tianliang; Yang, Yongying; Zhang, Lei; Bai, Jian; Shen, Yibing; Miao, Liang; Huang, Wei
2017-01-01
In non-null aspheric testing, retrace error is the primary error source, making it hard to recognize the desired figure error in the aliased interferograms. Careful retrace error correction is therefore essential, bearing directly on the testing results. The performance of three methods commonly employed in practice, i.e. the GDI (geometrical deviation based on interferometry) method, the TRW (theoretical reference wavefront) method and the ROR (reverse optimization reconstruction) method, is compared with numerical simulations and experiments. The dynamic ranges of these methods are determined and applications are recommended. It is proposed that with an aspherical reference wavefront, the dynamic range can be further enlarged. Results show that the dynamic range of the GDI method is small, that of the TRW method can be enlarged with an aspherical reference wavefront, and the ROR method achieves the largest dynamic range with the highest accuracy. It is recommended that the GDI and TRW methods be applied to apertures with small figure error and small asphericity, and the ROR method to commercial and research applications calling for high accuracy and large dynamic range.
Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco
2014-06-11
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field, towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems in view of lower railway track equipments and maintenance costs, that is a priority to sustain the investments for modernizing the local and regional lines most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements are simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
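The LMMSE estimator with a Gauss-Markov spatial model can be sketched for the two-station case. The variances, noise level, and correlation distance below are illustrative assumptions, not values from the paper:

```python
import math

def lmmse_two_stations(y1, y2, sigma2, noise2, dist, corr_dist):
    """LMMSE estimate of the true differential corrections at two
    reference stations from noisy measurements (y1, y2), assuming a
    zero-mean Gauss-Markov spatial model:
    cov(x1, x2) = sigma2 * exp(-dist / corr_dist).
    x_hat = Cxx (Cxx + noise2*I)^-1 y, written out for the 2x2 case."""
    rho = math.exp(-dist / corr_dist)
    a, b = sigma2 + noise2, sigma2 * rho      # Cyy = Cxx + noise2*I
    det = a * a - b * b
    i11, i12 = a / det, -b / det              # inverse of Cyy
    t1 = i11 * y1 + i12 * y2
    t2 = i12 * y1 + i11 * y2
    return (sigma2 * t1 + sigma2 * rho * t2,  # Cxx @ inv(Cyy) @ y
            sigma2 * rho * t1 + sigma2 * t2)
```

The paper's sensitivity question shows up directly here: the estimator's benefit depends on `rho`, so a mismatch between the assumed and true correlation distance changes the gain applied to each measurement.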
Using ridge regression in systematic pointing error corrections
Guiar, C. N.
1988-01-01
A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
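The ridge estimator itself is a one-line modification of least squares. A minimal sketch for a two-parameter model, with the regularization term `lam` added to the normal equations:

```python
def ridge_fit(X, y, lam):
    """Ridge estimate beta = (X^T X + lam*I)^-1 X^T y, written out for
    a two-parameter pointing-error model. lam = 0 recovers ordinary
    least squares; lam > 0 trades a little bias for stability when the
    regression columns are nearly collinear (the multicollinearity
    problem described above)."""
    s11 = sum(r[0] * r[0] for r in X) + lam
    s22 = sum(r[1] * r[1] for r in X) + lam
    s12 = sum(r[0] * r[1] for r in X)
    t1 = sum(r[0] * v for r, v in zip(X, y))
    t2 = sum(r[1] * v for r, v in zip(X, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

The choice of `lam` is the judgment call the abstract alludes to: too small and multicollinearity inflates the parameter variances, too large and the shrinkage bias dominates.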
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
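The SIMEX idea (re-add noise at increasing multiples of the known error variance, track how the estimate decays, and extrapolate back to the no-error case) can be sketched for a simple slope attenuated by covariate measurement error. A linear extrapolant is used for brevity, where quadratic extrapolation is more common in practice:

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def simex_slope(x_obs, y, noise_var, lambdas=(0.5, 1.0, 1.5, 2.0),
                n_sims=500, seed=0):
    """SIMEX: simulate extra measurement error of variance lam*noise_var
    on the error-prone covariate, average the resulting slopes at each
    lam, then extrapolate the trend back to lam = -1 (error-free)."""
    rng = random.Random(seed)
    pts = [(0.0, ols_slope(x_obs, y))]
    for lam in lambdas:
        sims = []
        for _ in range(n_sims):
            noisy = [v + rng.gauss(0.0, (lam * noise_var) ** 0.5)
                     for v in x_obs]
            sims.append(ols_slope(noisy, y))
        pts.append((lam, sum(sims) / len(sims)))
    ls, bs = [p[0] for p in pts], [p[1] for p in pts]
    b1 = ols_slope(ls, bs)                    # trend of slope vs lam
    b0 = sum(bs) / len(bs) - b1 * sum(ls) / len(ls)
    return b0 - b1                            # value at lam = -1
```

In the MSM setting the same simulate-and-extrapolate loop is applied either to the outcome model parameters (the direct approach) or to the inverse probability weights (the indirect approach).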
Disorder-Assisted Error Correction in Majorana Chains
Bravyi, Sergey; König, Robert
2012-12-01
It was recently realized that quenched disorder may enhance the reliability of topological qubits by reducing the mobility of anyons at zero temperature. Here we compute storage times with and without disorder for quantum chains with unpaired Majorana fermions — the simplest toy model of a quantum memory. Disorder takes the form of a random site-dependent chemical potential. The corresponding one-particle problem is a one-dimensional Anderson model with disorder in the hopping amplitudes. We focus on the zero-temperature storage of a qubit encoded in the ground state of the Majorana chain. Storage and retrieval are modeled by a unitary evolution under the memory Hamiltonian with an unknown weak perturbation followed by an error-correction step. Assuming dynamical localization of the one-particle problem, we show that the storage time grows exponentially with the system size. We give supporting evidence for the required localization property by estimating Lyapunov exponents of the one-particle eigenfunctions. We also simulate the storage process for chains with a few hundred sites. Our numerical results indicate that in the absence of disorder, the storage time grows only as a logarithm of the system size. We provide numerical evidence for the beneficial effect of disorder on storage times and show that suitably chosen pseudorandom potentials can outperform random ones.
Ancient DNA sequence revealed by error-correcting codes.
Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo
2015-07-10
A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.
Quantum error correction against photon loss using NOON states
Bergmann, Marcel; van Loock, Peter
2016-07-01
The so-called NOON states are quantum optical resources known to be useful especially for quantum lithography and metrology. At the same time, they are known to be very sensitive to photon losses and rather hard to produce experimentally. Concerning the former, here we present a scheme where NOON states are the elementary resources for building quantum error-correction codes against photon losses, thus demonstrating that such resources can also be useful to suppress the effect of loss. Our NOON code is an exact code that can be systematically extended from one-photon to higher-number losses. Its loss scaling depending on the codeword photon number is the same as for some existing, exact loss codes such as bosonic and quantum parity codes, but its codeword mode number is intermediate between that of the other codes. Another generalization of the NOON code is given for arbitrary logical qudits instead of logical qubits. While, in general, the final codewords are always obtainable from multimode NOON states through application of beam splitters, both codewords for the one-photon-loss qubit NOON code can be simply created from single-photon states with beam splitters. We give various examples and also discuss a potential application of our qudit code for quantum communication.
19 CFR 173.4 - Correction of clerical error, mistake of fact, or inadvertence.
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Correction of clerical error, mistake of fact, or... clerical error, mistake of fact, or inadvertence. (a) Authority to review and correct. Even though a valid...)(1), Tariff Act of 1930, as amended (19 U.S.C. 1520(c)(1), a clerical error, mistake of fact,...
Error Correction in the L2 Writing Classroom: What Do Students Think?
Lee, Icy
2005-01-01
Error correction research has focused mostly on teachers' strategies and their effects on student writing. Much less has been done to find out about students' beliefs and attitudes about teachers' feedback on errors. This study aimed to investigate L2 students' perceptions, beliefs, and attitudes about error correction in the writing classroom.…
How EFL Students Can Use Google to Correct Their "Untreatable" Written Errors
Geiller, Luc
2014-01-01
This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several "untreatable" written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback…
Adamek, Jiri
1991-01-01
Although devoted to constructions of good codes for error control, secrecy, or data compression, the emphasis is on the first direction. The book introduces a number of important classes of error-detecting and error-correcting codes as well as their decoding methods. Background material on modern algebra is presented where required. The role of error-correcting codes in modern cryptography is treated, as are data compression and other topics related to information theory. The definition-theorem-proof style used in mathematics texts is employed throughout the book, but formalism is avoided wherever possible.
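As a concrete instance of the error-correcting codes such a text introduces, a Hamming(7,4) encoder and single-error corrector fits in a few lines:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a Hamming(7,4) codeword, a classic
    single-error-correcting code. Bit order: p1 p2 d1 p3 d2 d3 d4,
    with parity bits at (1-based) positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Compute the syndrome; a nonzero syndrome is the 1-based
    position of the single flipped bit, which is then corrected."""
    c = list(c)
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6]) * 1
         + (c[1] ^ c[2] ^ c[5] ^ c[6]) * 2
         + (c[3] ^ c[4] ^ c[5] ^ c[6]) * 4)
    if s:
        c[s - 1] ^= 1
    return c
```

The syndrome-as-address trick is what makes the decoder constant time: the three parity checks spell out the binary index of the corrupted bit.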
Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome.
Goodwin, Sara; Gurtowski, James; Ethe-Sayers, Scott; Deshpande, Panchajanya; Schatz, Michael C; McCombie, W Richard
2015-11-01
Monitoring the progress of DNA molecules through a membrane pore has been postulated as a method for sequencing DNA for several decades. Recently, a nanopore-based sequencing instrument, the Oxford Nanopore MinION, has become available, and we used this for sequencing the Saccharomyces cerevisiae genome. To make use of these data, we developed a novel open-source hybrid error correction algorithm, Nanocorr, specifically for Oxford Nanopore reads, because existing packages were incapable of assembling the long read lengths (5-50 kbp) at such high error rates (between ∼5% and 40% error). With this new method, we were able to perform a hybrid error correction of the nanopore reads using complementary MiSeq data and produce a de novo assembly that is highly contiguous and accurate: the contig N50 length is more than ten times greater than that of an Illumina-only assembly (678 kbp versus 59.9 kbp), with >99.88% consensus identity when compared to the reference. Furthermore, the assembly with the long nanopore reads presents a much more complete representation of the features of the genome and correctly assembles gene cassettes, rRNAs, transposable elements, and other genomic features that were almost entirely absent in the Illumina-only assembly.
Ma, Xiaoli; Chen, Yutong; Liu, Xianjie; Ning, Hong
2016-01-01
Background: Identifying and assessing retinal nerve fiber layer defects is important for diagnosing and managing glaucoma. We aimed to investigate the effect of refractive correction error on retinal nerve fiber layer (RNFL) thickness measured with Spectralis spectral-domain optical coherence tomography (SD-OCT). Material/Methods: We included 68 participants: 32 healthy (normal) and 36 glaucoma patients. RNFL thickness was measured using the Spectralis SD-OCT circular scan. Measurements were made with a refractive correction of the spherical equivalent (SE), the SE+2.00D and the SE−2.00D. Results: Average RNFL thickness was significantly higher in the normal group (105.88±10.47 μm) than in the glaucoma group (67.67±17.27 μm). A +2.00D refractive correction error significantly increased average RNFL thickness measurements compared with full refractive correction, whereas −2.00D of refractive correction error did not significantly affect RNFL thickness measurements in either group. Conclusions: Positive defocus error significantly affects RNFL thickness measurements made by the Spectralis SD-OCT; negative defocus error did not affect the RNFL measurements examined. Careful correction of refractive error is necessary to obtain accurate baseline and follow-up RNFL thickness measurements in healthy and glaucomatous eyes. PMID:28030536
A two-dimensional matrix correction for off-axis portal dose prediction errors
Bailey, Daniel W. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Kumaraswamy, Lalith; Bakhtiari, Mohammad [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Podgorsak, Matthew B. [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)
2013-05-15
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction on the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
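The core of the 2D matrix correction (an element-wise comparison of measured and predicted calibration images, then applied to each subsequent calibrated image) can be sketched as follows; the toy images below are illustrative numbers, not portal-dose data:

```python
def build_correction_matrix(measured, predicted):
    """Element-wise ratio of measured to predicted portal dose over
    images spanning the detecting surface: a toy stand-in for the
    paper's calibration comparison step."""
    return [[m / p for m, p in zip(mr, pr)]
            for mr, pr in zip(measured, predicted)]

def apply_correction(image, matrix):
    """Apply the 2D correction element-wise to a calibrated image."""
    return [[v * c for v, c in zip(ir, cr)]
            for ir, cr in zip(image, matrix)]

# toy 2x2 "images": the predicted image over-reads off-axis
measured = [[1.0, 0.9], [0.8, 1.1]]
predicted = [[1.0, 1.0], [1.0, 1.0]]
cm = build_correction_matrix(measured, predicted)
```

Unlike a radial correction, each detector element gets its own factor, which is why the matrix handles the regions that break the radial-symmetry assumption.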
Post-Editing Error Correction Algorithm for Speech Recognition using Bing Spelling Suggestion
Bassil, Youssef
2012-01-01
ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing's online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing's spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in differen...
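The token-by-token post-editing loop can be sketched with the suggestion service abstracted behind a callable. The `table` dictionary below is a hypothetical stand-in for the Bing spelling-suggestion response, not its real API:

```python
def post_edit(asr_text, suggest):
    """Token-by-token post-editing of ASR output: each recognized word
    is submitted to a spelling-suggestion service; a returned
    suggestion replaces the token, otherwise the token is kept."""
    return " ".join(suggest(tok) or tok for tok in asr_text.split())

# hypothetical suggestion table standing in for the web service:
# maps a misrecognized word to a correction, or None if none is needed
table = {"recieve": "receive", "speach": "speech"}.get
print(post_edit("we recieve the speach signal", table))
# we receive the speech signal
```

In the real system each token becomes a search query, and the presence or absence of a returned suggestion plays the role of the `None` check above.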
Energy efficiency of error correcting mechanisms for wireless communications
Havinga, Paul J.M.
We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inherent in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal
Secure deterministic communication in a quantum loss channel using quantum error correction code
Wu Shuang; Liang Lin-Mei; Li Cheng-Zu
2007-01-01
The loss of a quantum channel leads to an irretrievable particle loss as well as information. In this paper, the loss of quantum channel is analysed and a method is put forward to recover the particle and information loss effectively using universal quantum error correction. Then a secure direct communication scheme is proposed, such that in a loss channel the information that an eavesdropper can obtain would be limited to arbitrarily small when the code is properly chosen and the correction operation is properly arranged.
Lauve, A D; Siebers, J V; Crimaldi, A J; Hagan, M P; Kealla, P J
2006-06-01
Traditionally, patient-positioning errors detected before treatment have been corrected by repositioning the couch to align the patient to the treatment beam. We investigated an alternative strategy: aligning the beam to the patient by repositioning the dynamic multileaf collimator and adjusting the beam weights, termed dynamic compensation. The purpose of this study was to determine the geometric range of positioning errors for which the dynamic compensation method is valid in prostate cancer patients treated with three-dimensional conformal radiotherapy. Twenty-five previously treated prostate cancer patients were replanned using a four-field technique to deliver 72 Gy to 95% of the planning target volume (PTV). Patient-positioning errors were introduced by shifting the patient reference frame with respect to the treatment isocenter. Thirty-six randomly selected isotropic displacements with magnitudes of 1.0, 2.0, 4.0, 6.0, 8.0, and 10.0 cm were sampled for each patient, for a total of 5400 errors. Dynamic compensation was used to correct each of these errors by conforming the beam apertures to the new target position and adjusting the monitor units using inverse-square and off-axis factor corrections. The dynamic compensation plans were then compared with the original treatment plans via dose-volume histogram (DVH) analysis. Changes of more than 5% of the prescription dose, 3.6 Gy, were deemed significant. Compared with the original treatment plans, dynamic compensation produced small discrepancies in isodose distributions and DVH analyses. These differences increased with the magnitudes of the initial patient-positioning errors. Coverage of the PTV was excellent: D95 and Dmean were not increased or decreased by more than 5% of the prescription dose, and D5 was not decreased by more than 5% of the prescription dose for any of the 5400 simulated positioning errors. D5 was increased by more than 5% of the prescription dose in only three of the 5400 positioning errors.
Studies on National Preparatory Students' English Oral Errors and Corrections
李媛媛
2014-01-01
This paper, based on theory and teaching practice, presents a tentative analysis of English oral errors commonly made by university national preparatory students. First, I analyze the causes of oral errors, then review teachers' different attitudes toward oral errors, and finally propose some main principles, factors, and possible strategies for oral error correction.
5 CFR 1605.16 - Claims for correction of employing agency errors; time limitations.
2010-01-01
... of employing agency errors; time limitations. (a) Agency's discovery of error. Upon discovery of an... it, but, in any event, the agency must act promptly in doing so. (b) Participant's discovery of error. If an agency fails to discover an error of which a participant has knowledge involving the correct...
Reed-Solomon error-correction as a software patch mechanism.
Pendley, Kevin D.
2013-11-01
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
Error correcting code with chip kill capability and power saving enhancement
Gara, Alan G.; Chen, Dong; Coteus, Paul W.; Flynn, William T.; Marcella, James A.; Takken, Todd; Trager, Barry M.; Winograd, Shmuel
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions are computed, and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
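The check-symbol/syndrome scheme described above can be illustrated with a toy single-symbol-correcting code over GF(2^8): two check symbols are generated from the data, and on readback the recomputed syndromes either vanish (no error) or locate and repair one bad symbol. This is a minimal sketch of the general technique, not the patented code, and it assumes the error hits a data symbol rather than a check symbol.

```python
# Build exp/log tables for GF(2^8) with primitive polynomial 0x11d.
GF_EXP = [0] * 512
GF_LOG = [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i] = _x
    GF_LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11D
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    """Multiply in GF(2^8) via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def checks(data):
    """Two check symbols: S0 = sum of d_i, S1 = sum of alpha^i * d_i."""
    s0 = s1 = 0
    for i, d in enumerate(data):
        s0 ^= d
        s1 ^= gf_mul(d, GF_EXP[i])
    return s0, s1

def correct_single(data, c0, c1):
    """Recompute syndromes against the stored checks; zero syndromes mean
    the data is clean, otherwise S1/S0 locates the single bad symbol and
    S0 is its error value.  (Toy: assumes at most one data-symbol error.)"""
    s0, s1 = checks(data)
    s0 ^= c0
    s1 ^= c1
    if s0 == 0 and s1 == 0:
        return data
    loc = (GF_LOG[s1] - GF_LOG[s0]) % 255  # error position
    fixed = list(data)
    fixed[loc] ^= s0                        # XOR out the error value
    return fixed
```

A usage sketch: compute `checks` when writing, corrupt one byte, and `correct_single` restores it.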
Results of error correction techniques applied on two high accuracy coordinate measuring machines
Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))
1990-01-01
The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and made no assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, which assumes rigid-body motion of the machine, was developed during the early 1980s. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M-48. In this report we present early mapping data for the two M-48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.
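Under the rigid-body assumption, software correction of this kind reduces to subtracting interpolated per-axis error maps from each measured point; the contributions superpose independently. The sketch below is a deliberately simplified illustration (rotational Abbe-offset terms of a full 21-parameter model are omitted), and the `error_map` layout is an invented convention, not the NIST format.

```python
import numpy as np

def correct_point(p, error_map):
    """Subtract mapped geometric errors from a measured point p = (x, y, z).
    error_map[axis] holds (positions, err_xyz): error vectors measured at
    sample positions along that axis during the mapping run.  Rigid-body
    assumption: the per-axis error contributions simply superpose."""
    corrected = np.asarray(p, dtype=float).copy()
    for axis, (positions, err_xyz) in enumerate(error_map):
        for j in range(3):  # interpolate each error component along this axis
            corrected[j] -= np.interp(p[axis], positions, err_xyz[:, j])
    return corrected
```

For example, a 100 ppm scale error on the x axis shifts a point measured at x = 10 back by 1 µm-scale amounts while leaving y and z untouched.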
Local Influence on the Error-Correction Variable in a Cointegrated System
[Anonymous]
2001-01-01
The concept of cointegration describes an equilibrium relationship among a set of time-varying variables, and the cointegrated relationship can be represented through an error-correction model (ECM). The error-correction variable, which represents the short-run discrepancy from the equilibrium state in a cointegrated system, plays an important role in the ECM. It is natural to ask how the error-correction mechanism works, or equivalently, how the short-run discrepancy affects the development of the cointegrated system. This paper examines the local influence of the error-correction variable in an error-correction model. Following the second-order approach to local influence suggested by reference [5], we develop a diagnostic statistic to examine the local influence on the estimation of the parameter associated with the error-correction variable in an ECM. An empirical example is presented to illustrate the application of the proposed diagnostic. We find that the short-run discrepancy may have a strong influence on the estimation of the parameter associated with the error-correction variable; it is through the error-correction variable that short-run discrepancies are incorporated into the system by the error-correction mechanism.
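The role of the error-correction variable can be made concrete with a textbook two-step (Engle-Granger style) estimation on simulated data: the residual of the cointegrating regression is the error-correction variable, and its coefficient in the differenced regression is the speed of adjustment. This is a generic sketch on synthetic data, not the diagnostic procedure of the paper; all names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, alpha_true = 2000, 2.0, -0.5

# Simulate a cointegrated pair: x is a random walk and y = beta*x + u with
# an AR(1) disequilibrium u, so the true error-correction coefficient
# (speed of adjustment) is alpha_true = -0.5.
x = np.cumsum(rng.normal(size=n))
u = np.zeros(n)
for t in range(1, n):
    u[t] = (1 + alpha_true) * u[t - 1] + rng.normal()
y = beta * x + u

# Step 1: estimate the cointegrating relation by OLS; the residual z is
# the error-correction variable (short-run discrepancy from equilibrium).
b_hat = np.linalg.lstsq(np.column_stack([x, np.ones(n)]), y, rcond=None)[0]
z = y - b_hat[0] * x - b_hat[1]

# Step 2: regress dy_t on z_{t-1} and dx_t; the coefficient on z_{t-1}
# is the estimated error-correction parameter.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([z[:-1], dx, np.ones(n - 1)])
alpha_hat = np.linalg.lstsq(X, dy, rcond=None)[0][0]
```

With this data-generating process, `alpha_hat` should recover a value close to -0.5, showing how the lagged discrepancy pulls the system back toward equilibrium.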
王雪珍
2009-01-01
Learning English as a foreign language is a step-by-step process, during which Chinese students will inevitably make some errors. It is important for teachers to know when and how to correct students' errors. By employing error correction skillfully and appropriately, we can develop the students' self-confidence and self-esteem. This paper mainly discusses interaction and collaborative learning in task-based learning, which has been proved to be scientific and effective and is advocated in English learning and teaching. Accordingly, self-correction with the teacher's help and peer correction are the most effective ways for students' error correction.
Iterative error correction of long sequencing reads maximizes accuracy and improves contig assembly.
Sameith, Katrin; Roscito, Juliana G; Hiller, Michael
2017-01-01
Next-generation sequencers such as Illumina can now produce reads up to 300 bp with high throughput, which is attractive for genome assembly. A first step in genome assembly is to computationally correct sequencing errors. However, correcting all errors in these longer reads is challenging. Here, we show that reads with remaining errors after correction often overlap repeats, where short erroneous k-mers occur in other copies of the repeat. We developed an iterative error correction pipeline that runs the previously published String Graph Assembler (SGA) in multiple rounds of k-mer-based correction with an increasing k-mer size, followed by a final round of overlap-based correction. By combining the advantages of small and large k-mers, this approach corrects more errors in repeats and minimizes the total amount of erroneous reads. We show that higher read accuracy increases contig lengths two to three times. We provide SGA-Iteratively Correcting Errors (https://github.com/hillerlab/IterativeErrorCorrection/) that implements iterative error correction by using modules from SGA.
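The core idea of k-mer-based correction with increasing k can be sketched in a few lines: k-mers seen at least a minimum number of times are "solid", and a read containing non-solid k-mers is repaired by the single-base substitution that makes all of its k-mers solid, with the process repeated for larger k. This toy is far simpler than SGA's FM-index machinery; the function names and thresholds are illustrative only.

```python
from collections import Counter
from itertools import product

def kmers(seq, k):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def correct_reads(reads, ks=(5, 7), min_count=2):
    """One correction round per k, smallest k first (mirroring the
    increasing-k idea): k-mers with count >= min_count are 'solid', and a
    read with non-solid k-mers is repaired by the first single-base
    substitution that makes every one of its k-mers solid."""
    for k in ks:
        counts = Counter(km for r in reads for km in kmers(r, k))
        solid = {km for km, c in counts.items() if c >= min_count}
        fixed = []
        for r in reads:
            if not all(km in solid for km in kmers(r, k)):
                for i, base in product(range(len(r)), "ACGT"):
                    if base == r[i]:
                        continue
                    cand = r[:i] + base + r[i + 1:]
                    if all(km in solid for km in kmers(cand, k)):
                        r = cand
                        break
            fixed.append(r)
        reads = fixed
    return reads
```

On ten exact copies of a sequence plus one copy carrying a single substitution, the erroneous k-mers occur only once, fall below the threshold, and the read is restored.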
Depth Errors Analysis and Correction for Time-of-Flight (ToF) Cameras.
He, Ying; Liang, Bin; Zou, Yu; He, Jin; Yang, Jun
2017-01-05
Time-of-Flight (ToF) cameras, a technology which has developed rapidly in recent years, are 3D imaging sensors providing a depth image as well as an amplitude image with a high frame rate. As a ToF camera is limited by the imaging conditions and external environment, its captured data are always subject to certain errors. This paper analyzes the influence of typical external distractions including material, color, distance, lighting, etc. on the depth error of ToF cameras. Our experiments indicated that factors such as lighting, color, material, and distance could cause different influences on the depth error of ToF cameras. However, since the forms of errors are uncertain, it's difficult to summarize them in a unified law. To further improve the measurement accuracy, this paper proposes an error correction method based on Particle Filter-Support Vector Machine (PF-SVM). Moreover, the experiment results showed that this method can effectively reduce the depth error of ToF cameras to 4.6 mm within its full measurement range (0.5-5 m).
Aita, Takuyo; Ichihashi, Norikazu; Yomo, Tetsuya
2013-12-01
To analyze the evolutionary dynamics of a mutant population in an evolutionary experiment, it is necessary to sequence a vast number of mutants by high-throughput (next-generation) sequencing technologies, which enable rapid and parallel analysis of multikilobase sequences. However, the observed sequences include many errors of base call. Therefore, if next-generation sequencing is applied to analysis of a heterogeneous population of various mutant sequences, it is necessary to discriminate between true bases as point mutations and errors of base call in the observed sequences, and to subject the sequences to error-correction processes. To address this issue, we have developed a novel method of error correction based on the Potts model and a maximum a posteriori probability (MAP) estimate of its parameters corresponding to the "true sequences". Our method of error correction utilizes (1) the "quality scores" which are assigned to individual bases in the observed sequences and (2) the neighborhood relationship among the observed sequences mapped in sequence space. The computer experiments of error correction of artificially generated sequences supported the effectiveness of our method, showing that 50-90% of errors were removed. Interestingly, this method is analogous to a probabilistic model based method of image restoration developed in the field of information engineering.
GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement
Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)
2011-12-14
In hardware-aware high performance computing, block-asynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they use very different approaches for this purpose, they share the basic idea of compensating for the convergence behaviour of an inferior numerical algorithm by a more efficient usage of the provided computing power. In this paper, we analyze the potential of combining both techniques. Therefore, we implement a mixed precision iterative refinement algorithm using a block-asynchronous iteration as the error correction solver, and compare its performance with a pure implementation of a block-asynchronous iteration and an iterative refinement method using double precision for the error correction solver. For matrices from the University of Florida Matrix Collection, we report the convergence behaviour and provide the total solver runtime using different GPU architectures.
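The structure of mixed precision iterative refinement is simple: solve cheaply in low precision, then measure and correct the residual in high precision. The sketch below stands in a dense float32 solve for the paper's block-asynchronous GPU error correction solver; it is an illustration of the refinement loop, not of the paper's implementation.

```python
import numpy as np

def solve_mixed_precision(A, b, iters=30, tol=1e-12):
    """Iterative refinement: the correction equation is solved cheaply in
    float32 (standing in for the GPU error correction solver), while the
    solution and residual are accumulated in float64."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(iters):
        r = b - A @ x                        # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        c = np.linalg.solve(A32, r.astype(np.float32))  # cheap inner solve
        x += c.astype(np.float64)            # high-precision correction
    return x
```

For a well-conditioned matrix, a handful of refinement steps recovers a residual near double-precision level even though every inner solve runs in single precision.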
In situ correction of field errors induced by temperature gradient in cryogenic undulators
Takashi Tanaka
2009-12-01
A new technique of undulator field correction for cryogenic permanent magnet undulators (CPMUs) is proposed to correct the phase error induced by temperature gradient. This technique takes advantage of two important instruments: one is the in-vacuum self-aligned field analyzer with a laser instrumentation system to precisely measure the distribution of the magnetic field generated by the permanent magnet arrays placed in vacuum, and the other is the differential adjuster to correct the local variation of the magnet gap. The details of the two instruments are described together with the method of how to analyze the field measurement data and deduce the gap variation along the undulator axis. The correction technique was applied to a CPMU with a length of 1.7 m and a magnetic period of 14 mm. It was found that the phase error induced during the cooling process was attributable to local gap variations of around 30 μm, which were then corrected by the differential adjuster.
Sarunya Kanjanawattana
2017-01-01
… literature. Extracting graph information clearly benefits readers who are interested in graph information interpretation, because significant information presented in the graph can be obtained. A typical tool used to transform image-based characters into computer-editable characters is optical character recognition (OCR). Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This becomes a serious problem because misrecognition provides misleading information to readers and causes miscommunication. In this study, we present a novel method for OCR error correction on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used the graph component extraction proposed in our previous study to omit irrelevant parts from graph components; it was applied to clean and prepare input data for the OCR error correction. The main objectives of this paper are to extract significant information from the graph using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance with the highest accuracies and F-measures. Moreover, our input data contained less noise because of the efficiency of our graph component extraction. Based on this evidence, we conclude that our solution to the OCR problem achieves the objectives.
Error correction and diversity analysis of population mixtures determined by NGS.
Wood, Graham R; Burroughs, Nigel J; Evans, David J; Ryabov, Eugene V
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site.
A corrected method of distorted printed circuit board image
Qiao Nao-Sheng; Ye Yu-Tang; Huang Yong-Lin
2011-01-01
This paper proposes a correction method for distorted images based on adaptive control. First, the adaptive control relationship of pixel-point positions between a distorted image and its corrected image is given by using polynomial fitting; thus control point pairs between the distorted image and its corrected image are found. Secondly, the values of both the image distortion centre and the polynomial coefficients are obtained with the least squares method, and thus the relationship of each control point pair is deduced. In the course of distorted image processing, the gray value of the corrected image is computed with bilinear interpolation. Finally, experiments are performed to correct two distorted printed circuit board images. The results are good, and the mean square residual errors are tiny.
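When the corrected image is resampled, each target pixel maps to fractional coordinates in the distorted source image, so its gray value is blended from the four surrounding pixels. A minimal bilinear-interpolation sketch of that resampling step (with clamped borders; the distortion model itself is omitted):

```python
def bilinear(img, x, y):
    """Sample image img (rows of gray values) at fractional coordinates
    (x, y): weight the four neighbouring pixels by the fractional offsets."""
    h, w = len(img), len(img[0])
    x0 = min(int(x), w - 2)          # clamp so x0+1, y0+1 stay in bounds
    y0 = min(int(y), h - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])
```

Sampling the centre of a 2 x 2 patch returns the average of its four gray values, and sampling exactly on a pixel returns that pixel's value.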
Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting
Deng, Robert H.; Costello, Daniel J., Jr.
1987-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K bit DRAMs are organized in 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
Xin, Dongyue; Sader, C Avery; Chaudhary, Om; Jones, Paul-James; Wagner, Klaus; Tautermann, Christofer S; Yang, Zheng; Busacca, Carl A; Saraceno, Reginaldo A; Fandrick, Keith R; Gonnella, Nina C; Horspool, Keith; Hansen, Gordon; Senanayake, Chris H
2017-05-19
An accurate and efficient procedure was developed for performing (13)C NMR chemical shift calculations employing density functional theory with the gauge invariant atomic orbitals (DFT-GIAO). Benchmarking analysis was carried out, incorporating several density functionals and basis sets commonly used for prediction of (13)C NMR chemical shifts, from which the B3LYP/cc-pVDZ level of theory was found to provide accurate results at low computational cost. Statistical analyses from a large data set of (13)C NMR chemical shifts in DMSO are presented with TMS as the calculated reference and with empirical scaling parameters obtained from a linear regression analysis. Systematic errors were observed locally for key functional groups and carbon types, and correction factors were determined. The application of this process and associated correction factors enabled assignment of the correct structures of therapeutically relevant compounds in cases where experimental data yielded inconclusive or ambiguous results. Overall, the use of B3LYP/cc-pVDZ with linear scaling and correction terms affords a powerful and efficient tool for structure elucidation.
Rakêt, Lars Lau; Søgaard, Jacob; Salmistraro, Matteo;
2012-01-01
… Thereafter, methods for exploiting the error-correcting capabilities of the LDPCA code in DVC are investigated. The proposed frame interpolation includes a symmetric flow constraint to the standard forward-backward frame interpolation scheme, which improves quality and handling of large motion. The three …, an average bitrate saving of more than 40% is achieved compared to DISCOVER on Wyner-Ziv frames. In addition we also exploit and investigate the internal error-correcting capabilities of the LDPCA code in order to make it more robust to errors. We investigate how to achieve this goal by only modifying …
Tong, Li; Yang, Cheng; Wu, Po-Yen; Wang, May D
2016-02-01
Sequencing errors are a major issue for several next-generation sequencing-based applications such as de novo assembly and single nucleotide polymorphism detection. Several error-correction methods have been developed to improve raw data quality. However, error-correction performance is hard to evaluate because of the lack of a ground truth. In this study, we propose a novel approach which uses ERCC RNA spike-in controls as the ground truth to facilitate error-correction performance evaluation. After aligning raw and corrected RNA-seq data, we characterized the quality of reads by three metrics: mismatch patterns (e.g., the substitution rate of A to C) of reads aligned with one mismatch, mismatch patterns of reads aligned with two mismatches, and the percentage increase of reads aligned to the reference. We observed that the mismatch patterns for reads aligned with one mismatch are significantly correlated between ERCC spike-ins and real RNA samples. Based on these observations, we conclude that ERCC spike-ins can serve as ground truths for error correction beyond their previous applications for validation of dynamic range and fold-change response. Also, the mismatch patterns for ERCC reads aligned with one mismatch can serve as a novel and reliable metric to evaluate the performance of error-correction tools.
Supporting Dictation Speech Recognition Error Correction: The Impact of External Information
Shi, Yongmei; Zhou, Lina
2011-01-01
Although speech recognition technology has made remarkable progress, its wide adoption is still restricted by notable effort made and frustration experienced by users while correcting speech recognition errors. One of the promising ways to improve error correction is by providing user support. Although support mechanisms have been proposed for…
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
2011-01-26
... From the Federal Register Online via the Government Publishing Office ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 RIN 2060-AQ66 Determinations Concerning Need for Error Correction, Partial Approval... Determination Concerning the Need for Error Correction, Partial Approval and Partial Disapproval, and...
An upper bound on the number of errors corrected by a convolutional code
Justesen, Jørn
2000-01-01
The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
Santos, Maria; Lopez-Serrano, Sonia; Manchon, Rosa M.
2010-01-01
Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a…
Transfer Error and Correction Approach in Mobile Network
Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou
With the development of information technology and social progress, human demand for information has become increasingly diverse: wherever and whenever, people want to be able to communicate easily, quickly, and flexibly via voice, data, images, video, and other means. Visual information gives people a direct and vivid impression, so image/video transmission has also received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication a main business of wireless communications, actual wireless and IP channels introduce errors, such as errors generated by multipath fading in wireless channels and packet loss in IP networks. Because of channel bandwidth limitations, video data is compressed before transmission, and the compressed data is very sensitive to channel errors, which can cause a serious decline in image quality.
BayesHammer: Bayesian clustering for error correction in single-cell sequencing.
Nikolenko, Sergey I; Korobeynikov, Anton I; Alekseyev, Max A
2013-01-01
Error correction of sequenced reads remains a difficult task, especially in single-cell sequencing projects with extremely non-uniform coverage. While existing error correction tools designed for standard (multi-cell) sequencing data usually come up short in single-cell sequencing projects, algorithms actually used for single-cell error correction have been so far very simplistic. We introduce several novel algorithms based on Hamming graphs and Bayesian subclustering in our new error correction tool BAYESHAMMER. While BAYESHAMMER was designed for single-cell sequencing, we demonstrate that it also improves on existing error correction tools for multi-cell sequencing data while working much faster on real-life datasets. We benchmark BAYESHAMMER on both k-mer counts and actual assembly results with the SPADES genome assembler.
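The Hamming-graph idea behind such tools can be illustrated with a deliberately crude consensus step: connect reads within a small Hamming distance and replace each rare read with its most frequent neighbour. This is a sketch of the simplistic baseline that BAYESHAMMER's Bayesian subclustering improves upon, not of BAYESHAMMER itself; it assumes fixed-length reads.

```python
from collections import Counter

def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def cluster_correct(reads, max_dist=1):
    """Replace each read with the most frequent read within max_dist
    mismatches of it -- a crude consensus over the Hamming graph."""
    counts = Counter(reads)
    ranked = [r for r, _ in counts.most_common()]  # most frequent first
    corrected = []
    for r in reads:
        best = r
        for center in ranked:
            if counts[center] > counts[r] and hamming(center, r) <= max_dist:
                best = center
                break
        corrected.append(best)
    return corrected
```

With nine identical reads and one read carrying a single substitution, the outlier is pulled onto the dominant cluster centre.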
Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
Single-event upset (SEU) in a DRAM with on-chip error correction
Zoutendyk, J. A.; Schwartz, H. R.; Watson, R. K.; Hasnain, Z.; Nevile, L. R.
1987-01-01
Results are given of SEU measurements on 256K dynamic RAMs with on-chip error correction. They are claimed to be the first ever reported. A (12/8) Hamming error-correcting code was incorporated in the layout. Physical separation of the bits in each code word was used to guard against multiple bits being disrupted in any given word. Significant reduction in observed errors is reported.
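A (12/8) Hamming code corrects any single bit error in a 12-bit word carrying 8 data bits. The chip's actual bit layout is not given in the abstract, so the convention below (parity bits at the power-of-two positions 1, 2, 4, 8) is an assumption for illustration:

```python
def hamming_12_8_encode(data_bits):
    """Encode 8 data bits into a 12-bit word; parity at positions 1, 2, 4, 8
    (1-based), each covering the positions whose index has that bit set."""
    word = [0] * 13  # index 0 unused
    data_positions = [p for p in range(1, 13) if p not in (1, 2, 4, 8)]
    for pos, bit in zip(data_positions, data_bits):
        word[pos] = bit
    for p in (1, 2, 4, 8):
        word[p] = sum(word[i] for i in range(1, 13) if i & p) % 2
    return word[1:]

def hamming_12_8_correct(word12):
    """Return (corrected 12-bit word, error position); position 0 = no error."""
    word = [0] + list(word12)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i] for i in range(1, 13) if i & p) % 2:
            syndrome += p
    if syndrome:
        word[syndrome] ^= 1  # flip the erroneous bit back
    return word[1:], syndrome

# toy demonstration: a single upset is located by the syndrome and corrected
codeword = hamming_12_8_encode([1, 0, 1, 1, 0, 0, 1, 0])
corrupted = list(codeword)
corrupted[4] ^= 1  # upset at position 5 (1-based)
fixed, pos = hamming_12_8_correct(corrupted)
```

Physically separating the 12 bits of each word on the die, as the abstract describes, keeps a single ion strike from upsetting two bits of the same word, which a single-error-correcting code like this cannot repair.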
An effective correction algorithm for off-axis portal dosimetry errors
Bailey, Daniel W.; Kumaraswamy, Lalith; Podgorsak, Matthew B. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States)
2009-09-15
Portal dosimetric images acquired for IMRT pretreatment verification show dose errors of up to 15% near the detector edges as compared to dose predictions calculated by a treatment planning system for these off-axis regions. A method is proposed to account for these off-axis effects by precisely correcting the off-axis output factors, which calibrate the imager for absolute dose. Using this method, agreement between the predicted and the measured doses improves by up to 15% for fields near the detector edges, resulting in passing rate improvements of as much as 60% for gamma evaluation of 3 mm, 3% within the collimator jaws.
The localization and correction of errors in models: a constraint-based approach
Piechowiak, S.; Rodriguez, J
2005-01-01
Model-based diagnosis and constraint-based reasoning are well-known generic paradigms whose most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model. We present a method to debug a model. To support the debugging task, we propose to use a model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.
Upper bounds on the number of errors corrected by a convolutional code
Justesen, Jørn
2004-01-01
We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error pat...
Repeated quantum error correction by real-time feedback on continuously encoded qubits
Cramer, Julia; Kalb, Norbert; Rol, M. Adriaan; Hensen, Bas; Blok, Machiel S.; Markham, Matthew; Twitchen, Daniel J.; Hanson, Ronald; Taminiau, Tim H.
Because quantum information is extremely fragile, large-scale quantum information processing requires constant error correction. To be compatible with universal fault-tolerant computations, it is essential that quantum states remain encoded at all times and that errors are actively corrected. I will present such active quantum error correction in a hybrid quantum system based on the nitrogen vacancy (NV) center in diamond. We encode a logical qubit in three long-lived nuclear spins, detect errors by multiple non-destructive measurements using the optically active NV electron spin and correct them by real-time feedback. By combining these new capabilities with recent advances in spin control, multiple cycles of error correction can be performed within the dephasing time. We investigate both coherent and incoherent errors and show that the error-corrected logical qubit can indeed store quantum states longer than the best spin used in the encoding. Furthermore, I will present our latest results on increasing the number of qubits in the encoding, required for quantum error correction for both phase- and bit-flip.
Stoup, John R.; Faust, Bryon S.; Doiron, Theodore D.
1998-09-01
One of the most elusive measurement elements in gage block interferometry is the correction for the phase change on reflection. Techniques used to quantify this correction have improved over the years, but the measurement uncertainty has remained relatively constant because some error sources have proven historically difficult to reduce. The Precision Engineering Division at the National Institute of Standards and Technology has recently developed a measurement technique that can quantify the phase change on reflection correction directly for individual gage blocks and eliminates some of the fundamental problems with historical measurement methods. Since only the top surface of the gage block is used in the measurement, wringing film inconsistencies are eliminated with this technique, thereby drastically reducing the measurement uncertainty for the correction. However, block geometry and thermal issues still exist. This paper will describe the methods used to minimize the measurement uncertainty of the phase change on reflection evaluation using a spherical contact technique. The work focuses on gage block surface topography and drift-eliminating algorithms for the data collection. The extrapolation of the data to an undeformed condition and the failure of these curves to follow theoretical estimates are also discussed. The wavelength dependence of the correction was directly measured for different gage block materials and manufacturers and the data will be presented.
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
The Limits of Error Correction with lp Decoding
Wang, Meng; Tang, Ao
2010-01-01
An unknown vector f in R^n can be recovered from corrupted measurements y = Af + e, where A is an m×n coding matrix (m > n), if the unknown error vector e is sparse. We investigate the relationship between the fraction of errors and the recovery ability of lp-minimization (0 < p <= 1), which returns a vector x minimizing the "lp-norm" of y - Ax. We give sharp thresholds on the fraction of errors that determine the successful recovery of f. If e is an arbitrary unknown vector, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. If e has fixed support and fixed signs on the support, the threshold is 2/3 for all p in (0, 1), while the threshold is 1 for l1-minimization.
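The l1-minimization decoder described in the abstract can be posed as a linear program. The sketch below (using scipy, with problem sizes chosen arbitrarily for illustration) recovers f exactly when the error fraction is well below the thresholds discussed:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover x from y = A x + e (e sparse) by minimizing ||y - A x||_1 via an LP.
    Variables z = [x (n), t (m)]; minimize sum(t) subject to |y - A x| <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + m), method="highs")
    return res.x[:n]

# toy demonstration: 3 gross errors out of 40 measurements (fraction 0.075)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 4))
f = rng.standard_normal(4)
e = np.zeros(40)
e[[2, 9, 17]] = [3.0, -2.0, 5.0]
x = l1_decode(A, A @ f + e)
```

Because the error fraction is far below the l1 threshold, the residual y - Ax reproduces e exactly and x equals f to solver precision.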
AXAF Alignment Test System Autocollimating Flat Error Correction
Lewis, Timothy S.
1995-01-01
The alignment test system for the advanced x ray astrophysics facility (AXAF) high-resolution mirror assembly (HRMA) determines the misalignment of the HRMA by measuring the displacement of a beam of light reflected by the HRMA mirrors and an autocollimating flat (ACF). This report shows how to calibrate the system to compensate for errors introduced by the ACF, using measurements taken with the ACF in different positions. It also shows what information can be obtained from alignment test data regarding errors in the shapes of the HRMA mirrors. Simulated results based on measured ACF surface data are presented.
Correction of Errors in Time of Flight Cameras
Jiménez Cabello, David
2015-01-01
This thesis addresses the correction of errors in time-of-flight (ToF) depth cameras. Among the most recent technologies, continuous-wave modulation (CWM) ToF cameras are a promising alternative for building compact, fast sensors. However, a wide variety of errors significantly affect the depth measurement, compromising potential applications. The...
T7 Endonuclease I Mediates Error Correction in Artificial Gene Synthesis.
Sequeira, Ana Filipa; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-09-01
Efficacy of de novo gene synthesis largely depends on the quality of the overlapping oligonucleotides used as template for PCR assembly. The error rate associated with current gene synthesis protocols limits the efficient and accurate production of synthetic genes, at both small and large scales. Here, we analysed the ability of different endonuclease enzymes, which specifically recognize and cleave DNA mismatches resulting from incorrect pairings between DNA strands, to remove mutations accumulated in synthetic genes. The gfp gene, which encodes the green fluorescent protein, was artificially synthesized using an integrated protocol including an enzymatic mismatch cleavage (EMC) step following gene assembly. Functional and sequence analysis of the resulting artificial genes revealed that the number of deletions, insertions and substitutions was strongly reduced when T7 endonuclease I was used for mutation removal. This method reduced the mutation frequency eightfold relative to gene synthesis not incorporating an error correction step. Overall, EMC using T7 endonuclease I improved the population of error-free synthetic genes, resulting in an error frequency of 0.43 errors per 1 kb. Taken together, the data presented here reveal that incorporation of a mutation-removal step including T7 endonuclease I can effectively improve the fidelity of artificial gene synthesis.
The role of extensive recasts in error detection and correction by adult ESL students
Laura Hawkes
2016-03-01
Most of the laboratory studies on recasts have examined the role of intensive recasts provided repeatedly on the same target structure. This is different from the original definition of recasts as the reformulation of learner errors as they occur naturally and spontaneously in the course of communicative interaction. Using a within-group research design and a new testing methodology (video-based stimulated correction posttest), this laboratory study examined whether extensive and spontaneous recasts provided during small-group work were beneficial to adult L2 learners. Participants were 26 ESL learners, who were divided into seven small groups (3-5 students per group), and each group participated in an oral activity with a teacher. During the activity, the students received incidental and extensive recasts to half of their errors; the other half of their errors received no feedback. Students’ ability to detect and correct their errors in the three types of episodes was assessed using two types of tests: a stimulated correction test (a video-based computer test) and a written test. Students’ reaction time on the error detection portion of the stimulated correction task was also measured. The results showed that students were able to detect more errors in error+recast (error followed by the provision of a recast) episodes than in error-recast (error and no recast provided) episodes (though this difference did not reach statistical significance). They were also able to successfully and partially successfully correct more errors in error+recast episodes than in error-recast episodes, and this difference was statistically significant on the written test. The reaction time results also point towards a benefit from recasts, as students were able to complete the task slightly more quickly for error+recast episodes than for error-recast episodes.
"Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs
Holtheuer, Carolina; Rendle-Short, Johanna
2013-01-01
Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…
Mu, Dapeng; Yan, Haoming; Feng, Wei; Peng, Peng
2017-01-01
Filtering is a necessary step in the Gravity Recovery and Climate Experiment (GRACE) data processing, but it leads to obvious signal leakage and attenuation, adversely affecting the quality of global and regional mass change estimates. We propose to use the Tikhonov regularization technique with the L-curve method to solve a correction equation which can reduce the leakage error caused by the filtering involved in GRACE data processing. We first demonstrate that the leakage error caused by the Gaussian filter can be well corrected by our regularization technique with simulation studies in Greenland and Antarctica. Furthermore, our regularization technique can restore the spatial distribution of the original mass changes. For example, after applying the regularization method to GRACE data (2003-2012), we find that GRACE mass changes tend to move from the interior to the coastal area in Greenland, which is consistent with other recent studies. After being corrected for the glacial isostatic adjustment (GIA) effect, our results show that the ice mass loss rates were 274 ± 30 and 107 ± 34 Gt/yr in Greenland and Antarctica from 2003 to 2012, respectively. An increase of 10 ± 4 Gt/yr in the Greenland interior is also detected.
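The GRACE correction equation itself is not reproduced here, but the Tikhonov-with-L-curve machinery the abstract relies on can be sketched generically (the operator A below is an arbitrary ill-conditioned stand-in, not the GRACE filter operator):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized solution of A x ~ b: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def lcurve_points(A, b, lams):
    """(log residual norm, log solution norm) pairs for each lambda.
    The 'corner' of this curve (maximum curvature) is the usual L-curve choice."""
    pts = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    return np.array(pts)

# toy inverse problem: recover a smooth signal from its noisy cumulative sums
rng = np.random.default_rng(0)
n = 30
A = np.triu(np.ones((n, n)))                    # ill-conditioned operator
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 0.05 * rng.standard_normal(n)
pts = lcurve_points(A, b, [1e-3, 1e-1, 1e1])
```

As lambda grows, the residual norm can only increase and the solution norm can only shrink; the corner between the two regimes balances data fit against regularization.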
ACE: accurate correction of errors using K-mer tries
Sheikhizadeh Anari, S.; Ridder, de D.
2015-01-01
The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to correct these errors.
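ACE's K-mer trie machinery is not reproduced here, but the underlying idea (k-mers containing a substitution error are rare, so flag "weak" k-mers and try the base substitution that makes every covering k-mer "solid" again) can be sketched with a plain dictionary:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers across a set of reads."""
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def correct(read, counts, k, solid=2):
    """Greedy single-substitution correction: at each position whose covering
    k-mers are weak (count < solid), try the base that makes them all solid."""
    read = list(read)
    for i in range(len(read)):
        span = range(max(0, i - k + 1), min(i, len(read) - k) + 1)
        if all(counts[''.join(read[j:j + k])] >= solid for j in span):
            continue
        for b in 'ACGT':
            trial = read[:]
            trial[i] = b
            if all(counts[''.join(trial[j:j + k])] >= solid for j in span):
                read[i] = b
                break
    return ''.join(read)

# toy demonstration: 5 clean copies of a sequence, one read with a substitution
true = "ACGTTGCAACGGTCCA"
counts = kmer_counts([true] * 5, 5)
err = true[:6] + "A" + true[7:]
fixed = correct(err, counts, 5)
```

This is a deliberately naive version: real correctors such as ACE use trie structures for memory efficiency and handle coverage variation, ties, and read ends far more carefully.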
M Ethan MacDonald
Volume flow rate (VFR) measurements based on phase contrast (PC) magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy-current-induced phase errors. The purpose of this study was to assess the impact of phase errors in time-averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes: local bias correction (LBC), local polynomial correction (LPC), and whole-brain polynomial correction (WBPC). Measurements of the eddy-current-induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. In the phantom, the phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). While eddy-current-induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels.
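The polynomial schemes (LPC/WBPC) amount to fitting a low-order 2-D polynomial to the phase of static tissue and subtracting it everywhere; vessels are masked out of the fit so flow phase is preserved. A minimal sketch (the masking convention and coordinates are assumptions, not the study's implementation):

```python
import numpy as np

def fit_background_phase(phase, mask, order=2):
    """Least-squares fit of a 2-D polynomial background to phase[mask],
    evaluated over the whole image (mask=True marks static tissue)."""
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    u, v = xx.ravel() / nx, yy.ravel() / ny          # normalized coordinates
    cols = [u**i * v**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    m = mask.ravel()
    coef, *_ = np.linalg.lstsq(A[m], phase.ravel()[m], rcond=None)
    return (A @ coef).reshape(ny, nx)

# toy demonstration: smooth eddy-current phase plus a flow offset in a "vessel"
ny, nx = 20, 24
yy, xx = np.mgrid[0:ny, 0:nx]
u, v = xx / nx, yy / ny
mask = np.ones((ny, nx), dtype=bool)
mask[8:12, 10:14] = False                            # vessel pixels, excluded
phase = 0.1 + 0.5 * u - 0.3 * v**2 + 2.0 * (~mask)
corrected = phase - fit_background_phase(phase, mask, order=2)
```

After subtraction the background is flat and the vessel retains its flow phase, which is the behavior the three correction schemes aim for.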
Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas
2008-01-01
This experiment investigated the effects of three corrective feedback methods, using different combinations of correction, or error cues and positive feedback for learning two badminton skills with different difficulty...
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-10-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit.
LDPC code optimization techniques to improve the error correction threshold
Роман Сергійович Новиков
2015-11-01
Non-empty stopping sets, which are the main reason an error threshold is reached in data transmission channels, are studied. A new algorithm for finding the smallest stopping sets and the stopping distance of any LDPC code is proposed, along with a more functional and flexible splitting-and-filling technique. The time required to find the smallest stopping sets and the stopping distance of any LDPC code is calculated.
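The paper's search algorithm is not reproduced here; for intuition, a stopping set is a set S of variable nodes such that every check node connected to S is connected to it at least twice, and the stopping distance is the size of the smallest non-empty stopping set. A brute-force finder, feasible only for tiny codes:

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Smallest non-empty stopping set of parity-check matrix H.
    Exhaustive search: exponential in the code length, for illustration only."""
    m, n = H.shape
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            rs = H[:, list(S)].sum(axis=1)   # check-node degrees restricted to S
            if np.all((rs == 0) | (rs >= 2)):
                return size, set(S)
    return None

# parity-check matrix of the (7,4) Hamming code: columns are 1..7 in binary
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
size, S = stopping_distance(H)
```

For this code no single column works (every column is nonzero) and no pair works (all columns are distinct), so the stopping distance equals 3, matching the minimum distance.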
Pooling designs with surprisingly high degree of error correction in a finite vector space
Guo, Jun
2011-01-01
Pooling designs are standard experimental tools in many biotechnical applications. It is well known that all famous pooling designs are constructed from mathematical structures by the "containment matrix" method. In particular, Macula's designs (resp. Ngo and Du's designs) are constructed from the containment relation of subsets (resp. subspaces) in a finite set (resp. vector space). Recently, we generalized Macula's designs and obtained a family of pooling designs with a higher degree of error correction using subsets of a finite set. In this paper, as a generalization of Ngo and Du's designs, we study the corresponding problems in a finite vector space and obtain a family of pooling designs with a surprisingly high degree of error correction. Our designs and Ngo and Du's designs have the same number of items and pools, respectively, but the error-tolerant property is much better than that of Ngo and Du's designs, as given by D'yachkov et al. [DF], when the dimension of the space is large enough.
Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna
2015-05-01
Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
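Regression calibration as described (fit the expected true exposure given the self-report on the validation subsample, then substitute the calibrated values in the outcome model) can be sketched as follows; the variables and effect sizes below are invented for illustration, not the study's data:

```python
import numpy as np

def regression_calibration(w_val, x_val, w_main):
    """Fit E[X | W] = a + b*W on validation pairs (self-report w, objective x),
    then impute the calibrated exposure for the whole main sample."""
    b, a = np.polyfit(w_val, x_val, 1)
    return a + b * np.asarray(w_main)

# toy demonstration: self-report W under-reports and adds noise to true X
rng = np.random.default_rng(0)
x = rng.normal(100, 20, 4000)              # "accelerometer" truth
w = 0.6 * x + rng.normal(0, 20, 4000)      # biased, noisy self-report
y = 0.05 * x + rng.normal(0, 0.5, 4000)    # outcome depends on true X
x_hat = regression_calibration(w[:1000], x[:1000], w)  # validation subsample
slope_naive = np.polyfit(w, y, 1)[0]       # attenuated/biased estimate
slope_cal = np.polyfit(x_hat, y, 1)[0]     # deattenuated estimate
```

The naive slope is biased by the measurement error in W, while the calibrated slope recovers the true coefficient (0.05 here) up to sampling noise, which is the "deattenuation" the abstract reports.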
A novel method of dynamic correction in the time domain
Hessling, J. P.
2008-07-01
The dynamic error of measured signals is sometimes unacceptably large. If the dynamic properties of the measurement system are known, the true physical signal may to some extent be re-constructed. With a parametrized characterization of the system and sampled signals, time-domain digital filters may be utilized for correction. In the present work a general method for synthesizing such correction filters is developed. It maps the dynamic parameters of the measurement system directly on to the filter coefficients and utilizes time reversed filtering. This avoids commonly used numerical optimization in the filter synthesis. The method of correction is simple with absolute repeatability and stability, and results in a low residual error. Explicit criteria to control both the horizontal (time) and vertical (amplitude) discretization errors are presented in terms of the utilization of bandwidth and noise gain, respectively. To evaluate how close to optimal the correction is, these errors are also formulated in relation to the signal-to-noise ratio of the original measurement system. For purposes of illustration, typical mechanical and piezo-electric transducer systems for measuring force, pressure or acceleration are simulated and dynamically corrected with such dedicated digital filters.
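A minimal illustration of the idea (invert a known, parametrized sensor model with a digital filter, and use time-reversed/zero-phase filtering to band-limit the correction without phase distortion) assuming a toy first-order sensor lag rather than the paper's transducer models:

```python
import numpy as np
from scipy.signal import lfilter, filtfilt, butter

# toy first-order sensor: H(z) = (1 - alpha) / (1 - alpha z^-1)
alpha = 0.9
b, a = [1 - alpha], [1, -alpha]

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500)) * 0.1   # smooth-ish "true" signal
y = lfilter(b, a, x)                            # what the sensor records

# correction filter = exact inverse of the sensor model (stable here because
# the sensor numerator is a constant); maps dynamic parameters to coefficients
x_hat = lfilter(a, b, y)

# band-limit the correction without phase distortion via time-reversed
# (forward-backward) filtering, controlling the noise gain
blp, alp = butter(4, 0.4)
x_hat_bl = filtfilt(blp, alp, x_hat)
```

For this first-order toy model the inversion is exact; for realistic second-order mechanical or piezo-electric models the same structure applies, but the bandwidth limit of the low-pass stage governs the residual error versus noise-gain trade-off the paper analyzes.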
Zhang, Jingfu; Laflamme, Raymond; Suter, Dieter
2012-09-07
Large-scale universal quantum computing requires the implementation of quantum error correction (QEC). While the implementation of QEC has already been demonstrated for quantum memories, reliable quantum computing requires also the application of nontrivial logical gate operations to the encoded qubits. Here, we present examples of such operations by implementing, in addition to the identity operation, the NOT and the Hadamard gate to a logical qubit encoded in a five qubit system that allows correction of arbitrary single-qubit errors. We perform quantum process tomography of the encoded gate operations, demonstrate the successful correction of all possible single-qubit errors, and measure the fidelity of the encoded logical gate operations.
Quantum Error-Correction-Enhanced Magnetometer Overcoming the Limit Imposed by Relaxation.
Herrera-Martí, David A; Gefen, Tuvia; Aharonov, Dorit; Katz, Nadav; Retzker, Alex
2015-11-13
When incorporated in quantum sensing protocols, quantum error correction can be used to correct for high frequency noise, as the correction procedure does not depend on the actual shape of the noise spectrum. As such, it provides a powerful way to complement usual refocusing techniques. Relaxation imposes a fundamental limit on the sensitivity of state of the art quantum sensors which cannot be overcome by dynamical decoupling. The only way to overcome this is to utilize quantum error correcting codes. We present a superconducting magnetometry design that incorporates approximate quantum error correction, in which the signal is generated by a two qubit Hamiltonian term. This two-qubit term is provided by the dynamics of a tunable coupler between two transmon qubits. For fast enough correction, it is possible to lengthen the coherence time of the device beyond the relaxation limit.
张少强; 李醒飞; 吴腾飞; 纪越; 徐梦洁; 陈诚
2015-01-01
To improve the low-frequency performance of a magnetohydrodynamics (MHD) angular rate sensor, a theoretical error model of the sensor is established and an adaptive Kalman filtering algorithm is proposed. The algorithm automatically adjusts the process-noise and measurement-noise parameters of the filter according to the frequency of the measured angular rate, dynamically compensating the low-frequency error of the MHD sensor. A comparative experiment on the sensor before and after correction shows that the sensor's relative error in the low-frequency region (<1 Hz) is decreased by about 90%, demonstrating that the proposed method can improve the low-frequency performance of MHD angular rate sensors.
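The paper's filter adapts its noise covariances to the measured signal; a scalar innovation-based sketch of that idea (the sensor's actual error model and adaptation law are not reproduced, and all parameters below are illustrative):

```python
import numpy as np

def adaptive_kalman(z, q=1e-4, r0=1.0, window=30):
    """Scalar Kalman filter whose measurement-noise estimate r adapts to the
    recent innovation power, a common innovation-based adaptation scheme."""
    x, p, r = z[0], 1.0, r0
    nu2 = []                        # recent squared innovations
    out = []
    for zk in z:
        p = p + q                   # predict (random-walk state model)
        nu = zk - x                 # innovation
        nu2.append(nu * nu)
        if len(nu2) > window:
            nu2.pop(0)
        # steady state: E[nu^2] = p + r, so estimate r from innovation power
        r = max(np.mean(nu2) - p, 1e-6)
        k = p / (p + r)             # Kalman gain
        x = x + k * nu
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# toy demonstration: noisy measurements of a slowly varying rate
rng = np.random.default_rng(3)
z = 2.0 + 0.5 * rng.standard_normal(500)
est = adaptive_kalman(z)
```

Because r is learned from the data rather than fixed, the filter keeps smoothing appropriately as the noise statistics of the measured angular rate change.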
Motion error correction approach for high-resolution synthetic aperture radar imaging
Jia, Gaowei; Chang, Wenge; Li, Xiangyang
2014-01-01
An innovative data-based motion compensation approach is proposed for high-resolution synthetic aperture radar (SAR). The main idea is to extract the displacements in the line-of-sight direction and the range-dependent phase errors from raw data, based on an instantaneous Doppler rate estimate. The approach is implemented by a two-step process: (1) the correction of excessive range cell migration; (2) the compensation of range-dependent phase errors. Experimental results show that the proposed method is capable of producing high-resolution SAR imagery with a spatial resolution of 0.17 × 0.2 m² (range × azimuth) in Ku band.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
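The patent's approach compares range profiles across the slow-time dimension to estimate the residual motion error, then applies frequency and phase corrections before compression. A one-dimensional sketch of that estimate-and-correct loop, using the cross-spectrum phase slope for sub-cell shift estimation (profile shape and parameters are invented):

```python
import numpy as np

def apply_shift(profile, d):
    """Circularly shift a range profile by d samples (fractional allowed)
    via a linear phase ramp in the frequency domain."""
    n = len(profile)
    k = np.fft.fftfreq(n) * n
    return np.fft.ifft(np.fft.fft(profile) * np.exp(-2j * np.pi * k * d / n))

def estimate_shift(ref, sig, m=None):
    """Estimate the shift of sig relative to ref from the phase slope of the
    cross-spectrum (low-frequency bins only, where SNR is best)."""
    n = len(ref)
    m = m or n // 4
    cross = np.fft.fft(ref) * np.conj(np.fft.fft(sig))
    phase = np.unwrap(np.angle(cross[:m]))
    slope = np.polyfit(np.arange(m), phase, 1)[0]
    return slope * n / (2 * np.pi)

# toy demonstration: one pulse's range profile displaced by 1.7 range cells
r = np.arange(128)
ref = np.exp(-(r - 40.0) ** 2 / 18.0)     # point-target-like profile
shifted = apply_shift(ref, 1.7).real
d_hat = estimate_shift(ref, shifted)
corrected = apply_shift(shifted, -d_hat).real
```

Estimating the shift from the spectrum's phase rather than the correlation peak is what lets the error be measured below (and, analogously, beyond) the nominal range resolution.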
Low-cost ultrasonic distance sensor arrays with networked error correction.
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-09-05
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
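Two ingredients of the scheme can be sketched independently: the environmental correction via the standard speed-of-sound approximation c ≈ 331.3 + 0.606·T m/s, and maximum-likelihood fusion of a node's estimate with its neighbors', weighted by inverse variance. The network-topology optimization itself is not reproduced, and the numbers below are invented:

```python
import numpy as np

def distance_from_echo(t_round_trip, temp_c):
    """Ultrasonic range from round-trip time with first-order temperature
    correction of the speed of sound in air."""
    c = 331.3 + 0.606 * temp_c            # m/s
    return c * t_round_trip / 2.0

def ml_fuse(estimates, variances):
    """Maximum-likelihood (inverse-variance weighted) fusion of a node's own
    estimate with its neighbors'; returns the fused value and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# toy demonstration: own reading fused with two neighbors' estimates
d_own = distance_from_echo(0.01, 20.0)    # about 1.72 m at 20 C
fused, var = ml_fuse([10.2, 9.9, 10.05], [0.04, 0.01, 0.02])
```

The fused variance is always smaller than the best individual variance, which is why borrowing information from neighboring sensors reduces the environmental ranging error.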
Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography
Müller, P.; Hiller, Jochen; Dai, Y.;
2015-01-01
…and repeatability of dimensional and geometrical measurements. The aim of this paper is to discuss different methods for the correction of scaling errors and to quantify their influence on dimensional measurements. Scaling errors occur first and foremost in CT systems with no built-in compensation of positioning errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball … geometry and is made of brass, which makes its measurements with CT challenging. It is shown that each scaling error correction method results in different deviations between CT measurements and reference measurements from a CMM. Measurement uncertainties were estimated for each method, taking …
A Histogram-Based Static-Error Correction Technique for Flash ADCs
Armin Jalili; J Jacob Wikner; Sayed Masoud Sayedi; Rasoul Dehghani
2011-01-01
High-speed, high-accuracy data converters are attractive for use in most RF applications. Such converters allow direct conversion to occur between the digital baseband and the antenna. However, high speed and high accuracy make the analog components in a converter more complex, and this complexity causes more power to be dissipated than if a traditional approach were taken. A static calibration technique for flash analog-to-digital converters (ADCs) is discussed in this paper. The calibration is based on histogram test methods, and equivalent errors in the flash ADC comparators are estimated in the digital domain without any significant changes being made to the ADC comparators. In the trimming process, reference voltages are adjusted to compensate for static errors. Behavioral-level simulations of a moderate-resolution 8-bit flash ADC show that, for typical errors, ADC performance is considerably improved by the proposed technique. As a result of calibration, the differential nonlinearities (DNLs) are reduced on average from 4 LSB to 0.5 LSB, and the integral nonlinearities (INLs) are reduced on average from 4.2 LSB to 0.35 LSB. Implementation issues for this proposed technique are discussed in our subsequent paper, "A Histogram-Based Static-Error Correction Technique for Flash ADCs: Implementation Aspects."
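The histogram test underlying this calibration can be sketched as follows. This is a simplified behavioral illustration, not the paper's circuit-level trimming: with a uniformly distributed input, each comparator threshold can be read off the cumulative code histogram. The threshold values and sample counts below are made up.

```python
# Sketch of the histogram test idea: estimate flash-ADC comparator thresholds
# from the output-code histogram of a uniform-input test. Simplified and
# illustrative only; true_thresholds model static comparator offsets.

import random

def quantize(v, thresholds):
    """Flash-ADC model: output code = number of thresholds below the input."""
    return sum(v >= t for t in thresholds)

def estimate_thresholds(codes, vmin, vmax, n_codes):
    """Estimate thresholds from the code histogram of a uniform-input test."""
    hist = [0] * n_codes
    for c in codes:
        hist[c] += 1
    total, cum, est = len(codes), 0, []
    for k in range(n_codes - 1):
        cum += hist[k]
        est.append(vmin + (vmax - vmin) * cum / total)
    return est

random.seed(0)
true_thresholds = [0.20, 0.55, 0.75]  # includes static errors
codes = [quantize(random.random(), true_thresholds) for _ in range(200000)]
estimates = estimate_thresholds(codes, 0.0, 1.0, 4)
```

The estimates converge on the true thresholds as the sample count grows; the difference between each estimate and the ideal uniform threshold is the static error the trimming process would compensate.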
Seyyedi, H.; Kaheil, Y.; Anagnostou, E. N.; McCollum, J.; Beighley, E.
2013-12-01
Deriving flood maps requires an accurate characterization of precipitation variability at high spatio-temporal resolution. Most of the available global-scale gridded precipitation products are available at resolutions (e.g., 25 km) not directly applicable to flood modeling. An error correction and spatial downscaling method based on a two-dimensional satellite rainfall error model (SREM2D) is tested in this study based on a long-term (2001-2010) dataset. Specifically, the model is applied on two rainfall datasets: a satellite precipitation product (TRMM-3B42.V7 at 0.25 degree) and a global land-atmosphere re-analysis product (GLDAS-CLM at 1 degree), to produce error corrected rainfall ensembles at 0.05 degree spatial resolution. The NCEP hourly, 4-km resolution multi-sensor precipitation product (WSR-88D stage IV gauge-adjusted radar-rainfall product) is used as the reference rainfall dataset. The Hillslope River Routing (HRR) hydrologic model is forced with the downscaled ensemble rainfall data to produce an ensemble of runoff values. The Susquehanna River basin is the study area, consisting of 1000 sub-basins ranging from 39 to 67,000 square kilometers including complex terrain and high latitude locations. There are 437 significant storm events selected over the study area based on the 10-year database. The analysis performed is based on 60 percent of events in each season kept for model calibration and 40 percent for validation. The statistical analysis consists of two parts: (1) evaluation of error metrics (relative standard deviation and efficiency coefficient) to quantify improvements in rainfall and runoff simulations as function of basin size and storm severity, and (2) ensemble verification (exceedance probability and mean uncertainty ratio) of the rainfall and runoff ensembles to assess the accuracy of the ensemble-based uncertainty characterization. The study investigates how well the ensemble of downscaled and error-corrected rainfall data performs
An Empirical Study of End-User Behaviour in Spreadsheet Error Detection & Correction
Bishop, Brian
2008-01-01
Very little is known about the process by which end-user developers detect and correct spreadsheet errors. Any research pertaining to the development of spreadsheet testing methodologies or auditing tools would benefit from information on how end-users perform the debugging process in practice. Thirteen industry-based professionals and thirty-four accounting & finance students took part in an ongoing experiment designed to record and analyse end-user behaviour in spreadsheet error detection and correction. Professionals significantly outperformed students in correcting certain error types. Time-based cell activity analysis showed that a strong correlation exists between the percentage of cells inspected and the number of errors corrected. The cell activity data was gathered through a purpose-written VBA Excel plug-in that records the time and detail of all cell selection and cell change actions of individuals.
Oflazer, K
1995-01-01
Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected, and morphologically analyzed concurrently. We present an application of this to error-tolerant analysis of agglutinative morphology of Turkish words. The algorithm can be applied to morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite state transducer, regardless of the word formation processes and morphographemic phenomena involved. In the context of spelling correction, error-tolerant recognition can be...
XTile: An Error-Correction Package for DNA Self-Assembly
Chaurasia, Anshul; Jain, Prateek; Gupta, Manish K
2009-01-01
Self-assembly is a process by which supramolecular species form spontaneously from their components. This process is ubiquitous throughout the chemistry of life and is central to biological information processing. It has been predicted that in the future self-assembly will become an important engineering discipline combining the fields of biomolecular computation, nanotechnology and medicine. However, error control is a key challenge in realizing the potential of self-assembly. Recently many authors have proposed several combinatorial error correction schemes to control errors, with a close analogy to coding theory, such as Winfree's proofreading scheme and its generalizations by Chen and Goel, and the compact scheme of Reif, Sahu and Yin. In this work, we present an error correction computational tool, XTile, that can be used to create input files for the Xgrow simulator of Winfree by providing the design logic of the tiles; it also allows the user to apply proofreading, snake and compact error correction ...
Model based correction of placement error in EBL and its verification
Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro
2016-05-01
In maskmaking, the main source of placement error is charging. DISPLACE software corrects the placement error for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. One important step is the calibration of the physical model. A test layout on a single calibration mask was used for calibration. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for the verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE. A good correlation of the measured and predicted values of the correction confirmed the high accuracy of the charging placement error correction.
Refractive error correction and flying
杨国庆; 张作明
2011-01-01
Objective: To review the methods of refractive error correction and research progress on their relationship with flying. Literature resource and selection: Research papers, study reports and monographs in this field. Literature quotation: Fifty-six publicly published papers and writings in this field were cited. Literature synthesis: The advantages and disadvantages of the various refractive error correction methods as applied to flying personnel are discussed, with emphasis on the feasibility of corneal refractive surgery for flying personnel. Conclusions: Compared with other correction methods, such as spectacles, corneal contact lenses and intra-ocular refractive surgery, corneal refractive surgery has a promising outlook for flying personnel. At present, foreign civilian flying personnel mainly undergo laser in situ keratomileusis (LASIK), while military flying personnel mainly undergo photorefractive keratectomy (PRK). Femtosecond laser LASIK has been approved for refractive error correction of American military flying personnel, and those lessons may be consulted for Chinese military flying personnel.
From quantum feedback to probabilistic error correction: manipulation of quantum beats in cavity QED
Barberis-Blostein, P [Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, Ciudad Universitaria, 04510, Mexico, DF (Mexico); Norris, D G; Orozco, L A; Carmichael, H J [Joint Quantum Institute, Department of Physics, University of Maryland and National Institute of Standards and Technology, College Park, MD 20742 (United States)], E-mail: lorozco@umd.edu
2010-02-15
It is shown how one can implement quantum feedback and probabilistic error correction in an open quantum system consisting of a single atom, with ground- and excited-state Zeeman structure, in a driven two-mode optical cavity. The ground-state superposition is manipulated and controlled through conditional measurements and external fields, which shield the coherence and correct quantum errors. Modeling an experimentally realistic situation demonstrates the robustness of the proposal for realization in the laboratory.
Santaguida, Stefano; Vernieri, Claudio; Villa, Fabrizio; Ciliberto, Andrea; Musacchio, Andrea
2011-01-01
Fidelity of chromosome segregation is ensured by a tension-dependent error correction system that prevents stabilization of incorrect chromosome–microtubule attachments. Unattached or incorrectly attached chromosomes also activate the spindle assembly checkpoint, thus delaying mitotic exit until all chromosomes are bioriented. The Aurora B kinase is widely recognized as a component of error correction. Conversely, its role in the checkpoint is controversial. Here, we report an analysis of the...
High-speed parallel forward error correction for optical transport networks
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert;
2010-01-01
This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology.
Bulk Locality and Quantum Error Correction in AdS/CFT
Almheiri, Ahmed; Dong, Xi; Harlow, Daniel
2014-01-01
We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether...
Error Correction in Latent Inhibition and its Disruption by Opioid Receptor Blockade with Naloxone
Leung, Hiu T; Killcross, A S; Westbrook, R. Frederick
2013-01-01
Latent inhibition refers to the retardation in the development of conditioned responding when a pre-exposed stimulus is used to signal an unconditioned stimulus. This effect is described by error-correction models as an attentional deficit and is commonly used as an animal model of schizophrenia. A series of experiments studied the role of the error-correction mechanism in latent inhibition and its interaction with the endogenous opioid system. Systemic administration of the competitive opioid re...
Lin, Tengjiao; He, Zeyin
2017-07-01
We present a method for analyzing the transmission error of a helical gear system with errors. First, a finite element method is used to model the gear transmission system with machining errors, assembly errors and modifications, and the static transmission error is obtained. Then the bending-torsional-axial coupling dynamic model of the transmission system is established based on the lumped mass method, and the dynamic transmission error of the gear transmission system is calculated, which provides error excitation data for the analysis and control of vibration and noise of the gear system.
Tulpan, Dan; Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
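As a small illustration of the single-error correction that Hamming codes provide, here is a minimal binary Hamming(7,4) encoder and corrector. HyDEn itself uses custom quaternary codes over the DNA alphabet; the binary case is shown only for brevity, and the function names are of my choosing.

```python
# Minimal binary Hamming(7,4): parity bits at positions 1, 2 and 4 let a
# single flipped bit be located and corrected from the syndrome.

def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return the codeword with any single-bit error corrected."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # 1-indexed error position; 0 means no error
    if pos:
        c[pos - 1] ^= 1
    return c

cw = hamming74_encode([1, 0, 1, 1])
corrupted = cw[:]
corrupted[4] ^= 1                # flip one bit
assert hamming74_correct(corrupted) == cw
```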
Gold price effect on stock market: A Markov switching vector error correction approach
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a popular precious metal whose demand is driven not only by practical use but also by its popularity as an investment commodity, while the stock market reflects a country's growth; the effect of the gold price on stock market behavior is therefore of interest in this study. Markov Switching Vector Error Correction Models are applied to analyze the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. Moreover, there are numerous specifications of Markov Switching Vector Error Correction Models, and this paper compares the intercept-adjusted Markov Switching Vector Error Correction Model and the intercept-adjusted heteroskedastic Markov Switching Vector Error Correction Model to determine the best model representation for capturing the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedastic Markov Switching Vector Error Correction Model provides more significant and reliable results than the intercept-adjusted Markov Switching Vector Error Correction Model.
Using History to Teach Scientific Method: The Role of Errors
Giunta, Carmen J.
2001-05-01
Including tales of error along with tales of discovery is desirable in any use of history of science to teach about science. Tales of error, particularly when they involve justly well-regarded historical figures, serve to avoid two pitfalls to which use of historical material in science teaching is otherwise susceptible. Acknowledging the false steps of great scientists avoids putting those scientists on a pedestal and illustrates that there is no automatic or mechanical scientific method. This paper lists five kinds of error with examples of each from the development of chemistry in the 18th and 19th centuries: erroneous theories (such as phlogiston), seeing a new phenomenon everywhere one seeks it (e.g., Lavoisier and the decomposition of water), theories erroneous in detail but nonetheless fruitful (e.g., Dalton's atomic theory), rejection of correct theories (e.g., Avogadro's hypothesis), and incoherent insights (e.g., J. A. R. Newlands' classification of the elements).
CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD
BUSUIOCEANU STELIANA
2013-08-01
The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.
2010-06-15
... AGENCY 40 CFR Part 228 Ocean Dumping; Correction of Typographical Error in 2006 Federal Register Final... typographical error in the Final Rule for the Ocean Dumping; De-designation of Ocean Dredged Material Disposal... amended by revising paragraphs (n)(3) and (n)(4) to read as follows: Sec. 228.15 Dumping sites...
Righting Writing: What the Social Accomplishment of Error Correction Tells about School Literacy
Davidson, Christina
2009-01-01
School literacy has been identified with specific ways of talking about texts, especially during teacher-led lessons. This paper considers school literacy through a focus on talk about error correction during a time of individual writing activity in an early years classroom. Conversation Analysis is used to develop descriptions of error correction…
On the Benefit of Forward Error Correction at IEEE 802.11 Link Layer Level
Nee, van Floris; Boer, de Pieter-Tjerk
2011-01-01
This study examines the error distribution of aggregated MPDUs in 802.11n networks and whether or not forward error correction like raptor coding at the link layer would be useful in these networks. Several experiments with Qualcomm 4x4 802.11n hardware were performed. Two devices were used in a dat
Xingming Sun
2015-07-01
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military applications, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at high spatial density. A novel method, the meteorology wireless sensor network relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
Preferences of ELT Learners in the Correction of Oral Vocabulary and Pronunciation Errors
Ustaci, Hale Yayla; Ok, Selami
2014-01-01
Vocabulary is an essential component of language teaching and learning process, and correct pronunciation of lexical items is an ultimate goal for language instructors in ELT programs. Apart from how lexical items should be taught, the way teachers correct oral vocabulary errors as well as those of pronunciation in line with the preferences of…
Quantum error correction of continuous-variable states against Gaussian noise
Ralph, T. C. [Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072 (Australia)
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
Did I say dog or cat? A study of semantic error detection and correction in children.
Hanley, J Richard; Cortis, Cathleen; Budd, Mary-Jane; Nozari, Nazbanou
2016-02-01
Although naturalistic studies of spontaneous speech suggest that young children can monitor their speech, the mechanisms for detection and correction of speech errors in children are not well understood. In particular, there is little research on monitoring semantic errors in this population. This study provides a systematic investigation of detection and correction of semantic errors in children between the ages of 5 and 8 years as they produced sentences to describe simple visual events involving nine highly familiar animals (the moving animals task). Results showed that older children made fewer errors and corrected a larger proportion of the errors that they made than younger children. We then tested the prediction of a production-based account of error monitoring that the strength of the language production system, and specifically its semantic-lexical component, should be correlated with the ability to detect and repair semantic errors. Strength of semantic-lexical mapping, as well as lexical-phonological mapping, was estimated individually for children by fitting their error patterns, obtained from an independent picture-naming task, to a computational model of language production. Children's picture-naming performance was predictive of their ability to monitor their semantic errors above and beyond age. This relationship was specific to the strength of the semantic-lexical part of the system, as predicted by the production-based monitor.
Analysis and correction of the machining errors of small plastic helical gears by ball-end milling
Gao Sande; Huang Loulin; Han Baoling
2012-01-01
Many small-size precise plastic helical involute gears are used in electrical appliances to transmit rotary movements continuously and smoothly. Ball-end milling is an effective method for trial manufacture or small-batch production of this type of gear, but the precision of the gear is usually low. In this research, the main sources of the errors of the gear, namely machining errors of the tooth profile and trace of the gear obtained, were analyzed. The correction amounts for these errors are then determined by using a CNC gear tester. They are used to generate a new 3D-CAD model for gear machining with better precision.
Extension of Knuth's balancing algorithm with error correction
Weber, J.H.; Schouhamer Immink, K.A.; Ferreira, H.C.
2011-01-01
Knuth's celebrated balancing method consists of inverting the first z bits in a binary information sequence, such that the resulting sequence has as many ones as zeroes, and communicating the index z to the receiver through a short balanced prefix. In the proposed method, Knuth's scheme is extended
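The balancing step of Knuth's scheme described above can be sketched directly. This brute-force version simply scans for the inversion index z; Knuth's result guarantees that one exists for every even-length binary word.

```python
# Sketch of the core of Knuth's balancing method: for an even-length binary
# word there is always an index z such that inverting the first z bits
# yields a word with equal numbers of ones and zeroes.

def knuth_balance(bits):
    """Return (z, balanced_word) for an even-length list of 0/1 bits."""
    n = len(bits)
    for z in range(n + 1):
        candidate = [1 - b for b in bits[:z]] + bits[z:]
        if sum(candidate) == n // 2:
            return z, candidate
    raise ValueError("no balancing index found (n must be even)")

z, balanced = knuth_balance([1, 1, 1, 1, 0, 1])
assert sum(balanced) == 3  # as many ones as zeroes
```

The index z is then communicated to the receiver through a short balanced prefix; the extension proposed in this record adds error correction on top of that scheme.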
Wavelet based error correction and predictive uncertainty of a hydrological forecasting system
Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad
2010-05-01
River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the range of scales for the error that occurs in springtime will be caused by long-lasting snowmelt processes, and is far larger than the error that appears during the summer period and is caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of wavelet decomposition simultaneously and, after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor has been applied to the error-corrected discharge series first, in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. The main objective of this Bayesian forecasting system ...
Hindasageri, V; Vedula, R P; Prabhu, S V
2013-02-01
Temperature measurement by thermocouples is prone to errors due to conduction and radiation losses and therefore has to be corrected for precise measurement. The temperature dependent emissivity of the thermocouple wires is measured by the use of thermal infrared camera. The measured emissivities are found to be 20%-40% lower than the theoretical values predicted from theory of electromagnetism. A transient technique is employed for finding the heat transfer coefficients for the lead wire and the bead of the thermocouple. This method does not require the data of thermal properties and velocity of the burnt gases. The heat transfer coefficients obtained from the present method have an average deviation of 20% from the available heat transfer correlations in literature for non-reacting convective flow over cylinders and spheres. The parametric study of thermocouple error using the numerical code confirmed the existence of a minimum wire length beyond which the conduction loss is a constant minimal. Temperature of premixed methane-air flames stabilised on 16 mm diameter tube burner is measured by three B-type thermocouples of wire diameters: 0.15 mm, 0.30 mm, and 0.60 mm. The measurements are made at three distances from the burner tip (thermocouple tip to burner tip/burner diameter = 2, 4, and 6) at an equivalence ratio of 1 for the tube Reynolds number varying from 1000 to 2200. These measured flame temperatures are corrected by the present numerical procedure, the multi-element method, and the extrapolation method. The flame temperatures estimated by the two-element method and extrapolation method deviate from numerical results within 2.5% and 4%, respectively.
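The radiation part of the correction discussed above can be illustrated with a first-order steady-state energy balance on the bead, h(T_gas − T_tc) = εσ(T_tc⁴ − T_surr⁴). Conduction is neglected here, so this is not the paper's full multi-element procedure, and all numerical values below are illustrative rather than taken from the measurements.

```python
# First-order radiation-loss correction for a thermocouple bead (conduction
# neglected), from the steady-state energy balance
#   h * (T_gas - T_tc) = eps * sigma * (T_tc**4 - T_surr**4)

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def corrected_gas_temperature(t_tc, t_surr, emissivity, h):
    """Gas temperature (K) recovered from the measured bead temperature (K)."""
    return t_tc + emissivity * SIGMA * (t_tc**4 - t_surr**4) / h

# Illustrative numbers: hot flame, cool surroundings, modest emissivity
t_gas = corrected_gas_temperature(t_tc=1600.0, t_surr=300.0,
                                  emissivity=0.3, h=400.0)
```

The correction grows with the fourth power of the bead temperature, which is why accurate emissivity and heat transfer coefficient data, as obtained in this work, matter most for flame measurements.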
Correcting for GPS Multipath Error in LIDAR Surveys Using Crossover Analysis
Borsa, A. A.; Bills, B. G.; Fricker, H. A.; Minster, J. B.
2003-12-01
The quality of the range measurement from an airborne Light Detection and Ranging (LIDAR) survey is largely dependent on the accuracy of the GPS trajectory for the aircraft. GPS elevation error - which today is largely due to multipath effects at the aircraft and the GPS base station - contributes a major portion of the LIDAR vertical error budget. The usual practice of quoting an RMS value for the GPS component of the error budget implies that GPS noise is Gaussian, yet the true nature of the noise signal is time-varying with significant power at long periods. GPS noise with a 3-cm RMS can easily have more than 10 cm of total variability on a time scale of tens of minutes to several hours. We show examples from an airborne LIDAR survey over the open-pit Hector Mine where repeated flyovers of an area used for ground truth revealed large elevation biases between passes that could not be resolved by adjusting the (non-GPS) parameters of the LIDAR system. As part of the post-processing of a large kinematic GPS survey of the salar de Uyuni, Bolivia, we have developed an algorithm to correct time-varying GPS error using elevation mismatches at crossovers between vehicle paths. The survey was originally designed to incorporate a large number of crossovers for the purpose of determining survey repeatability, and we were later able to exploit the crossover difference observations to solve for a model of the actual error signal generating those differences. We give results from tests with synthetic noise and topography data indicating that this method removes more than two-thirds of the added noise from the topographic signal, and we show the excellent results obtained for the salar de Uyuni survey data. We believe that airborne LIDAR surveys incorporating crossovers at regular intervals can also benefit from the application of this algorithm.
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches in which applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBVs), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients, derived in the fit of pixel time series to the CBV, as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series that is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
OCR Context-Sensitive Error Correction Based on Google Web 1T 5-Gram Data Set
Bassil, Youssef
2012-01-01
Since the dawn of the computing era, information has been represented digitally so that it can be processed by electronic computers. Paper books and documents were abundant and being widely published at that time; hence, there was a need to convert them into digital format. OCR, short for Optical Character Recognition, was conceived to translate paper-based books into digital e-books. Regrettably, OCR systems are still erroneous and inaccurate, as they produce misspellings in the recognized text, especially when the source document is of low printing quality. This paper proposes a post-processing OCR context-sensitive error correction method for detecting and correcting non-word and real-word OCR errors. The cornerstone of the proposed approach is the use of the Google Web 1T 5-gram data set as a dictionary of words to spell-check OCR text. The Google data set incorporates a very large vocabulary and word statistics entirely reaped from the Internet, making it a reliable source to perform dictionary-based erro...
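The dictionary-plus-context idea can be illustrated in miniature. In this sketch a tiny in-memory bigram table stands in for the Google Web 1T data set, the candidate generator handles substitutions only, and none of the names or counts come from the paper.

```python
# Illustrative sketch of dictionary-based, context-sensitive OCR correction.
# NGRAM_COUNTS and VOCAB are hypothetical stand-ins for the Web 1T data.
NGRAM_COUNTS = {("the", "cat"): 900, ("the", "cot"): 30, ("a", "cot"): 200}
VOCAB = {"the", "a", "cat", "cot", "sat"}

def edits1(word):
    """Candidate words at edit distance <= 1 (substitutions only, for brevity)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    subs = {left + c + right[1:] for left, right in splits if right for c in letters}
    return subs | {word}

def correct(prev_word, word):
    # Keep in-vocabulary candidates, rank by context (bigram) frequency.
    candidates = edits1(word) & VOCAB
    return max(candidates, key=lambda w: NGRAM_COUNTS.get((prev_word, w), 0))
```

With a real 5-gram set the ranking would use longer contexts on both sides of the error, which is what makes real-word errors (valid words wrong in context) detectable.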
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc
2012-01-01
…and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.
2011-01-01
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…
Serialized quantum error correction protocol for high-bandwidth quantum repeaters
Glaudell, A. N.; Waks, E.; Taylor, J. M.
2016-09-01
Advances in single-photon creation, transmission, and detection suggest that sending quantum information over optical fibers may have losses low enough to be correctable using a quantum error correcting code (QECC). Such error-corrected communication is equivalent to a novel quantum repeater scheme, but crucial questions regarding implementation and system requirements remain open. Here we show that long-range entangled bit generation with rates approaching 10^8 entangled bits per second may be possible using a completely serialized protocol, in which photons are generated, entangled, and error corrected via sequential, one-way interactions with as few matter qubits as possible. Provided loss and error rates of the required elements are below the threshold for quantum error correction, this scheme demonstrates improved performance over transmission of single photons. We find improvement in entangled bit rates at large distances using this serial protocol and various QECCs. In particular, at a total distance of 500 km with fiber loss rates of 0.3 dB/km, logical gate failure probabilities of 10^-5, photon creation and measurement error rates of 10^-5, and a gate speed of 80 ps, we find maximum single-repeater-chain entangled bit rates of 51 Hz at a 20 m node spacing and 190 000 Hz at a 43 m node spacing for the {[[3,1,2
Quantum Error Correction and the Future of Solid State Quantum Computing
DiVincenzo, David
Quantum error correction (QEC) theory has provided a very challenging but well defined goal for the further development of solid state qubit systems: achieve high enough fidelity so that fault-tolerant, error-corrected quantum computation in networks of these qubits becomes possible. I will begin by touching on some historical points: initial work on QEC is actually more than 20 years old, and the landmark work of Kitaev in 1996, which established 2D lattice structures as a suitable host for effective error correction, has its roots in theoretical work on many-body theory by Wegner in the 1970s. I will give some perspective on current developments in the implementation of small fragments of the surface code. The surface-code concept has driven a number of distinct requirements, beyond the reduction of error rates below the 1% range, that are actively considered as experiments are scaled beyond the 10-qubit level. Support of JARA FIT is acknowledged.
NxRepair: error correction in de novo sequence assembly using Nextera mate pairs.
Murphy, Rebecca R; O'Connell, Jared; Cox, Anthony J; Schulz-Trieglaff, Ole
2015-01-01
Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.
Local concurrent error detection and correction in data structures using virtual backpointers
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time, and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described to determine the effect on the mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared-memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
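The O(1) local check during a forward move can be illustrated with an ordinary doubly linked list. The paper's virtual backpointers are computed rather than stored, so the explicit `prev` field here is only a stand-in for that mechanism.

```python
# Minimal sketch of O(1)-per-step local error detection during a forward
# move in a doubly linked list: each step checks that the successor points
# back to the current node before moving on.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

def detect_forward(head):
    """Walk forward; return the index of the first inconsistent link, or None."""
    node, index = head, 0
    while node.next is not None:
        if node.next.prev is not node:  # O(1) local consistency check
            return index
        node, index = node.next, index + 1
    return None  # no error detected
```

Because the check is local, a concurrent auditor process can perform it incrementally as it traverses the structure, which is the usage mode the abstract analyzes.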
Error Field Correction in DIII-D Ohmic Plasmas With Either Handedness
Park, Jong-Kyu; Schaffer, Michael J.; La Haye, Robert J.; Scoville, Timothy J.; Menard, Jonathon E.
2011-05-16
Error field correction results in DIII-D plasmas are presented for various configurations. In both left-handed and right-handed plasma configurations, where the intrinsic error fields differ due to the opposite helical twist (handedness) of the magnetic field, the optimal error correction currents and the toroidal phases of the internal (I)-coils are empirically established. Applications of the Ideal Perturbed Equilibrium Code to these results demonstrate that the field component to be minimized is not the resonant component of the external field, but the total field including ideal plasma responses. Consistency between experiment and theory has been greatly improved along with the understanding of ideal plasma responses, but non-ideal plasma responses still need to be understood to achieve reliable predictability in tokamak error field correction.
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
Hu, Hao; Andersen, Jakob Dahl; Rasmussen, Anders
2013-01-01
We build a forward error correction (FEC) module and implement it in an optical signal processing experiment. The experiment consists of two cascaded nonlinear optical signal processes: 160 Gbit/s all-optical wavelength conversion based on cross phase modulation (XPM) in a silicon nanowire, and subsequent 160 Gbit/s-to-10 Gbit/s demultiplexing in a highly nonlinear fiber (HNLF). The XPM-based all-optical wavelength conversion in silicon is achieved by off-center filtering of the red-shifted sideband on the CW probe. We thoroughly demonstrate and verify that the FEC code operates correctly after the optical signal processing, yielding truly error-free 150 Gbit/s (excl. overhead) optically signal processed data after the two cascaded nonlinear processes. © 2013 Optical Society of America.
Nugent, William R.
1987-01-01
We examine the principal systems of Error Detection and Correction (EDAC) which have been recently proposed as U.S. standards for optical disks, discuss the two principal methodologies employed, Reed-Solomon codes and product codes, and describe the variations in their operating characteristics and their overhead in disk space. We then present current knowledge of the nature of defect distributions on optical media, including bit error rates, the incidence and extents of clustered errors and burst errors, and the controversial aspects of correlation between these forms of error. We show that if such forms are correlated then stronger EDAC systems are needed than if they are not. We discuss the nature of defect growth over time and its likely causes, and present the differing views on the growth of burst errors, including nucleation and incubation effects which are not detectable in new media. We exhibit a mathematical model of a currently proposed end-of-life defect distribution for write-once media and discuss its implications for EDAC selection. We show that standardization of an EDAC system unifies the data recording process and facilitates data interchange, but that enhancements in EDAC computation during reading can achieve higher than normal EDAC performance, though sometimes at the expense of decoding time. Finally, we examine vendor estimates of disk longevity and possible means of life extension where archival recording is desired.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
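For reference, the binomial code words described above take the following form for spacing S and order N; this is the standard published construction, not a formula quoted from this abstract.

```latex
% Binomial code words: finite superpositions of Fock states |p(S+1)>
% weighted by square roots of binomial coefficients; the even/odd parity
% of p distinguishes the two logical states.
|W_{\uparrow/\downarrow}\rangle \;=\; \frac{1}{\sqrt{2^{N}}}
\sum_{\substack{p=0 \\ p\ \mathrm{even/odd}}}^{N+1}
\sqrt{\binom{N+1}{p}}\; \bigl|\, p\,(S+1) \bigr\rangle
```

The spacing S+1 between occupied Fock levels is what lets generalized number-parity measurements detect up to S boson-loss events without disturbing the logical information.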
High Speed Versatile Reed-Solomon Decoder for Correcting Errors and Erasures
WANG Hua; FAN Guang-rong; WANG Ping-qin; KUANG Jing-ming
2008-01-01
A new Chien search method for shortened Reed-Solomon (RS) codes is proposed; based on this, a versatile RS decoder for correcting both errors and erasures is designed. Compared with a traditional RS decoder, the weighted coefficient of the Chien search method is calculated sequentially through the three pipelined stages of the decoder, and therefore the computation of the errata locator polynomial and errata evaluator polynomial needs to be modified. The versatile RS decoder with minimum distance 21 has been synthesized for the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can reach 1.3 Gbit/s.
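The Chien search itself is standard: evaluate the error locator polynomial at alpha^(-i) for every codeword position and flag the zeros. A toy sketch over GF(2^4) follows; it is not the paper's shortened-code variant or its pipelined weighted-coefficient scheme.

```python
# Toy Chien search over GF(2^4) with primitive polynomial x^4 + x + 1.
# EXP/LOG tables implement field multiplication; the locator polynomial
# Lambda(x) vanishes at alpha^{-i} exactly when position i is in error.
PRIM = 0b10011  # x^4 + x + 1
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x  # doubled table avoids modular reduction of logs
    LOG[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(locator, n=15):
    """Return positions i in [0, n) where Lambda(alpha^{-i}) == 0."""
    errors = []
    for i in range(n):
        xi = EXP[(-i) % 15]               # alpha^{-i}
        acc, power = 0, 1
        for coeff in locator:             # evaluate sum of coeff * xi^k
            acc ^= gf_mul(coeff, power)
            power = gf_mul(power, xi)
        if acc == 0:
            errors.append(i)
    return errors
```

A hardware decoder evaluates all n positions in parallel with one constant multiplier per coefficient; the sequential loop here is only for clarity.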
Methodology for bus layout for topological quantum error correcting codes
Wosnitzka, Martin; Pedrocchi, Fabio L.; DiVincenzo, David P. [RWTH Aachen University, JARA Institute for Quantum Information, Aachen (Germany)
2016-12-15
Most quantum computing architectures can be realized as two-dimensional lattices of qubits that interact with each other. We take transmon qubits and transmission line resonators as promising candidates for qubits and couplers, and use them as the basic building elements of a quantum code. We then propose a simple framework to determine the optimal experimental layout to realize quantum codes. We show that this engineering optimization problem can be reduced to the solution of standard binary linear programs. While solving such programs is an NP-hard problem, we propose a way to find scalable optimal architectures that requires solving the linear program for only a restricted number of qubits and couplers. We apply our methods to two celebrated quantum codes, namely the surface code and the Fibonacci code. (orig.)
Beex-Oosterhuis, Marieke M; de Vogel, Ed M; van der Sijs, Heleen; Dieleman, Hetty G; van den Bemt, Patricia M L A
2013-12-01
Hospital pharmacists and pharmacy technicians play a major role in detecting prescribing errors through medication surveillance. At present the frequency of detected and correctly handled prescribing errors is unclear, as are the factors associated with correct handling. The aim was to examine the frequency of detection of prescribing errors and the frequency of correct handling, as well as factors associated with correct handling of prescribing errors by hospital pharmacists and pharmacy technicians. This study was conducted in 57 Dutch hospital pharmacies as a prospective observational study with test patients, using a case-control design to identify factors associated with correct handling. A questionnaire was used to collect the potential factors. Test patients containing prescribing errors were developed by an expert panel of hospital pharmacists (a total of 40 errors in nine medication records divided among three test patients; each test patient was used in 3 rounds; on average 4.5 prescribing errors per patient per round). Prescribing errors were defined as dosing errors or therapeutic errors (contra-indication, drug-drug interaction, (pseudo)duplicate medication). The errors were selected for relevance and unequivocalness. The panel also defined how the errors should be handled in practice using national guidelines, and this was defined as 'correct handling'. The test patients had to be treated as real patients while conducting medication surveillance. The pharmacists and technicians were asked to report detected errors to the investigator. The percentages of detected and correctly handled prescribing errors were the main outcome measures. Factors associated with correct handling were determined using multivariate logistic regression analysis. Fifty-nine percent of the total number of intentionally added prescribing errors were detected and 57% were handled correctly by the hospital pharmacists and technicians. The use of a computer system for medication surveillance compared to no
Bias correction methods for decadal sea-surface temperature forecasts
Balachandrudu Narapusetty
2014-04-01
Two traditional bias correction techniques, (1) systematic mean correction (SMC) and (2) systematic least-squares correction (SLC), are extended and applied to sea-surface temperature (SST) decadal forecasts in the North Pacific produced by Climate Forecast System version 2 (CFSv2) to reduce large systematic biases. The bias-corrected forecast anomalies exhibit reduced root-mean-square errors and also significantly improved anomaly correlations with observations. The spatial pattern of the SST anomalies associated with the Pacific area average (PAA) index (spatial average of SST anomalies over 20°–60°N and 120°E–100°W) is improved after employing the bias correction methods, particularly SMC. Reliability diagrams show that the bias-corrected forecasts better reproduce the cold and warm events well beyond the 5-yr lead times over the 10 forecasted years. The comparison between the two correction methods indicates that (1) the prediction skill of SST anomalies associated with the PAA index is improved by SMC with respect to SLC and (2) SMC-derived forecasts have a slightly higher reliability than those corrected by SLC.
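A minimal sketch of the SMC step, assuming the usual hindcast convention of removing a lead-time-dependent mean bias; the array shapes and the absence of cross-validation are simplifications, not details from the paper.

```python
import numpy as np

# Systematic mean correction (SMC) sketch: subtract the mean difference
# between forecasts and observations, computed separately per lead time
# across all hindcast start dates.
def smc(forecasts, observations):
    """forecasts, observations: arrays of shape (n_starts, n_leads)."""
    bias = (forecasts - observations).mean(axis=0)  # mean bias per lead time
    return forecasts - bias
```

SLC would extend this by fitting a least-squares relation (e.g. slope and intercept per lead time) rather than a constant offset, which is the distinction the abstract's comparison turns on.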
Quantifying geocode location error using GIS methods
Gardner Bennett R
2007-04-01
Background: The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods: We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessors' offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results: Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
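The displacement step of the simulation can be sketched directly from its description. The error distribution below is invented for illustration, and the subsequent tract assignment (a GIS overlay) is omitted.

```python
import math
import random

# Sketch of one simulation draw: displace a geocode by a distance sampled
# from the observed location-error distribution, at a uniformly random
# angle (the study found the angle distribution approximately uniform).
def displace(x, y, observed_errors, rng=random):
    distance = rng.choice(observed_errors)
    angle = rng.uniform(0.0, 2.0 * math.pi)
    return x + distance * math.cos(angle), y + distance * math.sin(angle)
```

Repeating this for each geocode and checking whether the displaced point falls in a different census tract yields the misclassification percentage the study reports.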
SimCommSys: taking the errors out of error-correcting code simulations
Johann A. Briffa
2014-06-01
In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
Quantum error correction in a solid-state hybrid spin register.
Waldherr, G; Wang, Y; Zaiser, S; Jamali, M; Schulte-Herbrüggen, T; Abe, H; Ohshima, T; Isoya, J; Du, J F; Neumann, P; Wrachtrup, J
2014-02-13
Error correction is important in classical and quantum computation. Decoherence caused by the inevitable interaction of quantum bits with their environment leads to dephasing or even relaxation. Correction of the concomitant errors is therefore a fundamental requirement for scalable quantum computation. Although algorithms for error correction have been known for some time, experimental realizations are scarce. Here we show quantum error correction in a heterogeneous, solid-state spin system. We demonstrate that joint initialization, projective readout and fast local and non-local gate operations can all be achieved in diamond spin systems, even under ambient conditions. High-fidelity initialization of a whole spin register (99 per cent) and single-shot readout of multiple individual nuclear spins are achieved by using the ancillary electron spin of a nitrogen-vacancy defect. Implementation of a novel non-local gate generic to our electron-nuclear quantum register allows the preparation of entangled states of three nuclear spins, with fidelities exceeding 85 per cent. With these techniques, we demonstrate three-qubit phase-flip error correction. Using optimal control, all of the above operations achieve fidelities approaching those needed for fault-tolerant quantum operation, thus paving the way to large-scale quantum computation. Besides their use with diamond spin systems, our techniques can be used to improve scaling of quantum networks relying on phosphorus in silicon, quantum dots, silicon carbide or rare-earth ions in solids.
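The three-qubit phase-flip code demonstrated in this work is textbook material; for reference (a standard statement, not quoted from the paper):

```latex
% Three-qubit phase-flip code: logical states, stabilizers, and syndromes.
\begin{aligned}
|\bar{0}\rangle &= |{+}{+}{+}\rangle, \qquad
|\bar{1}\rangle = |{-}{-}{-}\rangle, \qquad
|\pm\rangle = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle \pm |1\rangle\bigr),\\
S_1 &= X_1 X_2, \qquad S_2 = X_2 X_3 .
\end{aligned}
```

A single phase flip Z_i anticommutes with a distinct subset of S_1, S_2, so measuring the two stabilizers yields a syndrome that identifies which qubit to correct; in the experiment the ancillary electron spin plays the role of the syndrome readout.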
Improving transcriptome assembly through error correction of high-throughput sequence reads.
Macmanes, Matthew D; Eisen, Michael B
2013-01-01
The study of functional genomics, particularly in non-model organisms, has been dramatically improved over the last few years by the use of transcriptomes and RNAseq. While these studies are potentially extremely powerful, a computationally intensive procedure, the de novo construction of a reference transcriptome, must be completed as a prerequisite to further analyses. An accurate reference is critically important, as all downstream steps, including estimating transcript abundance, depend on it. Though a substantial amount of research has been done on assembly, only recently have the pre-assembly procedures been studied in detail. Specifically, several stand-alone error correction modules have been reported on and, while they have been shown to be effective in reducing errors at the level of sequencing reads, how error correction impacts assembly accuracy is largely unknown. Here, we show, via use of simulated and empirical datasets, that applying error correction to sequencing reads has significant positive effects on assembly accuracy and should be applied to all datasets. A complete collection of commands which will allow for the production of Reptile-corrected reads is available at https://github.com/macmanes/error_correction/tree/master/scripts and as File S1.
Santaguida, Stefano; Vernieri, Claudio; Villa, Fabrizio; Ciliberto, Andrea; Musacchio, Andrea
2011-04-20
Fidelity of chromosome segregation is ensured by a tension-dependent error correction system that prevents stabilization of incorrect chromosome-microtubule attachments. Unattached or incorrectly attached chromosomes also activate the spindle assembly checkpoint, thus delaying mitotic exit until all chromosomes are bioriented. The Aurora B kinase is widely recognized as a component of error correction. Conversely, its role in the checkpoint is controversial. Here, we report an analysis of the role of Aurora B in the spindle checkpoint under conditions believed to uncouple the effects of Aurora B inhibition on the checkpoint from those on error correction. Partial inhibition of several checkpoint and kinetochore components, including Mps1 and Ndc80, strongly synergizes with inhibition of Aurora B activity and dramatically affects the ability of cells to arrest in mitosis in the presence of spindle poisons. Thus, Aurora B might contribute to spindle checkpoint signalling independently of error correction. Our results support a model in which Aurora B is at the apex of a signalling pyramid whose sensory apparatus promotes the concomitant activation of error correction and checkpoint signalling pathways.
Bulk Locality and Quantum Error Correction in AdS/CFT
Almheiri, Ahmed; Harlow, Daniel
2014-01-01
We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether or not the quantum error correcting code realized by AdS/CFT is also a "quantum secret sharing scheme", and suggest a tensor network calculation that may settle the issue. Interestingly, the version of quantum error correction which is best suited to our analysis is the somewhat nonstandard "operator algebra quantum error correction" of Beny, Kempf, and Kribs. Our proposal gives a precise formulation of the idea of "subregion-subregion" duality in AdS/CFT, and clarifies the limits of its validity.
Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.
2009-01-01
Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
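The complementary idea, predicting the physically-based model's error with a data-driven model and adding that prediction back, can be sketched with plain least squares standing in for the paper's DDM; the feature choice and function names are assumptions for illustration.

```python
import numpy as np

# Complementary error-correction sketch: fit a simple linear model to the
# PBM's head residuals, then correct new predictions by adding the
# predicted error back onto the PBM output.
def fit_error_model(features, pbm_heads, observed_heads):
    X = np.column_stack([features, np.ones(len(features))])
    residuals = observed_heads - pbm_heads      # systematic PBM error
    coef, *_ = np.linalg.lstsq(X, residuals, rcond=None)
    return coef

def corrected_heads(features, pbm_heads, coef):
    X = np.column_stack([features, np.ones(len(features))])
    return pbm_heads + X @ coef                 # PBM + predicted error
```

The key design point is that the PBM is never modified: the DDM only models what the PBM gets wrong, so physical interpretability of the PBM is preserved while its bias and error correlation are reduced.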
范晋伟; 王晓峰; 李云
2013-01-01
How to improve the machining accuracy of multi-axis CNC machine tools with the least investment is a current focus of research. Modifying the NC instructions of the machine is the most effective way to improve accuracy without hardware changes. The multi-axis CNC machine tool was analyzed with multi-body theory, and a general error model of multi-axis CNC machine tools was established. On this basis, a thorough study of rotary error was made. Taking a C-A type five-axis machine tool as an example, an iterative solution method was used to obtain the precise NC instructions for the rotary angle.
Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data
Jinhua Han
2017-01-01
This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and related instruments, used to collect the road profiles experienced by a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained by twice integrating the measured acceleration data. After building the mathematical model between gyro attitudes and road profiles, the gyro attitude signals are separated from the low-frequency road profile by a sliding-block overlap method based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road-profile measurement at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
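The frequency-division step can be illustrated with a moving-average low-pass filter; this is an assumed stand-in, since the abstract does not specify the filter type or cutoff:

```python
import numpy as np

# Sketch of "frequency division": split a measured profile into a
# low-frequency part (where gyro-attitude effects concentrate) and a
# high-frequency part, so each can be corrected separately and recombined.
def split_profile(profile, window=51):
    kernel = np.ones(window) / window
    low = np.convolve(profile, kernel, mode="same")   # simple low-pass
    high = profile - low                              # complementary high-pass
    return low, high

x = np.linspace(0, 10, 1000)
profile = np.sin(0.5 * x) + 0.1 * np.sin(40 * x)      # slow drift + roughness
low, high = split_profile(profile)
```

Recombination `low + high` reproduces the original profile exactly by construction, so corrections applied to either band cannot introduce splitting artifacts.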
Maclean, E. H.; Tomás, R.; Giovannozzi, M.; Persson, T. H. B.
2015-12-01
Nonlinear magnetic errors in low-β insertions can contribute significantly to detuning with amplitude and to linear and nonlinear chromaticity, and can lead to degradation of dynamic aperture and beam lifetime. As such, the correction of nonlinear errors in the experimental insertions of colliders can be of critical significance for successful operation. This is expected to be of particular relevance to the LHC's second run and its high-luminosity upgrade, as well as to future colliders such as the Future Circular Collider. Current correction strategies envisioned for these colliders assume it will be possible to calculate optimized local corrections through the insertions, using a magnetic model of the errors. This paper shows, however, that reliance purely upon magnetic measurements of the nonlinear errors of insertion elements is insufficient to guarantee a good correction quality in the relevant low-β* regime. It is possible to perform beam-based examination of nonlinear magnetic errors via the feed-down to readily observed beam properties upon application of closed orbit bumps, and methods based upon feed-down to tune have been utilized at RHIC, SIS18, and SPS. This paper demonstrates the extension of such methodology to include direct observation of feed-down to linear coupling in the LHC. It is further shown that such beam-based studies can be used to complement magnetic measurements performed during LHC construction, in order to validate and refine the magnetic model of the collider. Results from first attempts of the measurement and correction of nonlinear errors in the LHC experimental insertions are presented. Several discrepancies of beam-based studies with respect to the LHC magnetic model are reported.
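The feed-down mechanism underlying these beam-based measurements can be stated compactly using the standard multipole feed-down relations (symbols here are generic, not the paper's notation). A normal sextupole of integrated strength $K_2L$ traversed with a horizontal closed-orbit offset $\Delta x$ acts as a normal quadrupole, visible as a tune shift, while a vertical offset $\Delta y$ produces a skew-quadrupole term that feeds linear coupling:

```latex
\Delta K_1 L = (K_2 L)\,\Delta x , \qquad
\Delta Q_{x,y} = \pm \frac{\beta_{x,y}}{4\pi}\,\Delta K_1 L , \qquad
\Delta K_{1,\text{skew}} L = (K_2 L)\,\Delta y .
```

Scanning the bump amplitude and fitting the observed tune (or coupling) response against these relations isolates the strength of the underlying nonlinear error.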
熊瑶; 孙开键
2016-01-01
Peer assessment is one of the most important assessment methods in Massive Open Online Courses (MOOCs), especially for open-ended assignments or projects. However, for the purpose of summative evaluation, peer assessment results are generally not trusted, because peer raters, who are novices, produce more random errors and systematic biases in ratings than expert raters do, owing to their lack of content expertise and rating experience. In this paper, two major approaches designed to improve the accuracy of peer assessment results are reviewed and compared. The first approach calibrates the accuracy of individual peer raters before the actual peer assessment so that differential weights can be assigned to raters based on accuracy. The second approach remedies peer rating errors post hoc. Differences in assumptions, parameterization and estimation methods, and implementation issues are discussed. The development of methods to improve MOOC peer assessment results is still in its infancy; most of the methods reviewed here have yet to be implemented and evaluated in real-life applications. We hope the discussion and comparison of different methods in this paper will provide some theoretical and methodological background for further research into MOOC peer assessment.
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID: 23450457
Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes
Costello, D. J., Jr.; Deng, H.; Lin, S.
1984-01-01
A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to form the error-locator polynomial and solve for its roots.
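The "directly from the syndrome" idea can be illustrated for the single-error case of a distance-3 RS code over GF(2^4): with code roots α and α², a single error of value e at position j gives syndromes S₁ = e·α^j and S₂ = e·α^(2j), so j = log_α(S₂/S₁) and e = S₁²/S₂, with no error-locator polynomial needed. This is a sketch under those standard conventions, not the paper's high-speed decoder circuits.

```python
# GF(2^4) log/antilog tables, primitive polynomial x^4 + x + 1
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    LOG[v] = i
    v <<= 1
    if v & 0x10:
        v ^= 0x13

def syndromes(r):
    # S_k = sum_i r_i * alpha^(k*i) for code roots alpha^1, alpha^2
    out = []
    for k in (1, 2):
        acc = 0
        for i, ri in enumerate(r):
            if ri:
                acc ^= EXP[(LOG[ri] + k * i) % 15]
        out.append(acc)
    return out

def correct_single(r):
    # assumes at most one symbol error in the received word r
    s1, s2 = syndromes(r)
    if s1 == 0 and s2 == 0:
        return list(r)                         # zero syndrome: no error
    j = (LOG[s2] - LOG[s1]) % 15               # location: alpha^j = S2 / S1
    e = EXP[(2 * LOG[s1] + 15 - LOG[s2]) % 15] # value: e = S1^2 / S2
    r = list(r)
    r[j] ^= e
    return r
```

Both the location and the value fall out of two table look-ups per syndrome, which is why such direct decoders suit memory-speed applications.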
Asassfeh, Sahail M.
2013-01-01
Corrective feedback (CF), the implicit or explicit information learners receive indicating a gap between their current, compared to the desired, performance, has been an area of interest for EFL researchers during the last few decades. This study, conducted on 139 English-major prospective EFL teachers, assessed the impact of two CF types…
Khalifa MA
2012-12-01
Mounir A Khalifa,1,2 Waleed A Allam,1,2 Mohamed S Shaheen2,3 — 1Ophthalmology Department, Tanta University Eye Hospital, Tanta, Egypt; 2Horus Vision Correction Center, Alexandria, Egypt; 3Ophthalmology Department, Alexandria University, Alexandria, Egypt. Purpose: To investigate the efficacy and predictability of wavefront-guided laser in situ keratomileusis (LASIK) treatments using the iris registration (IR) technology for the correction of refractive errors in patients with large pupils. Setting: Horus Vision Correction Center, Alexandria, Egypt. Methods: Prospective noncomparative study including a total of 52 eyes of 30 consecutive laser refractive correction candidates with large mesopic pupil diameters and myopia or myopic astigmatism. Wavefront-guided LASIK was performed in all cases using the VISX STAR S4 IR excimer laser platform. Visual, refractive, aberrometric and mesopic contrast sensitivity (CS) outcomes were evaluated during a 6-month follow-up. Results: Mean mesopic pupil diameter ranged from 8.0 mm to 9.4 mm. A significant improvement in uncorrected distance visual acuity (UCDVA) (P < 0.01) was found postoperatively, which was consistent with a significant refractive correction (P < 0.01). No significant change was detected in corrected distance visual acuity (CDVA) (P = 0.11). Efficacy index (the ratio of postoperative UCDVA to preoperative CDVA) and safety index (the ratio of postoperative CDVA to preoperative CDVA) were calculated. Mean efficacy and safety indices were 1.06 ± 0.33 and 1.05 ± 0.18, respectively, and 92.31% of eyes had a postoperative spherical equivalent within ±0.50 diopters (D). Manifest refractive spherical equivalent improved significantly (P < 0.05) from a preoperative level of −3.1 ± 1.6 D (range −6.6 to 0 D) to −0.1 ± 0.2 D (range −1.3 to 0.1 D) at 6 months postoperative. No significant changes were found in mesopic CS (P ≥ 0.08), except CS for three cycles/degree, which improved significantly (P = 0
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Hao, Jiangang (Fermilab; Michigan U.); Koester, Benjamin P. (Chicago U.); McKay, Timothy A. (Michigan U.); Rykoff, Eli S. (UC Santa Barbara); Rozo, Eduardo (Ohio State U.); Evrard, August (Michigan U.); Annis, James (Fermilab); Becker, Matthew (Chicago U.); Busha, Michael (KIPAC, Menlo Park; SLAC); Gerdes, David (Michigan U.); Johnston, David E. (Northwestern U.; Brookhaven)
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
Streaming Media over a Color Overlay Based on Forward Error Correction Technique
张晓瑜; 沈国斌; 李世鹏; 钟玉琢
2004-01-01
The number of clients that receive high-quality streaming video from a source is greatly limited by the application requirements, such as the high bandwidth and reliability. In this work, a method was developed to construct a color overlay, which enables clients to receive data across multiple paths, based on the forward error correction technique. The color overlay enlarges system capacity by reducing the bottlenecks and extending the bandwidth, improves reliability against node failure, and is more resilient to fluctuations of network metrics. A light-weight protocol for building the overlay is also presented. Extensive simulations were conducted and the results clearly support the claimed advantages.
Automated general temperature correction method for dielectric soil moisture sensors
Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao
2017-08-01
An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective at soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The procedure for removing rainy-day effects from SWC data was automated by incorporating a statistical inference technique with the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can eliminate temperature effects from dielectric sensor measurements successfully, even without on-site rainfall data. Furthermore, it has been found that the actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a
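A minimal version of such a correction can be sketched as follows, assuming a linear temperature sensitivity and synthetic data (the study's actual algorithm, adapted from TDR-based corrections, is more elaborate): estimate the apparent sensitivity of the reading to soil temperature from a rain-free window, then subtract that component.

```python
import numpy as np

# Synthetic rain-free record: a slow dry-down plus a diurnal temperature
# artifact that a dielectric sensor would superimpose on the true SWC.
t = np.arange(0, 10, 1 / 24)                    # 10 days, hourly samples
soil_T = 15 + 8 * np.sin(2 * np.pi * t)         # diurnal temperature cycle
true_swc = 0.30 - 0.002 * t                     # slow dry-down
measured = true_swc + 0.004 * (soil_T - 15)     # temperature artifact

# Regress the SWC anomaly on the temperature anomaly over the rain-free
# window to estimate the sensor's apparent sensitivity d(SWC)/dT.
dT = soil_T - soil_T.mean()
dS = measured - measured.mean()
k = float(dT @ dS / (dT @ dT))                  # apparent sensitivity
corrected = measured - k * dT                   # remove the diurnal artifact

err_raw = float(np.std(measured - true_swc))
err_cor = float(np.std(corrected - true_swc))
```

Restricting the regression to rain-free periods matters because rainfall produces genuine SWC jumps that would otherwise contaminate the sensitivity estimate; automating that screening is the statistical-inference step the abstract describes.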
Is a genome a codeword of an error-correcting code?
Luzinete C B Faria
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
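The codeword-membership test behind this question is just a parity check: a vector is a codeword if and only if its syndrome vanishes. A toy binary example with the (7,4) Hamming code follows; the paper uses longer cyclic Hamming codes and a DNA-to-symbol mapping, which are not reproduced here.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(bits):
    # A word is a codeword exactly when its syndrome H·x (mod 2) is zero.
    syndrome = H @ np.array(bits) % 2
    return not syndrome.any()
```

For a nonzero syndrome, its value read as a binary number even gives the position of a single flipped bit, which is what makes Hamming codes single-error-correcting.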
Ashraf, Bilal; Janss, Luc; Jensen, Just
Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq sequences short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per …). In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …
Experimental demonstration of a graph state quantum error-correction code.
Bell, B A; Herrera-Martí, D A; Tame, M S; Markham, D; Wadsworth, W J; Rarity, J G
2014-04-22
Scalable quantum computing and communication requires the protection of quantum information from the detrimental effects of decoherence and noise. Previous work tackling this problem has relied on the original circuit model for quantum computing. However, recently a family of entangled resources known as graph states has emerged as a versatile alternative for protecting quantum information. Depending on the graph's structure, errors can be detected and corrected in an efficient way using measurement-based techniques. Here we report an experimental demonstration of error correction using a graph state code. We use an all-optical setup to encode quantum information into photons representing a four-qubit graph state. We are able to reliably detect errors and correct against qubit loss. The graph we realize is setup independent, thus it could be employed in other physical settings. Our results show that graph state codes are a promising approach for achieving scalable quantum information processing.
Full-Diversity Space-Time Error Correcting Codes with Low-Complexity Receivers
Hassan, Mohamad Sayed
2011-01-01
We propose an explicit construction of full-diversity space-time block codes, under the constraint of an error correction capability. Furthermore, these codes are constructed in order to be suitable for a serial concatenation with an outer linear forward error correcting (FEC) code. We apply the binary rank criterion, and we use the threaded layering technique and an inner linear FEC code to define a space-time error-correcting code. When serially concatenated with an outer linear FEC code, a product code can be built at the receiver, and adapted iterative receiver structures can be applied. An optimized hybrid structure mixing MMSE turbo equalization and turbo product code decoding is proposed. It yields reduced complexity and enhanced performance compared to previous existing structures.
Hossein Nassaji
2011-10-01
A substantial number of studies have examined the effects of grammar correction on second language (L2) written errors. However, most of the existing research has involved unidirectional written feedback. This classroom-based study examined the effects of oral negotiation in addressing L2 written errors. Data were collected in two intermediate adult English as a second language classes. Three types of feedback were compared: nonnegotiated direct reformulation, feedback with limited negotiation (i.e., prompt + reformulation), and feedback with negotiation. The linguistic targets chosen were the two most common grammatical errors in English: articles and prepositions. The effects of feedback were measured by means of learner-specific error identification/correction tasks administered three days, and again ten days, after the treatment. The results showed an overall advantage for feedback that involved negotiation. However, a comparison of data per error type showed that the differential effects of feedback types were mainly apparent for article errors rather than preposition errors. These results suggest that while negotiated feedback may play an important role in addressing L2 written errors, the degree of its effects may differ for different linguistic targets.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
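One of the simplest mechanisms in this family can be sketched as follows (illustrative only, not the paper's implementation): a checksum per data block detects silent corruption, and a redundant copy restores the block when detection fires.

```python
import zlib
import numpy as np

# Detection: a CRC-32 checksum taken when the block is known to be good.
# Correction: a redundant copy used to restore the block on mismatch.
def checksum(block):
    return zlib.crc32(block.tobytes())

data = np.arange(1024, dtype=np.float64)   # stand-in for a Fock-matrix block
replica = data.copy()                      # redundant copy
good_sum = checksum(data)

data[100] += 1e-3                          # inject a "soft error"
if checksum(data) != good_sum:             # detection fires
    data[:] = replica                      # correction from the copy
```

For iterative solvers, cheaper alternatives exist (e.g. re-deriving a corrupted block from symmetry or recomputing it), trading storage for compute; the checksum-plus-replica version is merely the easiest to state.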
Errors in Thermographic Camera Measurement Caused by Known Heat Sources and Depth Based Correction
Mark Christian E. Manuel
2016-03-01
Thermal imaging has been shown to be a better tool for the quantitative measurement of temperature than single-spot infrared thermometers. However, thermographic cameras can encounter errors in acquiring accurate temperature measurements in the presence of other environmental heat sources. Some of these errors arise from the inability of the thermal camera to detect objects and features in the infrared domain. In this paper, the thermal image is registered against a stereo image from a Kinect system prior to depth-based correction. Experiments demonstrating the error are presented, together with the determination of the measurement errors under prior knowledge of the thermographed scene. The proposed correction scheme improves the accuracy of the thermal image through augmentation using the Kinect system.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Kevin Bradley Clark
2013-08-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
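The three-bit repetition code referred to above is the textbook example of classical error correction: each bit is tripled, and majority voting corrects any single flip per triple. A minimal sketch:

```python
def encode(bits):
    # repeat each bit three times
    return [b for b in bits for _ in range(3)]

def decode(symbols):
    # majority vote within each triple corrects a single bit flip
    return [int(sum(symbols[i:i + 3]) >= 2)
            for i in range(0, len(symbols), 3)]
```

A triple decodes incorrectly only if two or more of its three bits flip, so for independent flip probability p the per-bit error rate drops from p to roughly 3p².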
Extending the lifetime of a quantum bit with error correction in superconducting circuits
Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.
2016-08-01
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0>f and |1>f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
Practice and Effect of Appropriate Error-correction in English Teaching
REN Jing-ming; HU Rong
2002-01-01
This paper points out that, owing to interference from their native language and culture, Chinese students inevitably make errors in the process of learning English. It is important for teachers to know when and how to correct students' errors. By employing error correction skillfully and appropriately, teachers can improve English teaching and learning and develop students' self-confidence and self-esteem.
Post-error Correction in Automatic Speech Recognition Using Discourse Information
Kang, S.; Kim, J.-H.; Seo, J.
2014-01-01
Overcoming speech recognition errors in the field of human-computer interaction is important in ensuring a consistent user experience. This paper proposes a semantic-oriented post-processing approach for the correction of errors in speech recognition. The novelty of the model proposed here is that it re-ranks the n-best hypotheses of speech recognition based on the user's intention, which is analyzed from previous discourse information, while conventional automatic speech reco...
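The re-ranking idea can be illustrated in a few lines (scores and the discourse term are hypothetical; the paper derives the discourse score from the user's inferred intention in previous turns):

```python
# Re-rank an ASR n-best list by combining the recognizer's score with a
# discourse-consistency score (illustrative stand-in for the paper's model).
def rerank(nbest, discourse_score, weight=0.5):
    # nbest: list of (hypothesis, recognizer log-score)
    return sorted(nbest,
                  key=lambda h: h[1] + weight * discourse_score(h[0]),
                  reverse=True)

nbest = [("play some jazz", -1.0), ("pay some jazz", -0.8)]
# previous discourse was about music playback, so "play" is more plausible
score = lambda hyp: 1.0 if "play" in hyp else 0.0
best = rerank(nbest, score)[0][0]
```

Here the acoustically preferred hypothesis is demoted because it conflicts with the discourse context, which is exactly the correction the post-processing step is meant to make.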
Orbit error correction on the high energy beam transport line at the KHIMA accelerator system
Park, Chawon; Yim, Heejoong; Hahn, Garam; An, Dong Hyun
2016-09-01
For the treatment of various cancers and for medical research, a synchrotron-based medical machine has been developed under the Korea Heavy Ion Medical Accelerator (KHIMA) project and is scheduled to treat patients at the beginning of 2018. The KHIMA synchrotron is designed to accelerate and extract carbon ion (proton) beams with various energies from 110 to 430 MeV/u (60 to 230 MeV). Studies on the lattice design and beam optics for the High Energy Beam Transport (HEBT) line at the KHIMA accelerator system have been carried out using the WinAgile and the MAD-X codes. Because magnetic field errors and misalignments introduce deviations from the design parameters, these error sources should be treated explicitly, and the sensitivity of the machine's lattice to each individual error source should be considered. Various types of errors, both static and dynamic, have been taken into account and subsequently corrected with a dedicated correction algorithm using the MAD-X program. Based on the error analysis, an optimized correction setup was chosen, and the specifications for the correcting magnets of the HEBT lines were determined.
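The abstract does not spell out the correction algorithm; a common approach in MAD-X-style orbit correction is a least-squares (SVD) inversion of the corrector-to-monitor response matrix, sketched here with a random matrix standing in for the real lattice response:

```python
import numpy as np

# Orbit correction sketch: measure the orbit deviation at the BPMs, solve the
# response matrix R (corrector kicks -> BPM readings) in the least-squares
# sense, and apply the negated kicks.
rng = np.random.default_rng(1)
R = rng.normal(size=(10, 4))             # 4 correctors observed at 10 BPMs
true_kicks = np.array([0.3, -0.1, 0.2, 0.05])
orbit = R @ true_kicks                   # measured orbit error (noise-free)

kicks, *_ = np.linalg.lstsq(R, orbit, rcond=None)
corrected = orbit - R @ kicks            # residual orbit after correction
```

With noisy BPM readings or near-degenerate correctors, the `rcond` cutoff (equivalently, truncating small singular values) keeps the solved kicks from blowing up, which is why SVD-based correction is the standard choice.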
A Phillips curve interpretation of error-correction models of the wage and price dynamics
Harck, Søren H.
2009-01-01
This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...
Demonstration of a quantum error correction for enhanced sensitivity of photonic measurements
Cohen, L.; Pilnyak, Y.; Istrati, D.; Retzker, A.; Eisenberg, H. S.
2016-07-01
The sensitivity of classical and quantum sensing is impaired in a noisy environment. Thus, one of the main challenges facing sensing protocols is to reduce the noise while preserving the signal. State-of-the-art quantum sensing protocols that rely on dynamical decoupling achieve this goal under the restriction of long noise correlation times. We implement a proof-of-principle experiment of a protocol that recovers sensitivity by using error correction for photonic systems and does not have this restriction. The protocol uses a protected entangled qubit to correct a single error. Our results show a recovery of about 87% of the sensitivity, independent of the noise probability.
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John
2015-01-01
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be ...
Linearization and correction method for nonlinear problems
何吉欢
2002-01-01
A new perturbation-like technique called the linearization and correction method is proposed. Contrary to traditional perturbation techniques, the present theory does not assume that the solution is expressed as a power series in a small parameter. To obtain an asymptotic solution of a nonlinear system, the technique first searches for a solution of the linearized system and then adds a correction to the linearized solution. The results obtained are therefore uniformly valid for both weakly and strongly nonlinear equations.
Parsing error correction of medical phrases for semantic annotation of clinical radiology reports.
Nishimoto, Naoki; Terae, Satoshi; Uesugi, Masahito; Tanikawa, Takumi; Endou, Akira; Endoh, Akira; Ogasawara, Katsuhiko; Sakurai, Tsunetaro
2008-11-06
The purpose of this study is to develop a module for correcting errors in the output of a natural language parser. When tested with 300 CT reports, a total of 604 patterns were generated. After processing by the module, recall and precision improved from the initial 80.5% and 42.8% to 90.7% and 74.1%, respectively. This rule-based module will help health care personnel reduce the cost of manual tagging correction for corpus building.
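A rule-based post-correction module of this kind can be sketched as a list of pattern-to-replacement rules applied over tagged text. The patterns and tags below are invented for illustration; the actual 604 patterns were derived from the 300 CT reports.

```python
import re

# Each rule maps an erroneous tag pattern to its correction over a
# "word/TAG word/TAG ..." string. Rules here are hypothetical examples.
RULES = [
    (re.compile(r"\bmass/VERB\b"), "mass/NOUN"),
    (re.compile(r"\bno/DET (\w+)/VERB\b"), r"no/DET \1/NOUN"),
]

def correct_tags(tagged_sentence):
    """Apply every correction rule in order to a tagged sentence."""
    for pattern, replacement in RULES:
        tagged_sentence = pattern.sub(replacement, tagged_sentence)
    return tagged_sentence

fixed = correct_tags("a/DET mass/VERB in/ADP the/DET liver/NOUN")
```

Ordering rules from specific to general avoids a broad rule masking a narrower one.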
Study of on-machine error identification and compensation methods for micro machine tools
Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng
2016-08-01
Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installment of the workpiece, the measurement and compensation method should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating an image reconstruction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
Qiu, Weiliang; Rosner, Bernard
2010-01-01
The use of the cumulative average model to investigate the association between disease incidence and repeated measurements of exposures in medical follow-up studies dates back to the 1960s (Kahn and Dawber, J Chron Dis 19:611-620, 1966). This model takes advantage of all prior data and thus should provide a statistically more powerful test of disease-exposure associations. Measurement error in covariates is common in medical follow-up studies. Many methods have been proposed to correct for measurement error, but to the best of our knowledge none has yet been proposed for the cumulative average model. In this article, we propose a regression calibration approach to correct relative risk estimates for measurement error. The approach is illustrated with data from the Nurses' Health Study relating incident breast cancer between 1980 and 2002 to time-dependent measures of calorie-adjusted saturated fat intake, controlling for total caloric intake, alcohol intake, and baseline age.
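The regression calibration step can be sketched in its simplest linear form: from a validation sample where both the true exposure and the error-prone measurement are available, fit E[x | w] and substitute the calibrated value downstream. The numbers and the single-covariate setting are illustrative assumptions, not the authors' code or data.

```python
# Regression-calibration sketch: fit E[x | w] = a + b*w on a validation
# sample, then replace the error-prone measurement w by its calibrated
# value in the main risk model.

def fit_linear(w, x):
    """Ordinary least squares for a single predictor."""
    n = len(w)
    mw = sum(w) / n
    mx = sum(x) / n
    b = sum((wi - mw) * (xi - mx) for wi, xi in zip(w, x)) / \
        sum((wi - mw) ** 2 for wi in w)
    a = mx - b * mw
    return a, b

# Hypothetical validation sample: the measurement shifts and attenuates
# the true exposure.
w_val = [1.0, 2.0, 3.0, 4.0]
x_val = [0.9, 1.7, 2.6, 3.4]
a, b = fit_linear(w_val, x_val)

def calibrate(w):
    return a + b * w

x_hat = [calibrate(w) for w in [2.5, 3.5]]  # use in place of w downstream
```

Using the calibrated exposure in the risk model corrects the attenuation that raw measurement error induces in relative risk estimates.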
Ion beam machining error control and correction for small scale optics.
Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi
2011-09-20
Ion beam figuring (IBF) technology for small-scale optical components is discussed. Because a small removal function can be obtained in IBF, it enables computer-controlled optical surfacing technology to machine precision centimeter- or millimeter-scale optical components deterministically. When a small ion beam is used to machine small optical components, several key problems must be seriously considered, such as positioning the small ion beam on the optical surface, the material removal rate, and control of the ion beam scanning pitch on the optical surface. A small beam is more sensitive to these problems than a large one because of its small beam diameter and lower material removal rate. In this paper, we discuss these problems and their influence on machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is derived for correcting the positioning error of the ion beam, with the material removal rate estimated at a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples were performed, and the final surface errors are both smaller than λ/100 as measured by a Zygo GPI interferometer.
Ryu, D.; Crow, W. T.
2011-12-01
Streamflow forecasting in poorly gauged or ungauged catchments is very difficult, mainly because of the absence of input forcing data for forecasting models. This challenge poses a threat to human safety and industry in areas where no proper warning system is provided. Currently, a number of studies are in progress to calibrate streamflow models without relying on ground observations, as part of an effort to construct streamflow forecasting systems for ungauged catchments. Recent advances in satellite altimetry and the innovative application of optical imagery have also enabled mapping of streamflow rate and flood extent in remote areas. In addition, remotely sensed hydrological variables such as real-time satellite precipitation data, microwave soil moisture retrievals, and surface thermal infrared observations have great potential to be used as direct inputs or signature information for forecasting models. In this work, we evaluate a real-time satellite precipitation product, TRMM 3B42RT, and correct errors in the product using microwave satellite soil moisture products over 240 catchments in Australia. The error correction is made by analyzing the difference between the output soil moisture of a simple model forced by the TRMM product and the satellite retrievals of soil moisture. The real-time satellite precipitation products before and after the error correction are compared with the daily gauge-interpolated precipitation data produced by the Australian Bureau of Meteorology. The error correction improves the overall accuracy of the catchment-scale satellite precipitation, especially the root mean squared error (RMSE), correlation, and false alarm ratio (FAR); however, only a marginal improvement is observed in the probability of detection (POD). It is shown that the efficiency of the error correction is affected by the surface vegetation density and the annual precipitation of the catchments.
Liu, Yongchao; Schmidt, Bertil; Maskell, Douglas L
2011-03-29
Next-generation sequencing technologies have led to the high-throughput production of sequence data (reads) at low cost. However, these reads are significantly shorter and more error-prone than conventional Sanger shotgun reads. This poses a challenge for the de novo assembly in terms of assembly quality and scalability for large-scale short read datasets. We present DecGPU, the first parallel and distributed error correction algorithm for high-throughput short reads (HTSRs) using a hybrid combination of CUDA and MPI parallel programming models. DecGPU provides CPU-based and GPU-based versions, where the CPU-based version employs coarse-grained and fine-grained parallelism using the MPI and OpenMP parallel programming models, and the GPU-based version takes advantage of the CUDA and MPI parallel programming models and employs a hybrid CPU+GPU computing model to maximize the performance by overlapping the CPU and GPU computation. The distributed feature of our algorithm makes it feasible and flexible for the error correction of large-scale HTSR datasets. Using simulated and real datasets, our algorithm demonstrates superior performance, in terms of error correction quality and execution speed, to the existing error correction algorithms. Furthermore, when combined with Velvet and ABySS, the resulting DecGPU-Velvet and DecGPU-ABySS assemblers demonstrate the potential of our algorithm to improve de novo assembly quality for de-Bruijn-graph-based assemblers. DecGPU is publicly available open-source software, written in CUDA C++ and MPI. The experimental results suggest that DecGPU is an effective and feasible error correction algorithm to tackle the flood of short reads produced by next-generation sequencing technologies.
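The k-mer-spectrum principle behind short-read correctors such as DecGPU can be sketched on a CPU in a few lines: k-mers seen fewer than a cutoff number of times are presumed erroneous, and a base is flipped if that turns every covering k-mer into a trusted one. The toy parameters and greedy single-error search below are illustrative, not DecGPU's CUDA/MPI implementation.

```python
from collections import Counter

def count_kmers(reads, k):
    """Build the k-mer spectrum of a read set."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, cutoff=2):
    """Flip one base if doing so makes every covering k-mer trusted."""
    for i, base in enumerate(read):
        for alt in "ACGT":
            if alt == base:
                continue
            cand = read[:i] + alt + read[i + 1:]
            kmers = (cand[j:j + k] for j in range(len(cand) - k + 1))
            if all(counts[km] >= cutoff for km in kmers):
                return cand
    return read  # no single-base fix found

reads = ["ACGTACGT"] * 3 + ["ACGTACCT"]   # last read carries one error
counts = count_kmers(reads, k=4)
fixed = correct_read("ACGTACCT", counts, k=4)
```

Production correctors parallelize the spectrum construction and candidate testing, which is where the GPU and distributed-memory layers of DecGPU come in.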
Wang, Huiliang
2015-01-01
To increase quality, reduce heavy-duty gear noise, and avoid edge contact in the manufacture of helical gears, a closed-loop feedback correction method for topographic tooth-flank modification is proposed based on gear form grinding. Equations for the grinding wheel profile and the grinding wheel's additional radial motion are derived according to segmented tooth profile modification and longitudinal modification. Combined with the kinematic principles of gear form grinding, the equations of motion for each axis of a five-axis computer numerical control form grinding machine are established. This topographic modification is achieved in gear form grinding with on-machine measurement. Based on a sensitivity analysis of the polynomial coefficients of axis motion with respect to the topographic flank errors found by on-machine measurement, the corrections are determined through an optimization process that targets minimization of the tooth flank errors. A numerical example of gear grinding, including on-machine measurement and the complete closed-loop feedback correction process, is presented. The validity of this flank correction method is demonstrated by the reduction in tooth flank errors. The approach is also useful for the precision manufacturing of spiral bevel and hypoid gears.
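The closed-loop idea itself can be sketched with a damped feedback update: each cycle, the measured flank error is scaled by a gain and subtracted from the modification coefficient. The simulated grinding process, ideal coefficient, and gain below are illustrative assumptions, not machine data.

```python
# Closed-loop feedback correction sketch with simulated on-machine
# measurement: the flank error shrinks each cycle as the coefficient
# converges toward its (unknown) ideal value.

def simulate_grinding(coeff, ideal=0.130):
    """Stand-in for grinding plus measurement: flank error grows
    linearly with the coefficient's deviation from the ideal."""
    return 0.9 * (coeff - ideal)

def closed_loop(coeff, gain=0.8, cycles=5):
    errors = []
    for _ in range(cycles):
        e = simulate_grinding(coeff)   # grind, then measure on-machine
        errors.append(abs(e))
        coeff -= gain * e              # feedback update toward zero error
    return coeff, errors

coeff, errors = closed_loop(0.200)
```

With a gain below 1/sensitivity the loop contracts geometrically; the real method solves for many polynomial coefficients at once via the measured sensitivity matrix rather than one scalar gain.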