BEAM-BASED NON-LINEAR OPTICS CORRECTIONS IN COLLIDERS
International Nuclear Information System (INIS)
PILAT, R.; LUO, Y.; MALITSKY, N.; PTITSYN, V.
2005-01-01
A method has been developed to operationally measure and correct the non-linear effects of the final-focusing magnets in colliders. It gives access to the effects of multipole errors by applying closed-orbit bumps and analyzing the resulting tune and orbit shifts. The technique has been tested and used during four years of operations at RHIC (the Relativistic Heavy Ion Collider at BNL). We discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method as compared with other non-linear correction techniques.
BEAM-BASED NON-LINEAR OPTICS CORRECTIONS IN COLLIDERS.
Energy Technology Data Exchange (ETDEWEB)
PILAT, R.; LUO, Y.; MALITSKY, N.; PTITSYN, V.
2005-05-16
A method has been developed to operationally measure and correct the non-linear effects of the final-focusing magnets in colliders. It gives access to the effects of multipole errors by applying closed-orbit bumps and analyzing the resulting tune and orbit shifts. The technique has been tested and used during four years of operations at RHIC (the Relativistic Heavy Ion Collider at BNL). We discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method as compared with other non-linear correction techniques.
Non linear field correction effects on the dynamic aperture of the FCC-hh
AUTHOR|(INSPIRE)INSPIRE-00361058; Seryi, Andrei; Maclean, Ewen Hamish; Martin, Roman; Tomas Garcia, Rogelio
2017-01-01
The Future Circular Collider (FCC) design study aims to develop the designs of possible circular colliders for the post-LHC era. In particular, the FCC-hh will aim to produce proton-proton collisions at a center-of-mass energy of 100 TeV. Given the large beta functions and the integrated length of the quadrupoles of the final-focus triplet, systematic and random non-linear errors in the magnets are expected to have a severe impact on the stability of the beam. Following the experience with the HL-LHC, this work explores the implementation of non-linear correctors to minimize the resonance driving terms arising from the errors of the triplet. Dynamic aperture studies are then performed to assess the impact of this correction.
International Nuclear Information System (INIS)
Monteiro, P.R.B.
1983-01-01
It is shown that an analog-to-digital converter of the successive-approximation type may be used in nuclear spectroscopy work provided its differential non-linearity is suitably corrected. The function of an analog-to-digital converter in a nuclear data acquisition system is described, and the main parameters which characterize this function are defined. A comparative study of the two types of A/D converters, the Wilkinson type and the successive-approximation type, has been carried out. It is concluded that the latter type of converter is more convenient once its differential non-linearity has been corrected. The source of the differential non-linearity error is analysed both qualitatively and quantitatively, and the design and construction of a corrector circuit is described which uses the sliding-scale method. The experimental results show that the differential non-linearity error is reduced to less than 1%. (Author)
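The sliding-scale method this record describes is easy to demonstrate in simulation. The sketch below is illustrative only (the ADC model and all parameters are assumptions, not taken from the record): a random offset is added before conversion and subtracted digitally afterwards, so each analog value is spread over many code bins and the differential non-linearity averages out.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 8
N_CODES = 2 ** N_BITS

# Model an ADC with differential non-linearity: non-uniform code bin widths.
widths = 1.0 + 0.2 * rng.standard_normal(N_CODES)  # ideal width = 1 LSB
edges = np.concatenate([[0.0], np.cumsum(widths)])

def spectrum(use_sliding_scale, n_events=200_000, offset_range=64):
    """Histogram of a flat analog input, with or without the correction."""
    v = rng.uniform(0, N_CODES - offset_range, n_events)
    if use_sliding_scale:
        # Random DAC offset added before conversion, subtracted digitally after.
        k = rng.integers(0, offset_range, n_events)
    else:
        k = np.zeros(n_events, dtype=int)
    codes = np.searchsorted(edges, v + k, side="right") - 1
    codes = np.clip(codes - k, 0, N_CODES - 1)
    return np.bincount(codes, minlength=N_CODES)

def dnl_spread(counts):
    """Relative spread of channel contents: a proxy for differential non-linearity."""
    used = counts[8 : N_CODES - 72]  # stay away from the edges of the used range
    return used.std() / used.mean()

raw = dnl_spread(spectrum(False))
corrected = dnl_spread(spectrum(True))
```

With a flat input spectrum, the uncorrected histogram reproduces the ±20% bin-width variation, while the sliding scale averages each event over 64 bins and leaves mostly Poisson fluctuation.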
A new method for non-linear distortion correction of Chinese vehicle license plate
Zhao, Qirui; Hu, Zhenyu; Xu, Wei; Zhang, Maojun
2017-07-01
Since image acquisition is subject to interference from various factors, the image of the license plate is often distorted, and distortion correction of the license plate image is of great significance for improving the accuracy of license plate recognition. In this paper, we propose a new method to segment distorted images for linear approximation. The main idea is to segment the non-linearly distorted license plate image into linear parts according to the license plate characters, search for the corners of each part, then reconstruct the non-linear distortion of the license plate image by using the perspective projection model, and finally realize the license plate image correction. Experimental results show that the proposed method is effective for the correction of non-linearly distorted plate images.
Basic method for reduction of error in ordinary approximations of the non-linear functions
International Nuclear Information System (INIS)
Amanullah
2006-01-01
In this research article, certain conditions of the infinite-order non-linear function with some terms have been determined and defined. The ordinary method of approximation has been analyzed with an example. It has been shown that to decrease the non-linear error, the order of approximation needs to be increased; consequently, higher-order non-linearity enters the differential equations and the problem of integration arises again. It has therefore been proposed to fix the order of approximation and decrease the error instead. The basic principle underlying the improvement of an ordinary approximation has been discussed. For the reduction of the error of an ordinary approximation, a general expression has been given in which the non-linear error is distributed uniformly. In the given example, the error of the initial ordinary approximation has been decreased by more than 63%. (author)
Error Correcting Codes - 2. The Hamming Codes
Indian Academy of Sciences (India)
Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore. Her interests are in Theoretical Computer Science. Error Correcting Codes - 2. The Hamming Codes. In the first article of this series we showed how redundancy introduced into a message transmitted over a noisy channel could improve the reliability of transmission. ...
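The Hamming construction this series article introduces can be made concrete with a small sketch. The [7,4] generator and parity-check matrices below are one standard systematic choice (an illustration, not taken from the article): every single-bit error produces a unique non-zero syndrome, equal to the column of H at the error position.

```python
import numpy as np

# Systematic [7,4] Hamming code: G = [I | A], H = [A^T | I] over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg4):
    """Encode 4 message bits into a 7-bit codeword."""
    return (np.array(msg4) @ G) % 2

def decode(word7):
    """Correct at most one bit error, then return the 4 message bits."""
    syndrome = (H @ word7) % 2
    word = word7.copy()
    if syndrome.any():
        # The syndrome matches the column of H at the error position.
        err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word[err] ^= 1
    return word[:4]  # systematic code: first four bits are the message
```

Since the seven columns of H are exactly the seven non-zero binary triples, any single flipped bit is located unambiguously.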
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March 1997 pp 33-47. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047 ...
Indian Academy of Sciences (India)
focused pictures of Triton, Neptune's largest moon. This great feat was in no small measure due to the fact that the sophisticated communication system on Voyager had an elaborate error correcting scheme built into it. At Jupiter and Saturn, a convolutional code was used to enhance the reliability of transmission, and at ...
Indian Academy of Sciences (India)
It was engineering on the grand scale - the use of new material for ... Where errors occur in both the message as well as the check symbols, the decoder would be able to correct all of these (as there are not more than 8 ...) before it is conveyed to the master disc. Modulation caters for ...
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...
Correction of non-linear thickness effects in HAADF STEM electron tomography
Energy Technology Data Exchange (ETDEWEB)
Van den Broek, W., E-mail: wouter.vandenbroek@uni-ulm.de [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Rosenauer, A. [Institut fuer Festkoerperphysik (IFP), Universitaet Bremen, Otto-Hahn-Allee 1, 28359 Bremen (Germany); Goris, B.; Martinez, G.T.; Bals, S.; Van Aert, S.; Van Dyck, D. [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2012-05-15
In materials science, high-angle annular dark-field scanning transmission electron microscopy is often used for tomography at the nanometer scale. In this work, it is shown that a thickness-dependent, non-linear damping of the recorded intensities occurs. This results in an underestimated intensity in the interior of reconstructions of homogeneous particles, which is known as the cupping artifact. In this paper, this non-linear effect is demonstrated in experimental images taken under common conditions and is reproduced with a numerical simulation. Furthermore, an analytical derivation shows that these non-linearities can be inverted if the imaging is done quantitatively, thus preventing cupping in the reconstruction. Highlights: (1) In HAADF STEM, a thickness-dependent, non-linear damping of the projected intensities occurs. (2) In tomography, this leads to underestimated intensities in the interior of homogeneous particles, the cupping artifact. (3) The non-linear damping is demonstrated in experimental images and reproduced with numerical simulations. (4) The non-linear damping can be undone if the imaging is done quantitatively. (5) Experimental proof is provided showing that cupping can be prevented.
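The inversion argument in this abstract can be illustrated with a toy model. The saturating intensity-thickness relation below is an assumption chosen for simplicity (the paper derives the actual HAADF relation from quantitative imaging); the point is only that a known monotone non-linearity can be inverted before reconstruction, which removes the cupping mechanism.

```python
import numpy as np

I_INF, LAM = 1.0, 80.0  # assumed saturation level and damping length (nm)

def intensity(t):
    """Assumed saturating HAADF intensity versus projected thickness t (nm)."""
    return I_INF * (1.0 - np.exp(-t / LAM))

def linearize(i_rec):
    """Invert the assumed model so the signal is again proportional to thickness."""
    return -LAM * np.log(1.0 - i_rec / I_INF)

t = np.linspace(0.0, 60.0, 7)
t_recovered = linearize(intensity(t))
```

After linearization the projections are again line integrals of the object, so a standard tomographic reconstruction no longer underestimates the interior of thick, homogeneous particles.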
International Nuclear Information System (INIS)
Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y Y
2008-01-01
We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.
Hamann, Jan; Hannestad, Steen; Melchiorri, Alessandro; Wong, Yvonne Y. Y.
2008-07-01
We explore and compare the performances of two non-linear correction and scale-dependent biasing models for the extraction of cosmological information from galaxy power spectrum data, especially in the context of beyond-ΛCDM (CDM: cold dark matter) cosmologies. The first model is the well known Q model, first applied in the analysis of Two-degree Field Galaxy Redshift Survey data. The second, the P model, is inspired by the halo model, in which non-linear evolution and scale-dependent biasing are encapsulated in a single non-Poisson shot noise term. We find that while the two models perform equally well in providing adequate correction for a range of galaxy clustering data in standard ΛCDM cosmology and in extensions with massive neutrinos, the Q model can give unphysical results in cosmologies containing a subdominant free-streaming dark matter whose temperature depends on the particle mass, e.g., relic thermal axions, unless a suitable prior is imposed on the correction parameter. This last case also exposes the danger of analytic marginalization, a technique sometimes used in the marginalization of nuisance parameters. In contrast, the P model suffers no undesirable effects, and is the recommended non-linear correction model also because of its physical transparency.
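The two correction models named in these abstracts have simple functional forms, sketched below. The parametrizations follow common usage (the value A = 1.4 is the 2dFGRS convention), but the toy linear spectrum and all parameter values are invented for illustration, not fits from the paper.

```python
import numpy as np

def q_model(k, p_lin, b, Q, A=1.4):
    """Q model: linear bias b plus a scale-dependent boost (1 + Q k^2)/(1 + A k).
    A = 1.4 follows the 2dFGRS convention; all values here are illustrative."""
    return b**2 * (1.0 + Q * k**2) / (1.0 + A * k) * p_lin

def p_model(k, p_lin, b, P0):
    """P model: linear bias plus a single non-Poisson shot-noise term P0."""
    return b**2 * p_lin + P0

k = np.logspace(-2, -0.5, 50)          # wavenumbers, h/Mpc
p_lin = 1.0e4 / (1.0 + (k / 0.02)**2)  # toy linear spectrum, not a real one
p_q = q_model(k, p_lin, b=2.0, Q=10.0)
p_p = p_model(k, p_lin, b=2.0, P0=500.0)
```

On large scales (small k) both reduce to the linearly biased spectrum (plus the constant P0), which is why the data can constrain either form; they differ in how the small-scale boost is tied to the correction parameter.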
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^(-(dn-1)) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require the development of new (uniform) weak convergence results. These results are potentially useful in general for analysis of models with asymmetric non-linear error correction. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
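A minimal univariate sketch of asymmetric error correction, the model class this abstract studies, can clarify the idea (the data-generating process and all parameter values are invented for illustration): the disequilibrium z = y - x is corrected at different speeds depending on its sign, and the two speeds can be recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
ALPHA_POS, ALPHA_NEG = -0.1, -0.4  # asymmetric adjustment speeds (invented)

# x is a random walk; y error-corrects towards x, faster from below than above.
x = np.cumsum(rng.standard_normal(T))
y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    z = y[t - 1] - x[t - 1]  # disequilibrium (error-correction term)
    y[t] = y[t - 1] + ALPHA_POS * max(z, 0.0) \
                    + ALPHA_NEG * min(z, 0.0) + rng.standard_normal()

# Recover the two adjustment speeds by OLS on the error-correction equation.
z = y[:-1] - x[:-1]
regressors = np.column_stack([np.maximum(z, 0.0), np.minimum(z, 0.0)])
alpha_hat, *_ = np.linalg.lstsq(regressors, np.diff(y), rcond=None)
```

Testing whether the two estimated speeds are equal is a (simple, regular) analogue of the linearity hypotheses the paper treats; the paper's harder case is when non-linear parameters vanish entirely under the null, which is what calls for sup tests.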
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Correction for quadrature errors
DEFF Research Database (Denmark)
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high-bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal, it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical ...
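The channel-imbalance and quadrature-phase errors described here admit a simple exact correction once the imbalance parameters are known (for example, measured with a calibration tone). The model and parameter values below are assumptions for illustration, not the compensation methods of the paper.

```python
import numpy as np

G_IMB, PHI_ERR = 1.1, np.deg2rad(5.0)  # assumed gain imbalance and phase error

def impair(i, q):
    """Apply channel imbalance and quadrature phase error to ideal I/Q."""
    return i, G_IMB * (q * np.cos(PHI_ERR) + i * np.sin(PHI_ERR))

def correct(i_m, q_m):
    """Invert the 2x2 imbalance model; parameters assumed known from calibration."""
    q = (q_m / G_IMB - i_m * np.sin(PHI_ERR)) / np.cos(PHI_ERR)
    return i_m, q

t = np.linspace(0.0, 1.0, 256, endpoint=False)
i, q = np.cos(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t)  # complex tone
i_c, q_c = correct(*impair(i, q))
```

Uncorrected, the imbalance leaks energy into the image frequency of the tone, which is exactly the self-clutter mechanism the abstract mentions; with known parameters the inverse matrix removes it completely.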
Video Error Correction Using Steganography
Directory of Open Access Journals (Sweden)
Robie David L
2002-01-01
Full Text Available. The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Video Error Correction Using Steganography
Robie, David L.; Mersereau, Russell M.
2002-12-01
The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information, along with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
Indian Academy of Sciences (India)
Sound quality is, in essence, obtained by accurate waveform coding and decoding of the audio signals. In addition, the coded audio information is protected against disc errors by the use of a Cross-Interleaved Reed-Solomon Code (CIRC). Reed-Solomon codes were discovered by Irving Reed and Gus Solomon in 1960.
de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente
2018-04-01
Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and .632+), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of study it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we pay special attention to situations where only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
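The resampling estimators compared in this abstract are straightforward to sketch. Below, a nearest-class-mean rule stands in for the non-linear mixed-effects classifier (all data and the classifier are invented for illustration), with leave-one-out and Efron's .632 estimators of the misclassification error rate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-class toy data standing in for the pregnancy-outcome setting (invented).
n = 30
X = np.concatenate([rng.normal(0.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.repeat([0, 1], n)

def classify(x, X_tr, y_tr):
    """Nearest-class-mean rule; a placeholder for the mixed-model classifier."""
    mu0 = X_tr[y_tr == 0].mean(axis=0)
    mu1 = X_tr[y_tr == 1].mean(axis=0)
    return int(np.linalg.norm(x - mu1) < np.linalg.norm(x - mu0))

def loo_error(X, y):
    """Leave-one-out (cross-validation) estimate of the misclassification rate."""
    mask = np.ones(len(y), dtype=bool)
    errs = 0
    for i in range(len(y)):
        mask[i] = False
        errs += classify(X[i], X[mask], y[mask]) != y[i]
        mask[i] = True
    return errs / len(y)

def bootstrap_632(X, y, B=100):
    """Efron's .632 estimator: blends apparent and out-of-bag bootstrap error."""
    apparent = np.mean([classify(x, X, y) != yi for x, yi in zip(X, y)])
    oob = []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        held_out = np.setdiff1d(np.arange(len(y)), idx)
        if held_out.size:
            oob.append(np.mean([classify(X[i], X[idx], y[idx]) != y[i]
                                for i in held_out]))
    return 0.368 * apparent + 0.632 * np.mean(oob)

err_loo = loo_error(X, y)
err_632 = bootstrap_632(X, y)
```

The small-sample trade-off the abstract highlights shows up here directly: leave-one-out is nearly unbiased but variable, while the .632 blend trades a little bias for lower variance.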
Wormholes in higher dimensions with non-linear curvature terms from quantum gravity corrections
Energy Technology Data Exchange (ETDEWEB)
El-Nabulsi, Ahmad Rami [Neijiang Normal University, Neijiang, Sichuan (China)
2011-11-15
In this work, we discuss a 7-dimensional universe in the presence of a static traversable wormhole and a decaying cosmological constant, dominated by the higher-order curvature effects expected from quantum gravity corrections. We confirm the existence of wormhole solutions within Lovelock gravity. Many interesting and attractive features are discussed in some detail.
Linear network error correction coding
Guang, Xuan
2014-01-01
There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank-metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an ...
Quantum error correction for beginners
International Nuclear Information System (INIS)
Devitt, Simon J; Nemoto, Kae; Munro, William J
2013-01-01
Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
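The simplest example treated in introductions like this one is the bit-flip repetition code under independent Pauli X errors, where majority-vote decoding fails only when more than half the qubits flip. A Monte Carlo sketch (the code distance and physical error rate are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def logical_error_rate(p, n_trials=100_000, d=3):
    """Monte Carlo logical error rate of the d-qubit bit-flip repetition code
    under independent Pauli X errors of probability p per qubit."""
    flips = rng.random((n_trials, d)) < p
    # Majority-vote decoding fails exactly when more than half the qubits flip.
    return (flips.sum(axis=1) > d // 2).mean()

p = 0.05
p_logical = logical_error_rate(p)
p_analytic = 3 * p**2 * (1 - p) + p**3  # exact failure probability for d = 3
```

The quadratic suppression (p_logical ~ 3p² for small p) is the basic payoff of error correction: below threshold, adding qubits exponentially suppresses the logical error rate.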
Scott, M
2012-08-01
The time-covariance function captures the dynamics of biochemical fluctuations and contains important information about the underlying kinetic rate parameters. Intrinsic fluctuations in biochemical reaction networks are typically modelled using a master equation formalism. In general, the equation cannot be solved exactly and approximation methods are required. For small fluctuations close to equilibrium, a linearisation of the dynamics provides a very good description of the relaxation of the time-covariance function. As the number of molecules in the system decreases, deviations from the linear theory appear. Carrying out a systematic perturbation expansion of the master equation to capture these effects results in formidable algebra; however, symbolic mathematics packages considerably expedite the computation. The authors demonstrate that non-linear effects can reveal features of the underlying dynamics, such as reaction stoichiometry, not available in linearised theory. Furthermore, in models that exhibit noise-induced oscillations, non-linear corrections result in a shift in the base frequency along with the appearance of a secondary harmonic.
DEFF Research Database (Denmark)
Martinez Peñas, Umberto; Pellikaan, Ruud
2017-01-01
Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...
Opportunistic Error Correction for MIMO
Shao, X.; Slump, Cornelis H.
In this paper, we propose an energy-efficient scheme to reduce the power consumption of ADCs in MIMO-OFDM systems. The proposed opportunistic error correction scheme is based on resolution adaptive ADCs and fountain codes. The key idea is to transmit a fountain-encoded packet over one single
Error correcting coding for OTN
DEFF Research Database (Denmark)
Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.
2010-01-01
Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...
International Nuclear Information System (INIS)
Kumar, K. Vasanth; Porkodi, K.; Rocha, F.
2008-01-01
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made using the experimental equilibrium data of methylene blue sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. For two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of three-parameter isotherms, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
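The six error functions listed in this abstract have standard definitions, sketched below. The synthetic Langmuir data and parameter values are invented for illustration, and the definitions follow common usage in the sorption literature; they may differ in detail from the paper.

```python
import numpy as np

def error_functions(q_exp, q_cal, n_params):
    """Common definitions of the six error functions (n data points, p parameters)."""
    n = len(q_exp)
    res = q_exp - q_cal
    return {
        "r2":     1.0 - np.sum(res**2) / np.sum((q_exp - q_exp.mean())**2),
        "HYBRID": 100.0 / (n - n_params) * np.sum(res**2 / q_exp),
        "MPSD":   100.0 * np.sqrt(np.sum((res / q_exp)**2) / (n - n_params)),
        "ARE":    100.0 / n * np.sum(np.abs(res / q_exp)),
        "ERRSQ":  np.sum(res**2),
        "EABS":   np.sum(np.abs(res)),
    }

# Langmuir isotherm compared to synthetic equilibrium data (invented numbers).
Ce = np.array([10.0, 20.0, 40.0, 80.0, 160.0])        # equilibrium concentration
q_exp = np.array([45.0, 70.0, 95.0, 110.0, 120.0])    # "measured" uptake
q_cal = 130.0 * 0.05 * Ce / (1.0 + 0.05 * Ce)         # assumed qm and KL
scores = error_functions(q_exp, q_cal, n_params=2)
```

Because each function weights the residuals differently (absolute vs relative, squared vs not), minimizing different functions yields different "best" parameters, which is exactly why the study cautions against relying on any single one.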
Nonconvex Compressed Sensing and Error Correction
National Research Council Canada - National Science Library
Chartrand, Rick
2007-01-01
.... In this paper we consider a nonconvex extension. In the context of sparse error correction, we perform numerical experiments that show that for a fixed number of measurements, errors of larger support can be corrected in the nonconvex case...
Directory of Open Access Journals (Sweden)
Muhammad Wazir
2012-01-01
Full Text Available. The purpose of this work is to study dose non-linearity in medical linear accelerators used in conventional radiotherapy and intensity-modulated radiation therapy. Open fields, as well as enhanced dynamic wedge fields, were used to collect data for 6 MV and 15 MV photon beams obtained from a VARIAN linear accelerator. Beam stability was checked and confirmed for different dose rates, energies, and applications of the enhanced dynamic wedge by calculating the charge per monitor unit. Monitor unit error was calculated by the two-exposure method for open and enhanced dynamic wedge beams of 6 MV and 15 MV photons. A significant monitor unit error, with maximum values of ±2.05931 monitor units and ±2.44787 monitor units for open and enhanced dynamic wedge beams, respectively, was observed; it was both energy and dose rate dependent, and exhibited certain irregular patterns at enhanced dynamic wedge angles. The dose monitor unit error exists only because of the overshoot phenomenon and electronic delay in the dose coincident and integrated circuits, with a dependency on the dose rate and photon energy. Monitor unit errors are independent of the application of the enhanced dynamic wedge. The existence of monitor unit error demands that the dose non-linearity of the linear accelerator dosimetry system be tested periodically, so as to avoid significant dosimetric errors.
Correction of errors in power measurements
DEFF Research Database (Denmark)
Pedersen, Knud Ole Helgesen
1998-01-01
Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report, correction factors are derived to compensate for such errors.
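A first-order version of such correction factors can be sketched. With ratio errors and phase displacements of the voltage and current transformers, the measured active power is corrected as below. Sign conventions differ between standards; this follows one common choice (phase displacement positive when the secondary leads the primary) and is an illustration, not the report's derivation.

```python
import numpy as np

def corrected_power(p_meas, eps_u, eps_i, delta_u, delta_i, phi):
    """First-order correction of measured active power for instrument-transformer
    errors: eps_* are VT/CT ratio errors (fractions), delta_* phase displacements
    in radians (secondary leading primary), phi the load phase angle."""
    return p_meas * (1.0 - eps_u - eps_i + (delta_u - delta_i) * np.tan(phi))

# Synthetic check: build a measurement with known transformer errors.
U, I, phi = 230.0, 5.0, np.deg2rad(30.0)
eps_u, eps_i = 0.004, -0.003                             # ratio errors
delta_u, delta_i = np.deg2rad(0.10), np.deg2rad(-0.05)   # phase displacements
p_true = U * I * np.cos(phi)
p_meas = U * (1 + eps_u) * I * (1 + eps_i) * np.cos(phi + delta_u - delta_i)
p_corr = corrected_power(p_meas, eps_u, eps_i, delta_u, delta_i, phi)
```

Note that the phase-displacement term is weighted by tan(phi), so at low power factor even small angular errors dominate the ratio errors.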
Advanced hardware design for error correcting codes
Coussy, Philippe
2015-01-01
This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.
Using Online Annotations to Support Error Correction and Corrective Feedback
Yeh, Shiou-Wen; Lo, Jia-Jiunn
2009-01-01
Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…
Immediate error correction process following sleep deprivation.
Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling
2007-06-01
Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation.
Passive quantum error correction with linear optics
International Nuclear Information System (INIS)
Barbosa de Brito, Daniel; Viana Ramos, Rubens
2006-01-01
Kalamidas [D. Kalamidas, Phys. Lett. A 343 (2005) 331] recently proposed an optical set-up able to correct single-qubit errors using Pockels cells. In this work, we present a different set-up able to realize error correction passively, in the sense that no external action is needed.
A Hybrid Approach for Correcting Grammatical Errors
Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…
Notions of "Error" and Appropriate Corrective Treatment.
Lee, Nancy
1990-01-01
The relationship between the notion of "error" in linguistics and language teaching theory and its potential application to error correction in the second language classroom is examined. Definitions of "error" in psycholinguistics, native speech, and English second language instruction are discussed, and the relationship of interlanguage…
Bedia, Manuel G.; Di Paolo, Ezequiel
2012-01-01
Dual-process approaches of decision-making examine the interaction between affective/intuitive and deliberative processes underlying value judgment. From this perspective, decisions are supported by a combination of relatively explicit capabilities for abstract reasoning and relatively implicit evolved domain-general as well as learned domain-specific affective responses. One such approach, the somatic markers hypothesis (SMH), expresses these implicit processes as a system of evolved primary emotions supplemented by associations between affect and experience that accrue over lifetime, or somatic markers. In this view, somatic markers are useful only if their local capability to predict the value of an action is above a baseline equal to the predictive capability of the combined rational and primary emotional subsystems. We argue that decision-making has often been conceived of as a linear process: the effect of decision sequences is additive, local utility is cumulative, and there is no strong environmental feedback. This widespread assumption can have consequences for answering questions regarding the relative weight between the systems and their interaction within a cognitive architecture. We introduce a mathematical formalization of the SMH and study it in situations of dynamic, non-linear decision chains using a discrete-time stochastic model. We find, contrary to expectations, that decision-making events can interact non-additively with the environment in apparently paradoxical ways. We find that in non-lethal situations, primary emotions are represented globally over and above their local weight, showing a tendency for overcautiousness in situated decision chains. We also show that because they tend to counteract this trend, poorly attuned somatic markers that by themselves do not locally enhance decision-making, can still produce an overall positive effect. This result has developmental and evolutionary implications since, by promoting exploratory behavior
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
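The paper's modified Shepard scheme builds on classic inverse-distance weighting; the local metrics and error-weighted local models it introduces are beyond this sketch, which shows only the plain Shepard interpolant the method generalizes:

```python
import numpy as np

def shepard_interpolate(x_train, y_train, x_query, p=2, eps=1e-12):
    """Classic Shepard (inverse-distance-weighted) interpolation.
    x_train: (n, d) sample locations; y_train: (n,) function values;
    x_query: (m, d) query points. Returns (m,) interpolated values."""
    d = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)              # weights diverge near samples,
    w /= w.sum(axis=1, keepdims=True)     # so the surrogate interpolates
    return w @ y_train
```

The paper replaces the single global distance with local metrics (so rapid variation along a few directions is resolved cheaply) and weights local approximations by local error estimates to suppress spurious oscillations.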
Robot learning and error correction
Friedman, L.
1977-01-01
A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoid such errors.
Directory of Open Access Journals (Sweden)
Vivek Singh Bawa
2017-06-01
Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the vehicle's immediate surroundings is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by vehicle-mounted fisheye cameras, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fisheye lens images to rectilinear images, and generating a top-view perspective of the corrected images. Images captured through a fisheye lens exhibit barrel distortion, for which a non-linear and non-iterative correction method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The paper also aims to build a surround view of the vehicle with a wide, distortion-free field of view and a camera-perspective-independent top-down view, at minimum computational cost, which is essential given the limited computing power available on vehicles.
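The second step of the pipeline, mapping a corrected rectilinear image to a top-down view, rests on a planar homography. A minimal sketch of the standard four-point direct linear transform (DLT) is given below; the correspondences and matrix values in the usage example are invented for illustration, not taken from the paper.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H mapping src -> dst,
    where src and dst are (4, 2) arrays of point correspondences
    (e.g. ground-plane markers seen in the camera image vs. their
    desired positions in the top-down view)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (up to scale) is the null vector of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Illustrative check: recover a known homography from four correspondences.
H_true = np.array([[1.0, 0.2, 5.0],
                   [0.1, 1.2, -3.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
dst = np.array([warp_point(H_true, p) for p in src])
H = homography_from_points(src, dst)
```

In practice the same warp is applied to every pixel (e.g. via an image-warping routine) to render the top-down view.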
Equation-Method for correcting clipping errors in OFDM signals.
Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry
2016-01-01
Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show how numerical instability can be avoided and which new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
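The in-band distortion that the Equation-Method is designed to undo is easy to reproduce. The sketch below (not the Equation-Method itself; subcarrier count and clipping threshold are illustrative assumptions) builds one OFDM symbol, clips its peaks, and measures the resulting error on the frequency-domain constellation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                          # subcarriers (illustrative)
bits = rng.integers(0, 2, size=(N, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)   # QPSK

tx = np.fft.ifft(symbols)                       # time-domain OFDM symbol
thresh = 0.8 * np.abs(tx).max()                 # clip the highest peaks
mag = np.abs(tx)
clipped = np.where(mag > thresh, tx * thresh / mag, tx)

rx = np.fft.fft(clipped)                        # receiver's FFT
evm = np.sqrt(np.mean(np.abs(rx - symbols) ** 2))
print(f"samples clipped: {(mag > thresh).sum()}, in-band error (EVM): {evm:.3f}")
```

The clipped time samples are the unknowns the Equation-Method solves for: each frequency-domain deviation is a linear combination of the (unknown) removed peak amplitudes, which yields the set of simultaneous equations described in the abstract.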
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. The paper proposes a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
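The part-to-part learning idea can be sketched as a simple iterative learning control (ILC) update: measure the geometry error of the last produced part and adjust the parameters for the next one. The quadratic toy process and learning gain below are illustrative assumptions, not the paper's deep-drawing model:

```python
import numpy as np

def process(u):
    """Toy stand-in for the forming process: flange geometry as a static
    non-linear map of the process parameters u (an assumption for the
    sketch, not the paper's FE deep-drawing model)."""
    return 0.8 * u + 0.05 * u ** 2

reference = np.array([1.0, 1.2, 0.9, 1.1])   # target flange edge geometry
u = np.zeros_like(reference)                 # initial process parameters
gamma = 0.9                                  # learning gain

errors = []
for part in range(30):                       # one update per produced part
    geometry = process(u)
    e = reference - geometry
    errors.append(np.linalg.norm(e))
    u = u + gamma * e                        # learn from the previous part

print(f"geometry error: part 1 = {errors[0]:.3f}, part 30 = {errors[-1]:.2e}")
```

The paper's scheme replaces the scalar-gain update with a non-linear least-squares step on the flange geometry, but the learn-from-the-last-part structure is the same.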
Statistical mechanics of error-correcting codes
Kabashima, Y.; Saad, D.
1999-01-01
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Chaiwat Tantarangsee
2014-01-01
The purposes of this study are 1) to study the frequent English writing errors of students registering the course: Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction by using coded indirect corrective feedback and writing error treatments. Samples include 28 2nd year English Major students, Faculty of Education, Suan Sunandha Rajabhat University. Tool for experimental study includes the lesson plan of the cours...
Survey of Radar Refraction Error Corrections
2016-11-01
Describes refraction error estimation for an electromagnetic wave propagating at radio frequencies through the earth's atmosphere; appendices contain descriptive material (Survey of Radar Refraction Error Corrections, RCC 266-16).
Consciousness-Raising, Error Correction and Proofreading
O'Brien, Josephine
2015-01-01
The paper discusses the impact of developing a consciousness-raising approach in error correction at the sentence level to improve students' proofreading ability. Learners of English in a foreign language environment often rely on translation as a composing tool and while this may act as a scaffold and provide some support, it frequently leads to…
The Mathematics of Error Correcting Quantum Codes
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 6, Issue 3. The Mathematics of Error-Correcting Quantum Codes – Quantum Probability. K R Parthasarathy. General Article, March 2001, pp. 34-45.
Quantum Steganography and Quantum Error-Correction
Shaw, Bilal A.
2010-01-01
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be…
Hagedorn, Peter
1982-01-01
Thoroughly revised and updated, the second edition of this concise text provides an engineer's view of non-linear oscillations, explaining the most important phenomena and solution methods. Non-linear descriptions are important because under certain conditions there occur large deviations from the behaviors predicted by linear differential equations. In some cases, completely new phenomena arise that are not possible in purely linear systems. The theory of non-linear oscillations thus has important applications in classical mechanics, electronics, communications, biology, and many other branches of science. In addition to many other changes, this edition has a new section on bifurcation theory, including Hopf's theorem.
Efficient Non Linear Loudspeakers
DEFF Research Database (Denmark)
Petersen, Bo R.; Agerkvist, Finn T.
2006-01-01
Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels… by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption…
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-06-23
We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and its sampling error. We show that the well-known Kalman update formula is a particular case of this update.
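Since the abstract states that the Kalman update is a special case of the proposed non-linear update, the classical linear formula it reduces to is worth recalling. A minimal sketch in standard notation (not the paper's polynomial-chaos formulation):

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """Classical linear Kalman update: combine prior mean x and
    covariance P with an observation y = H x + noise of covariance R."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_post = x + K @ (y - H @ x)           # corrected mean
    P_post = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_post, P_post
```

For a scalar prior N(0, 1) and an observation y = 2 with unit noise variance, this gives the posterior N(1, 0.5).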
Goldmann tonometer error correcting prism: clinical evaluation
Directory of Open Access Journals (Sweden)
McCafferty S
2017-05-01
Sean McCafferty,1–3 Garrett Lim,2 William Duncan,2 Eniko T Enikov,4 Jim Schwiegerling,1 Jason Levine,1,3 Corin Kew3 (1Department of Ophthalmology, College of Optical Science, University of Arizona; 2Intuor Technologies; 3Arizona Eye Consultants; 4Department of Aerospace and Mechanical Engineering, College of Engineering, University of Arizona, Tucson, AZ, USA)
Purpose: To clinically evaluate a modified applanating surface Goldmann tonometer prism designed to substantially negate errors due to patient variability in biomechanics.
Methods: A modified Goldmann prism with a correcting applanation tonometry surface (CATS) was mathematically optimized to minimize the intraocular pressure (IOP) measurement error due to patient variability in corneal thickness, stiffness, curvature, and tear film adhesion force. A comparative clinical study of 109 eyes measured IOP with CATS and Goldmann prisms. The IOP measurement differences between the CATS and Goldmann prisms were correlated to corneal thickness, hysteresis, and curvature.
Results: In correcting for Goldmann central corneal thickness (CCT) error, the CATS tonometer prism demonstrated a reduction to <±2 mmHg in 97% of a standard CCT population, compared with only 54% of measurements with CCT error <±2 mmHg using the Goldmann prism. Equal reductions of ~50% in errors due to corneal rigidity and curvature were also demonstrated.
Conclusion: The results validate the CATS prism's improved accuracy and expected reduced sensitivity to Goldmann errors, without IOP bias, as predicted by mathematical modeling. The CATS replacement for the Goldmann prism does not change Goldmann measurement technique or interpretation.
Keywords: glaucoma, tonometry, Goldmann, IOP, intraocular pressure, applanation tonometer, corneal biomechanics, CATS tonometer, CCT, central corneal thickness, tonometer error
ecco: An error correcting comparator theory.
Ghirlanda, Stefano
2018-03-08
Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
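ecco's CS-US learning component is the total-error-correction rule of Rescorla and Wagner (1972), cited in the abstract. A minimal sketch of that rule alone (not of ecco's comparator process), demonstrated on blocking, one of the paradigms the abstract explores:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Total-error-correction learning (Rescorla & Wagner, 1972): every
    stimulus present on a trial is updated by the SAME shared prediction
    error, lam minus the summed associative strength of present stimuli."""
    V = {}
    for present, us in trials:
        error = (lam if us else 0.0) - sum(V.get(cs, 0.0) for cs in present)
        for cs in present:
            V[cs] = V.get(cs, 0.0) + alpha * error
    return V

# Blocking: pre-train A alone, then reinforce the AB compound.
# Because A already predicts the US, the shared error is near zero
# during compound training and B acquires almost no strength.
trials = [({"A"}, True)] * 50 + [({"A", "B"}, True)] * 50
V = rescorla_wagner(trials)
print(f"V(A) = {V['A']:.3f}, V(B) = {V['B']:.3f}")
```

The learning rate and trial counts are illustrative; ecco layers contingency-based CS-CS learning and a comparator stage on top of this error-driven core.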
Black Holes, Holography, and Quantum Error Correction
CERN. Geneva
2017-01-01
How can it be that a local quantum field theory in some number of spacetime dimensions can "fake" a local gravitational theory in a higher number of dimensions? How can the Ryu-Takayanagi Formula say that an entropy is equal to the expectation value of a local operator? Why do such things happen only in gravitational theories? In this talk I will explain how a new interpretation of the AdS/CFT correspondence as a quantum error correcting code provides satisfying answers to these questions, and more generally gives a natural way of generating simple models of the correspondence. No familiarity with AdS/CFT or quantum error correction is assumed, but the former would still be helpful.
Tensor Networks and Quantum Error Correction
Ferris, Andrew J.; Poulin, David
2014-07-01
We establish several relations between quantum error correction (QEC) and tensor network (TN) methods of quantum many-body physics. We exhibit correspondences between well-known families of QEC codes and TNs, and demonstrate a formal equivalence between decoding a QEC code and contracting a TN. We build on this equivalence to propose a new family of quantum codes and decoding algorithms that generalize and improve upon quantum polar codes and successive cancellation decoding in a natural way.
Dimensional jump in quantum error correction
International Nuclear Information System (INIS)
Bombín, Héctor
2016-01-01
Topological stabilizer codes with different spatial dimensions have complementary properties. Here I show that the spatial dimension can be switched using gauge fixing. Combining 2D and 3D gauge color codes in a 3D qubit lattice, fault-tolerant quantum computation can be achieved with constant time overhead on the number of logical gates, up to efficient global classical computation, using only local quantum operations. Single-shot error correction plays a crucial role. (paper)
Triple-Error-Correcting Codec ASIC
Jones, Robert E.; Segallis, Greg P.; Boyd, Robert
1994-01-01
Coder/decoder constructed on a single integrated-circuit chip. Handles data in a variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block of 256 to 512 bits. Helps reduce the cost of transmitting data. Useful in many high-data-rate, bandwidth-limited communication systems such as personal communication networks, cellular telephone networks, satellite communication systems, high-speed computing networks, broadcasting, and high-reliability data-communication links.
Muhammad Wazir; Hoon Lee Sang; Alam Khan; Maqbool Muhammad; Khan Gulzar
2012-01-01
The purpose of this work is to study dose non-linearity in medical linear accelerators used in conventional radiotherapy and intensity-modulated radiation therapy. Open fields, as well as the enhanced dynamic wedge ones, were used to collect data for 6 MV and 15 MV photon beams obtained from the VARIAN linear accelerator. Beam stability was checked and confirmed for different dose rates, energies, and application of enhanced dynamic wedge by calculating the charge per monitor unit. Moni...
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be reduced by a factor that increases with the code distance.
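The estimation step can be sketched as generic Gaussian-process regression over time: noisy per-round error-rate estimates in, a smooth estimate (with uncertainty) out. The RBF kernel and hyper-parameters below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def gp_predict(t_train, y_train, t_query, length=5.0, sigma_f=1.0, noise=1e-4):
    """Gaussian-process regression with an RBF kernel: estimate a
    smoothly drifting error rate from noisy per-round estimates."""
    def kern(a, b):
        return sigma_f ** 2 * np.exp(
            -0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = kern(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = kern(t_query, t_train)
    mean = Ks @ np.linalg.solve(K, y_train)               # posterior mean
    var = sigma_f ** 2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

In the paper's setting, the predicted means would feed the decoder's prior error rates for upcoming correction rounds.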
Systematic Error of Acoustic Particle Image Velocimetry and Its Correction
Directory of Open Access Journals (Sweden)
Mickiewicz Witold
2014-08-01
Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between acquisitions of two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.
Diamond, Jared M.
1966-01-01
1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om — Os)/[Ro′ + ½k′ (Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow—osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
International Nuclear Information System (INIS)
Garbet, X.; Mourgues, F.; Samain, A.
1987-01-01
Among the various instabilities which could explain the anomalous electron heat transport observed in tokamaks during additional heating, microtearing turbulence is a reasonable candidate since it directly affects the magnetic topology. This turbulence may be described, in a proper frame rotating around the major axis, by a static vector potential. In strongly non-linear regimes, the flow of electrons along the stochastic field lines induces a current. The point is to know whether this current can sustain the turbulence. The mechanisms of this self-consistency, involving the combined effects of thermal diamagnetism and of the electric drift, are presented here.
Error Correction in Oral Classroom English Teaching
Jing, Huang; Xiaodong, Hao; Yu, Liu
2016-01-01
As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? In common with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…
Joint Schemes for Physical Layer Security and Error Correction
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne
2018-03-01
When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling or whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
The role of error correction in communicative second language teaching
H. Ludolph Botha
2013-01-01
According to recent research, correction of errors in both oral and written communication does little to aid language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis is on the acquisition of informal communication. Because the message and the understanding of the message remain of utmost importance, error correction is avoided. In Suggestopedia, where the focus is also on communication, error correction is avoided as it inhibits the pupil. Onlang...
Continuous-variable quantum error correction I: code comparison
Albert, Victor V.; Duivenvoorden, Kasper; Noh, Kyungjoo; Brierley, R. T.; Reinhold, Philip; Li, Linshu; Shen, Chao; Schoelkopf, R. J.; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
There are currently four types of non-trivial encodings of quantum information in a single bosonic mode: cat, binomial, and numerically optimized codes are designed to protect against bosonic loss errors, while GKP codes are designed to protect against bosonic displacement errors. These four code types have yet to be compared using the same error model. We report on a numerical comparison of the entanglement fidelity of all codes with respect to the lossy bosonic channel, given an average occupation number constraint and the optimal recovery operation. GKP codes demonstrate the highest fidelities for all but the smallest values of the boson loss probability (the parameter which quantifies the strength of amplitude damping). Although designed to protect against small displacement noise, GKP codes can offer a high degree of protection against bosonic loss errors. We also examine the performance of the four code types with respect to the combination of amplitude damping and a strong Kerr non-linearity.
Second Language Learners' Beliefs about Grammar Instruction and Error Correction
Loewen, Shawn; Li, Shaofeng; Fei, Fei; Thompson, Amy; Nakatsukasa, Kimi; Ahn, Seongmee; Chen, Xiaoqing
2009-01-01
Learner beliefs are an important individual difference in second language (L2) learning. Furthermore, an ongoing debate surrounds the role of grammar instruction and error correction in the L2 classroom. Therefore, this study investigated the beliefs of L2 learners regarding the controversial role of grammar instruction and error correction. A…
A Classroom Research Study on Oral Error Correction
Coskun, Abdullah
2010-01-01
This study has the main objective to present the findings of a small-scale classroom research carried out to collect data about my spoken error correction behaviors by means of self-observation. With this study, I aimed to analyze how and which spoken errors I corrected during a specific activity in a beginner's class. I used Lyster and Ranta's…
Raptor Codes for Use in Opportunistic Error Correction
Zijnge, T.; Goseling, Jasper; Weber, Jos H.; Schiphorst, Roelof; Shao, X.; Slump, Cornelis H.
2010-01-01
In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for Coded OFDM systems. Opportunistic error correction [3] tries to recover information when it is available with the least effort. This is achieved by using Fountain codes in a COFDM system, which
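The core decoding idea behind Fountain codes of this kind can be sketched with a toy example (this is a hypothetical illustration, not the Raptor construction from the paper; real Raptor codes add a precode and a carefully tuned degree distribution):

```python
import random

def fountain_encode(blocks, n_symbols, seed=1):
    """Toy LT-style encoder: each coded symbol is the XOR of a random
    nonempty subset of source blocks (hypothetical sketch)."""
    rng = random.Random(seed)
    symbols = []
    for _ in range(n_symbols):
        deg = rng.randint(1, len(blocks))
        idx = frozenset(rng.sample(range(len(blocks)), deg))
        val = 0
        for i in idx:
            val ^= blocks[i]
        symbols.append((idx, val))
    return symbols

def peel_decode(symbols, n_blocks):
    """Peeling decoder: repeatedly take a degree-1 symbol, recover that
    block, and XOR it out of every remaining symbol."""
    symbols = [(set(s), v) for s, v in symbols]
    out = [None] * n_blocks
    progress = True
    while progress:
        progress = False
        for s, v in symbols:
            if len(s) == 1 and out[next(iter(s))] is None:
                out[next(iter(s))] = v
                progress = True
        remaining = []
        for s, v in symbols:
            for i in [i for i in s if out[i] is not None]:
                s.discard(i)
                v ^= out[i]
            if s:
                remaining.append((s, v))
        symbols = remaining
    return out

# Deterministic demo: symbols are block0, block0^block1, block1^block2.
decoded = peel_decode([(frozenset({0}), 3),
                       (frozenset({0, 1}), 6),
                       (frozenset({1, 2}), 12)], 3)
```

Decoding succeeds here because each peeling step exposes a new degree-1 symbol; an "opportunistic" layer can keep collecting symbols until this condition holds.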
Long Burst Error Correcting Codes, Phase I
National Aeronautics and Space Administration — Long burst error mitigation is an enabling technology for the use of Ka band for high rate commercial and government users. Multiple NASA, government, and commercial...
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, amplifier power droop can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
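The pre-distortion idea can be sketched in a few lines (hypothetical values; a real system would derive the lookup table from a measured droop profile):

```python
import cmath

def correct_phase(samples, phase_error_lut):
    """Multiply each complex waveform sample by the complementary
    (negated) phase from a per-sample lookup table, cancelling a
    known time-dependent phase error."""
    return [s * cmath.exp(-1j * phi) for s, phi in zip(samples, phase_error_lut)]

# Hypothetical example: a waveform distorted by a linearly growing phase error.
phase_error = [0.01 * n for n in range(4)]   # radians, per sample
clean = [1 + 0j] * 4
distorted = [s * cmath.exp(1j * phi) for s, phi in zip(clean, phase_error)]
recovered = correct_phase(distorted, phase_error)
```

Applying the same table as a pre-distortion before the droop occurs is equivalent, since the phase factors commute.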
Energy efficiency of error correction on wireless systems
Havinga, Paul J.M.
1999-01-01
Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. We have studied the energy efficiency of two different error correction mechanisms and have measured the efficiency of an implementation in software.
Comments on "A New Random-Error-Correction Code"
DEFF Research Database (Denmark)
Paaske, Erik
1979-01-01
This correspondence investigates the error propagation properties of six different systems using a (12, 6) systematic double-error-correcting convolutional encoder and a one-step majority-logic feedback decoder. For the generally accepted assumption that channel errors are much more likely to occur...
Student reflections following teacher correction of oral errors
Dirim, Nazlı
1999-01-01
Ankara: The Institute of Economics and Social Sciences of Bilkent University, 1999. Thesis (Master's) -- Bilkent University, 1999. Includes bibliographical references (leaves 69-71). The teacher’s correction techniques can determine how students approach language learning. In order to understand the effect of oral error correction on students, we should know how students feel. The purpose of this study was to investigate one teacher’s correction of students’ oral errors, the...
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Flatfield correction errors due to spectral mismatching
Hagen, Nathan
2014-12-01
Flat field calibration of broadband imaging systems is widely used, and it has been said that users should try to make the spectrum of the flatfield calibration light source as close as possible to that of the measurement object. However, a quantitative analysis of the error induced by a mismatch of calibration and object spectra has been lacking. In order to develop this quantitative analysis, we provide a theoretical radiometric model for flatfield calibration and show how this spectral mismatching error arises. Simulations covering a variety of measurement scenarios indicate that spectral mismatching can create quantitative errors of up to a factor of 5 in situations that are regularly encountered by researchers performing quantitative work.
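The flatfield calibration model being analysed can be illustrated numerically (hypothetical pixel values; the spectral-mismatch effect itself is not modelled in this sketch):

```python
def flatfield_correct(raw, dark, flat):
    """Classic flat-field correction: subtract the dark frame and
    normalise by the dark-subtracted flat frame, rescaled so the
    mean pixel gain is 1. Frames are plain lists here for clarity."""
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    return [(r - d) * mean_gain / g for r, d, g in zip(raw, dark, gain)]

# Two pixels with unequal gain viewing a uniform scene come out equal.
corrected = flatfield_correct([60, 110], [10, 10], [110, 210])
```

The spectral-mismatch error enters because the measured gain is an integral of the pixel response against the calibration spectrum, not the object spectrum.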
Operator quantum error-correcting subsystems for self-correcting quantum memories
International Nuclear Information System (INIS)
Bacon, Dave
2006-01-01
The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures.
Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems
DEFF Research Database (Denmark)
Li, Bomin
This PhD thesis addresses the design and application of forward error correction (FEC) in high-speed optical communications at 100 Gb/s and beyond. With the ever-growing internet traffic, FEC has been considered a strong and cost-effective way to improve the quality of transmission… …and their associated experimental demonstration and hardware implementation. The demonstrated high CG, flexibility, robustness and scalability reveal the important role of FEC techniques in the next-generation high-speed, high-capacity, high-performance and energy-efficient fiber-optic data transmission networks. …Low-complexity, low-power-consumption FEC hardware implementation plays an important role in the next-generation energy-efficient networks. Thirdly, joint research is required for FEC-integrated applications, as the error distribution in channels relies on many factors such as non-linearity in long-distance optical…
75 FR 63106 - Correction of Administrative Errors
2010-10-14
... (Agency) proposes to use a constructed share price for retired Lifecycle funds in order to make error... Price The Agency currently offers five Lifecycle funds: L Income, L 2010, L 2020, L 2030, and L 2040... retiring the L 2010 Fund, the Agency will transfer all money invested in the L 2010 Fund to the L Income...
Detecting and Correcting Speech Rhythm Errors
Yurtbasi, Metin
2015-01-01
Every language has its own rhythm. Unlike many other languages in the world, English depends on the correct pronunciation of stressed and unstressed or weakened syllables recurring in the same phrase or sentence. Mastering the rhythm of English makes speaking more effective. Experiments have shown that we tend to hear speech as more rhythmical…
Error Correction Techniques in the EFL Class
Zublin, Roxana
2015-01-01
Errors are regarded as a natural part of the learning process, with the teacher performing the role of facilitator, providing help when necessary and creating a supportive environment in which students can obtain a successful enhanced learning outcome. They are significant indicators of the learning progress showing what learners have attained and what remains to be acquired and provide the language teacher the necessary information about how to deal with the problems that may arise and give ...
Energy Technology Data Exchange (ETDEWEB)
Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)
2011-11-10
Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
VLSI architectures for modern error-correcting codes
Zhang, Xinmiao
2015-01-01
Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI
Error Correction for Non-Abelian Topological Quantum Computation
Directory of Open Access Journals (Sweden)
James R. Wootton
2014-03-01
The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.
Reactions of EFL Students to Oral Error Correction.
Bang, Young-Joo
1999-01-01
Investigated college students' attitudes and preferences toward error correction in the English-as-a-Foreign-Language (EFL) classroom. A questionnaire was administered to 100 EFL students enrolled in spoken-English classes at a university.
Correcting a Persistent Manhattan Project Statistical Error
Reed, Cameron
2011-04-01
In his 1987 autobiography, Major-General Kenneth Nichols, who served as the Manhattan Project's "District Engineer" under General Leslie Groves, related that when the Clinton Engineer Works at Oak Ridge, TN, was completed it was consuming nearly one-seventh (~ 14%) of the electric power being generated in the United States. This statement has been reiterated in several editions of a Department of Energy publication on the Manhattan Project. This remarkable claim has been checked against power generation and consumption figures available in Manhattan Engineer District documents, Tennessee Valley Authority records, and historical editions of the Statistical Abstract of the United States. The correct figure is closer to 0.9% of national generation. A speculation will be made as to the origin of Nichols' erroneous one-seventh figure.
Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.
Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian
2016-04-01
While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.
Three-Phase Text Error Correction Model for Korean SMS Messages
Byun, Jeunghyun; Park, So-Young; Lee, Seung-Wook; Rim, Hae-Chang
In this paper, we propose a three-phase text error correction model consisting of a word spacing error correction phase, a syllable-based spelling error correction phase, and a word-based spelling error correction phase. In order to reduce the text error correction complexity, the proposed model corrects text errors step by step. With the aim of correcting word spacing errors, spelling errors, and mixed errors in SMS messages, the proposed model tries to separately manage the word spacing error correction phase and the spelling error correction phase. For the purpose of utilizing both the syllable-based approach covering various errors and the word-based approach correcting some specific errors accurately, the proposed model subdivides the spelling error correction phase into the syllable-based phase and the word-based phase. Experimental results show that the proposed model can improve the performance by solving the text error correction problem based on the divide-and-conquer strategy.
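The divide-and-conquer structure of such a pipeline can be sketched as follows (toy English-language rules standing in for the Korean spacing and spelling models, which are assumptions for illustration only):

```python
def correct_spacing(text):
    # Hypothetical phase 1: fix word-spacing errors (toy rule).
    return text.replace("helloworld", "hello world")

def correct_syllable_spelling(text):
    # Hypothetical phase 2: syllable/character-level spelling fixes.
    return text.replace("wrld", "world")

def correct_word_spelling(text):
    # Hypothetical phase 3: word-level fixes for specific errors.
    return text.replace("helo", "hello")

def three_phase_correct(text):
    """Divide-and-conquer: apply the three correction phases in
    sequence, so each phase handles one error class."""
    for phase in (correct_spacing, correct_syllable_spelling,
                  correct_word_spelling):
        text = phase(text)
    return text
```

Separating the phases keeps each model small; mixed errors are handled because later phases see the output of earlier ones.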
Treelet Probabilities for HPSG Parsing and Error Correction
Ivanova, Angelina; van Noord, Gerardus; Calzolari, Nicoletta; al, et
2014-01-01
Most state-of-the-art parsers are designed to produce an analysis for any input despite errors. However, small grammatical mistakes in a sentence often cause a parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly
Correction of polarization error in scanned array weather radar antennas
Pang, C.; Hoogeboom, P.; Russchenberg, H.; Wang, T.; Dong, J.; Wang, X.
2014-01-01
In this paper, the polarization error correction of dual-polarized planar scanned array weather radar in alternately transmitting and simultaneously receiving (ATSR) mode is analyzed. A method based on point correction and a method taking the complete array patterns into account are discussed. To
Entanglement renormalization, quantum error correction, and bulk causality
Energy Technology Data Exchange (ETDEWEB)
Kim, Isaac H. [IBM T.J. Watson Research Center,1101 Kitchawan Rd., Yorktown Heights, NY (United States); Kastoryano, Michael J. [NBIA, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2017-04-07
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
Grammatical error correction using hybrid systems and type filtering
Felice, M; Yuan, Z; Andersen, ØE; Yannakoudakis, H; Kochmar, Ekaterina
2014-01-01
This paper describes our submission to the CoNLL 2014 shared task on grammatical error correction using a hybrid approach, which includes both a rule-based and an SMT system augmented by a large web-based language model. Furthermore, we demonstrate that correction type estimation can be used to remove unnecessary corrections, improving precision without harming recall. Our best hybrid system achieves state-of-the-art results, ranking first on the original test set and second on the test set...
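The type-filtering step can be illustrated with a small sketch (the correction types and per-type precision estimates below are hypothetical, not values from the paper):

```python
def filter_corrections(corrections, type_precision, threshold=0.5):
    """Keep only proposed corrections whose estimated per-type
    precision meets the threshold; dropping unreliable types raises
    precision at little cost to recall."""
    return [c for c in corrections
            if type_precision.get(c["type"], 0.0) >= threshold]

# Hypothetical proposals from a hybrid rule-based/SMT pipeline.
proposals = [
    {"span": "a apple", "fix": "an apple", "type": "ArtOrDet"},
    {"span": "informations", "fix": "information", "type": "Nn"},
    {"span": "on Monday", "fix": "in Monday", "type": "Prep"},
]
precision = {"ArtOrDet": 0.7, "Nn": 0.8, "Prep": 0.2}
kept = filter_corrections(proposals, precision)
```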
Non-linear Loudspeaker Unit Modelling
DEFF Research Database (Denmark)
Pedersen, Bo Rohde; Agerkvist, Finn T.
2008-01-01
Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major non-linear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of thr… frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed…
New class of photonic quantum error correction codes
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic "cat codes" but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Directory of Open Access Journals (Sweden)
Chitra Jayathilake
2013-01-01
Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in Universities.
On the Design of Error-Correcting Ciphers
Directory of Open Access Journals (Sweden)
Mathur Chetan Nanjunda
2006-01-01
Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7) experimental comparison
An investigation of error correcting techniques for OMV and AXAF
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and number of uncorrectable errors were calculated for each data set before testing.
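Error-pattern generation of the kind described (Poisson-process gaps between errors, Gaussian burst lengths) can be sketched as follows; the parameter values are hypothetical:

```python
import random

def error_pattern(length, mean_gap, mean_burst, sd_burst, seed=0):
    """Generate a 0/1 error mask: exponentially distributed gaps
    between bursts (i.e. a Poisson process of burst arrivals) and
    Gaussian-distributed burst lengths, clipped to at least 1."""
    rng = random.Random(seed)
    mask = [0] * length
    pos = int(rng.expovariate(1.0 / mean_gap))
    while pos < length:
        burst = max(1, int(rng.gauss(mean_burst, sd_burst)))
        for i in range(pos, min(pos + burst, length)):
            mask[i] = 1   # bit positions hit by this burst
        pos += burst + int(rng.expovariate(1.0 / mean_gap))
    return mask

# Hypothetical test vector: 1000 symbols, mean gap 50, bursts ~N(3, 1).
mask = error_pattern(1000, mean_gap=50, mean_burst=3, sd_burst=1)
```

XOR-ing such a mask onto encoded data and counting decoder failures reproduces the style of test described above.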
Detecting and correcting hard errors in a memory array
Kalamatianos, John; John, Johnsy Kanjirapallil; Gelinas, Robert; Sridharan, Vilas K.; Nevius, Phillip E.
2015-11-19
Hard errors in the memory array can be detected and corrected in real-time using reusable entries in an error status buffer. Data may be rewritten to a portion of a memory array and a register in response to a first error in data read from the portion of the memory array. The rewritten data may then be written from the register to an entry of an error status buffer in response to the rewritten data read from the register differing from the rewritten data read from the portion of the memory array.
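The rewrite-and-compare logic can be modelled in a few lines (a toy software analogue of the hardware scheme; names and structure are illustrative assumptions):

```python
class ErrorStatusBuffer:
    """Toy model of the scheme described above: data is rewritten to
    both the memory location and a register; if the two read-backs
    differ, the fault is persistent (hard) and an entry is recorded
    so the location can be corrected or remapped later."""
    def __init__(self):
        self.entries = {}   # addr -> known-good value

    def check(self, addr, memory, register_copy):
        if memory[addr] != register_copy:
            self.entries[addr] = register_copy  # hard error detected
            return "hard"
        return "ok"

# Hypothetical faulty memory: address 2 is stuck at 0.
memory = [5, 7, 0, 9]
esb = ErrorStatusBuffer()
status = esb.check(2, memory, register_copy=3)  # rewrite of 3 didn't stick
```

A transient fault would disappear after the rewrite and take the "ok" path, which is how hard errors are distinguished from soft ones.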
Error correction and degeneracy in surface codes suffering loss
International Nuclear Information System (INIS)
Stace, Thomas M.; Barrett, Sean D.
2010-01-01
Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.
Small refractive errors--their correction and practical importance.
Skrbek, Matej; Petrová, Sylvie
2013-04-01
Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative effort and are not manifested as a loss of visual acuity. This paper aims to answer a few questions about their correction, following from theoretical presumptions and expectations. The main goal of this research was to confirm or refute the hypothesis about the convenience, efficiency and frequency of a correction that does not raise visual acuity (or where the improvement is not noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is an indispensable group of subjects with good visual acuity for whom the correction is applicable, although it does not improve visual acuity much; the main purpose is to eliminate asthenopia. The prime reason for acceptance of the correction typically changes during life as accommodation declines: young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus; elderly people acquire the correction to improve visual acuity. Overall, the correction was found useful in more than 30% of cases when the gain in visual acuity was at least 0.3 on the decimal scale.
Software for Correcting the Dynamic Error of Force Transducers
Directory of Open Access Journals (Sweden)
Naoki Miyashita
2014-07-01
Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model is evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper.
Levenshtein error-correcting barcodes for multiplexed DNA sequencing.
Buschmann, Tilo; Bystrykh, Leonid V
2013-09-11
High-throughput sequencing technologies are improving in quality, capacity and costs, providing versatile applications in DNA and RNA research. For small genomes or fractions of larger genomes, DNA samples can be mixed and loaded together on the same sequencing track. This so-called multiplexing approach relies on a specific DNA tag or barcode that is attached to the sequencing or amplification primer and hence appears at the beginning of the sequence in every read. After sequencing, each sample read is identified on the basis of the respective barcode sequence. Alterations of DNA barcodes during synthesis, primer ligation, DNA amplification, or sequencing may lead to incorrect sample identification unless the error is revealed and corrected. This can be accomplished by implementing error-correcting algorithms and codes. This barcoding strategy increases the total number of correctly identified samples, thus improving overall sequencing efficiency. Two popular sets of error-correcting codes are Hamming codes and Levenshtein codes. Levenshtein codes operate only on words of known length. Since a DNA sequence with an embedded barcode is essentially one continuous long word, application of the classical Levenshtein algorithm is problematic. In this paper we demonstrate the decreased error correction capability of Levenshtein codes in a DNA context and suggest an adaptation of Levenshtein codes that is proven to correct nucleotide errors in DNA sequences efficiently. In our adaptation we take the DNA context into account and redefine the word length whenever an insertion or deletion is revealed. In simulations we show the superior error correction capability of the new method compared to traditional Levenshtein and Hamming based codes in the presence of multiple errors. We present an adaptation of Levenshtein codes to DNA contexts capable of correction of a pre-defined number of insertion, deletion, and substitution mutations. Our improved method is additionally capable
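The basic demultiplexing step that such barcode codes support can be sketched as follows (a toy nearest-codeword assignment; the paper's adaptation additionally redefines the word length when an indel is revealed, which this sketch does not do):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def assign_barcode(read_prefix, barcodes):
    """Assign a read to the barcode at minimal edit distance. Real
    Levenshtein codes constrain the barcode set so that the nearest
    codeword is unique up to the designed error radius."""
    return min(barcodes, key=lambda bc: levenshtein(read_prefix, bc))
```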
The role of error correction in communicative second language teaching
Directory of Open Access Journals (Sweden)
H. Ludolph Botha
2013-02-01
According to recent research, correction of errors in both oral and written communication does little to aid language proficiency in the second language. In the Natural Approach of Krashen and Terrell the emphasis is on the acquisition of informal communication. Because the message and the understanding of the message remain of utmost importance, error correction is avoided. In Suggestopedia, where the focus is also on communication, error correction is avoided as it inhibits the pupil.
Chitra Jayathilake
2013-01-01
Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element both for immediate and delayed language contexts, and collecting data from university undergraduates,...
Scalable error correction in distributed ion trap computers
International Nuclear Information System (INIS)
Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.
2006-01-01
A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.
Phase correction and error estimation in InSAR time series analysis
Zhang, Y.; Fattahi, H.; Amelung, F.
2017-12-01
During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategies of SAR satellites, such as large spatial and temporal baselines and non-regular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of the different components in InSAR time series and their uncertainties. We present an open-source Python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) the variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-SkyMed, TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with applications to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our results show precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
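The network-inversion step described above — each interferogram observes the phase difference between two acquisition dates, and a weighted least squares solve recovers displacement per date relative to the first — can be illustrated with a tiny synthetic example. The dates, weights, and numbers below are invented; PySAR's real pipeline is far more involved:

```python
# Minimal sketch of inverting a fully connected interferogram network
# into per-date displacements via weighted least squares (pure Python).

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# True cumulative displacement at dates t1..t3 (t0 is the reference).
truth = [2.0, 3.0, 7.0]

# Fully connected network: (i, j) is an interferogram between dates i, j.
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
def obs(i, j):
    di = truth[i - 1] if i else 0.0
    dj = truth[j - 1] if j else 0.0
    return dj - di

w = [1.0, 1.0, 0.5, 1.0, 1.0, 1.0]   # e.g. coherence-based weights
A = [[0.0] * 3 for _ in pairs]       # design matrix over unknowns d1..d3
b = []
for k, (i, j) in enumerate(pairs):
    if i: A[k][i - 1] = -1.0
    if j: A[k][j - 1] = 1.0
    b.append(obs(i, j))

# Weighted normal equations: (A^T W A) d = A^T W b.
N = [[sum(w[k] * A[k][r] * A[k][c] for k in range(len(pairs)))
      for c in range(3)] for r in range(3)]
rhs = [sum(w[k] * A[k][r] * b[k] for k in range(len(pairs)))
       for r in range(3)]
d = solve(N, rhs)
print([round(x, 6) for x in d])
```

With noise-free observations the over-determined system is consistent, so the inversion recovers the true displacement history exactly for any positive weights.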
Small Refractive Errors – Their Correction and Practical Importance
Skrbek, Matej; Petrová, Sylvie
2013-01-01
Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by enhanced accommodative exertion and are not exhibited by loss of visual acuity. This paper should answer a few questions about their correction, arising from theoretical presumptions and expectations of this dilemma. The main goal of this research was to (dis)confirm the hypothesis about the convenience, efficiency and frequency of the correction that does not raise the visual acui...
Passive quantum error correction of linear optics networks through error averaging
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks based on unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
DEFF Research Database (Denmark)
Andersen, Steffen; Harrison, Glenn W.; Hole, Arne Risa
2012-01-01
We develop an extension of the familiar linear mixed logit model to allow for the direct estimation of parametric non-linear functions defined over structural parameters. Classic applications include the estimation of coefficients of utility functions to characterize risk attitudes and discounting...
Method for decoupling error correction from privacy amplification
International Nuclear Information System (INIS)
Lo, Hoi-Kwong
2003-01-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
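The BINARY step at the heart of Cascade — locating a single differing bit in a block by comparing parities of successively halved sub-blocks — can be sketched directly. Full Cascade runs several shuffled passes over many blocks, and in the scheme above the exchanged parities would additionally be one-time-pad encrypted; this toy shows only one block in the clear:

```python
import random

# Sketch of Cascade's BINARY step: Alice and Bob hold strings that
# differ in exactly one bit inside this block; comparing parities of
# halved sub-blocks locates the error in log2(n) exchanges. (In the
# paper's variant these parities are sent one-time-pad encrypted so the
# error-correction stage leaks no syndrome information.)

def parity(bits):
    return sum(bits) % 2

def binary_search_error(alice, bob):
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice[lo:mid]) != parity(bob[lo:mid]):
            hi = mid          # error is in the left half
        else:
            lo = mid          # error is in the right half
    return lo

random.seed(1)
alice = [random.randint(0, 1) for _ in range(16)]
bob = alice[:]
bob[9] ^= 1                   # a single transmission error
pos = binary_search_error(alice, bob)
bob[pos] ^= 1                 # flip the located bit
print(pos, bob == alice)      # 9 True
```

The invariant is that the two parties' parities differ over any sub-block containing the error, so each comparison halves the search interval.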
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
2013-01-01
We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full … versions that are simple to compute. A simulation study shows that the finite-sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.
Phase error correction in wavefront curvature sensing via phase retrieval
DEFF Research Database (Denmark)
Almoro, Percival; Hanson, Steen Grüner
2008-01-01
Wavefront curvature sensing with phase error correction system is carried out using phase retrieval based on a partially-developed volume speckle field. Various wavefronts are reconstructed: planar, spherical, cylindrical, and a wavefront passing through the side of a bare optical fiber. Spurious...
The Mathematics of Error Correcting Quantum Codes
Indian Academy of Sciences (India)
The Mathematics of Error Correcting Quantum Codes - Quantum Coding. K R Parthasarathy. General Article, Resonance – Journal of Science Education, Volume 6, Issue 4, April 2001, pp. 38-51.
Topological quantum error correction with optimal encoding rate
International Nuclear Information System (INIS)
Bombin, H.; Martin-Delgado, M. A.
2006-01-01
We prove the existence of topological quantum error correcting codes with encoding rates k/n asymptotically approaching the maximum possible value. Explicit constructions of these topological codes are presented using surfaces of arbitrary genus. We find a class of regular toric codes that are optimal. For physical implementations, we present planar topological codes
Communication Systems Simulator with Error Correcting Codes Using MATLAB
Gomez, C.; Gonzalez, J. E.; Pardo, J. M.
2003-01-01
In this work, the characteristics of a simulator for channel coding techniques used in communication systems are described. This software has been designed for engineering students in order to facilitate the understanding of how error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…
Enhancing cryptographic primitives with techniques from error correcting codes
Preneel, Bart; Dodunekov, Stefan; Rijmen, Vincent; Nikova, S.I.
The NATO Advanced Research Workshop on Enhancing Cryptographic Primitives with Techniques from Error Correcting Codes has been organized in Veliko Tarnovo, Bulgaria, on October 6-9, 2008 by the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences in cooperation with COSIC,
Quantum algorithms and quantum maps - implementation and error correction
International Nuclear Information System (INIS)
Alber, G.; Shepelyansky, D.
2005-01-01
Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)
The Mathematics of Error Correcting Quantum Codes
Indian Academy of Sciences (India)
The Mathematics of Error Correcting Quantum Codes. K R Parthasarathy is INSA C V Raman Research Professor at Indian Statistical Institute, Delhi. His interests are quantum probability, mathematical foundations of quantum mechanics and probability theory. He is the author of two classic books in probability theory and ...
ERROR CORRECTION, CO-INTEGRATION AND IMPORT DEMAND ...
African Journals Online (AJOL)
Abstract. The objective of this study is to determine empirically the import demand equation in Nigeria using error correction and cointegration techniques. All the variables employed in this study were found stationary at first difference using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. Empirical ...
A Cointegration And Error Correction Approach To Broad Money ...
African Journals Online (AJOL)
This study considered the stability of the broad money demand function in Nigeria using data for 1970 to 2004. The study applied the cointegration and error correction approach. The Johansen cointegration test shows that a long run equilibrium relationship exists between broad money demand and its determinants. While the ...
Improvement of Thai error correction system by memetic algorithm
Directory of Open Access Journals (Sweden)
Krit Somkantha
2014-09-01
This paper presents an efficient technique for improving the Thai error correction system using a memetic algorithm. The token passing algorithm is used to construct the word graph, and the language model is used to check candidate sentences. The correction process starts with word graph construction by the token passing algorithm; the correct sentence is then searched for by the memetic algorithm, with the fitness function derived from the language model. For a long sentence the token passing algorithm produces a very large search space, and the memetic algorithm is used to search it for the correct sentence while reducing the analysis time. The performance of the proposed method is evaluated and compared to full search and a genetic algorithm. The experimental results show that the proposed method performs very well and yields better performance than the compared methods: it can find the best sentence accurately and quickly.
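A memetic algorithm is a genetic algorithm whose offspring are additionally refined by local search. The toy below searches a tiny invented word lattice with a made-up bigram/unigram "language model" score; it only illustrates the genetic-plus-local-search structure, not the paper's Thai system:

```python
import random

# Toy memetic search over a word lattice: genetic operators (selection,
# crossover, mutation) plus hill-climb local search, scored by a toy
# language model. Lattice and scores are invented for illustration.

lattice = [["I", "Eye"], ["saw", "sore"], ["the", "a"], ["cat", "cart"]]
good_words = {"I", "saw", "the", "cat"}
good_pairs = {("I", "saw"), ("saw", "the"), ("the", "cat")}

def score(sent):
    return (sum(w in good_words for w in sent)
            + sum(p in good_pairs for p in zip(sent, sent[1:])))

def hill_climb(sent):
    """Coordinate-ascent local search: the 'memetic' refinement step."""
    sent, improved = sent[:], True
    while improved:
        improved = False
        for i, opts in enumerate(lattice):
            for w in opts:
                cand = sent[:i] + [w] + sent[i + 1:]
                if score(cand) > score(sent):
                    sent, improved = cand, True
    return sent

def memetic(generations=20, pop_size=8):
    random.seed(0)
    pop = [[random.choice(c) for c in lattice] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        keep = pop[:pop_size // 2]                 # elitist selection
        children = []
        for _ in range(pop_size - len(keep)):
            a, b = random.sample(keep, 2)
            cut = random.randrange(1, len(lattice))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.3:              # mutation
                i = random.randrange(len(lattice))
                child[i] = random.choice(lattice[i])
            children.append(hill_climb(child))
        pop = keep + children
    return max(pop, key=score)

print(" ".join(memetic()))   # -> "I saw the cat"
```

The local search does most of the work on this tiny problem; on a real lattice it only polishes candidates, while the genetic layer maintains diversity across the huge search space.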
On the synthesis of DNA error correcting codes.
Ashlock, Daniel; Houghten, Sheridan K; Brown, Joseph Alexander; Orth, John
2012-10-01
DNA error correcting codes over the edit metric consist of embeddable markers for sequencing projects that are tolerant of sequencing errors. When a genetic library has multiple sources for its sequences, use of embedded markers permits tracking of sequence origin. This study compares different methods for synthesizing DNA error correcting codes. A new code-finding technique called the salmon algorithm is introduced and used to improve the size of best known codes in five difficult cases of the problem, including the most studied case: length six, distance three codes. An updated table of the best known code sizes with 36 improved values, resulting from three different algorithms, is presented. Mathematical background results for the problem from multiple sources are summarized. A discussion of practical details that arise in application, including biological design and decoding, is also given in this study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
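Whether a candidate set of codewords forms an edit-metric code of a given minimum distance is easy to verify, even though finding large such codes (the paper's search problem) is hard. The codewords below are invented, widely separated examples, not the length-six, distance-three codes the study optimizes:

```python
from itertools import combinations

# Sketch: verify the minimum pairwise edit distance of a small DNA
# codeword set. Codewords here are illustrative, not from the paper.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def min_distance(code):
    return min(edit_distance(a, b) for a, b in combinations(code, 2))

code = ["AAAAAA", "CCCTTT", "GGGCCC", "TTTGGG"]   # hypothetical codewords
print(min_distance(code))
```

A code of minimum edit distance d can correct up to floor((d-1)/2) insertion, deletion, or substitution errors per marker, which is why maximizing code size at fixed length and distance matters for multiplexed sequencing.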
Secure and Reliable IPTV Multimedia Transmission Using Forward Error Correction
Directory of Open Access Journals (Sweden)
Chi-Huang Shih
2012-01-01
With the wide deployment of Internet Protocol (IP) infrastructure and the rapid development of digital technologies, Internet Protocol Television (IPTV) has emerged as one of the major multimedia access techniques. A general IPTV transmission system employs both encryption and forward error correction (FEC) to provide the authorized subscriber with a high-quality perceptual experience. This two-layer processing, however, complicates the system design in terms of computational cost and management cost. In this paper, we propose a novel FEC scheme to ensure the secure and reliable transmission of IPTV multimedia content and services. The proposed secure FEC utilizes the characteristics of FEC, including the FEC-encoded redundancies and the limitation of error correction capacity, to protect the multimedia packets against malicious attacks and data transmission errors/losses. Experimental results demonstrate that the proposed scheme achieves performance similar to that of the joint encryption and FEC scheme.
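The erasure-recovery mechanics underlying packet-level FEC can be shown with the simplest possible code: one XOR parity packet per block, which lets the receiver rebuild any single lost packet. This illustrates only the redundancy side; the paper's scheme additionally folds protection against tampering into the same FEC layer:

```python
# Sketch of packet-level FEC: one XOR parity packet per block recovers
# a single lost packet. Packet contents are invented for illustration.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    parity = bytes(len(packets[0]))      # all-zero packet
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(packets, parity):
    """packets: list with exactly one None marking the lost packet."""
    missing = packets.index(None)
    rebuilt = parity
    for i, p in enumerate(packets):
        if i != missing:
            rebuilt = xor_bytes(rebuilt, p)
    return packets[:missing] + [rebuilt] + packets[missing + 1:]

block = [b"IPTV", b"data", b"pkts"]
parity = make_parity(block)
lost = [b"IPTV", None, b"pkts"]          # middle packet lost in transit
print(recover(lost, parity))
```

Because XOR is its own inverse, XOR-ing the parity with all surviving packets cancels them out and leaves exactly the missing packet.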
Farahani, Ali Akbar; Salajegheh, Soory
2015-01-01
Although the provision of error correction is common in education, there are controversies regarding "when" correction is most effective and why it is effective. This study investigated the differences between Iranian English as a foreign language (EFL) teachers and learners regarding their perspectives towards the timeline of error…
Singh, N N
1990-04-01
The effects of two error-correction procedures on oral reading errors and a control condition were compared in an alternating treatments design with three students who were moderately mentally retarded. The two procedures evaluated were word supply and sentence repeat. The teacher supplied the reader with the correct word immediately after each student error during the word-supply condition. During the sentence-repeat condition, the teacher supplied the correct word immediately after each student error, required the student to repeat the correct word, complete reading the sentence, and then reread the entire sentence. Both word-supply and sentence-repeat procedures were effective in reducing oral reading errors when compared to a no-intervention control condition, but sentence repeat was superior to word supply. In addition, a similar relationship was found between the two procedures when the students were tested for retention on the same reading passages a week later. These results show that sentence repeat is more effective than is the commonly used word-supply procedure in remediating the oral reading errors of students with moderate mental retardation.
Position error correcting method and apparatus for industrial robot
Energy Technology Data Exchange (ETDEWEB)
Okada, T.; Mohri, S.
1987-06-02
A method is described for correcting a position error of an industrial robot. The method comprises: operating the industrial robot according to position command values, thereby moving a measurement point provided on the industrial robot to a first position; measuring the position values of the first position of the measurement point with a three-dimensional measuring unit to obtain three-dimensional coordinates defining the measurement point; computing a position error of the industrial robot by defining the coordinates of the measurement point in a first equation incorporating parameters of the robot contributing to the position error, forming partial differential equations from the first equation for each of the parameters contributing to the position error.
Entanglement and Quantum Error Correction with Superconducting Qubits
Reed, Matthew
2015-03-01
Quantum information science seeks to take advantage of the properties of quantum mechanics to manipulate information in ways that are not otherwise possible. Quantum computation, for example, promises to solve certain problems in days that would take a conventional supercomputer the age of the universe to decipher. This power does not come without a cost, however, as quantum bits are inherently more susceptible to errors than their classical counterparts. Fortunately, it is possible to redundantly encode information in several entangled qubits, making it robust to decoherence and control imprecision with quantum error correction. I studied one possible physical implementation for quantum computing, employing the ground and first excited quantum states of a superconducting electrical circuit as a quantum bit. These "transmon" qubits are dispersively coupled to a superconducting resonator used for readout, control, and qubit-qubit coupling in the cavity quantum electrodynamics (cQED) architecture. In this talk I will give a general introduction to quantum computation and the superconducting technology that seeks to achieve it before explaining some of the specific results reported in my thesis. One major component is the first realization of three-qubit quantum error correction in a solid state device, where we encode one logical quantum bit in three entangled physical qubits and detect and correct phase- or bit-flip errors using a three-qubit Toffoli gate. My thesis is available at arXiv:1311.6759.
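The logic of the three-qubit bit-flip code mentioned above can be sketched classically: one logical bit is copied into three physical bits, two parity checks (the syndromes) locate any single flip, and the flip is undone. A real device measures these parities with ancillas and applies the correction coherently (e.g. via a Toffoli-class gate) without reading the data qubits directly:

```python
# Classical sketch of the three-qubit bit-flip code: two parity checks
# (syndromes) locate and correct any single bit-flip error.

def encode(bit):
    return [bit, bit, bit]

def syndrome(q):
    return (q[0] ^ q[1], q[1] ^ q[2])   # parity checks Z1Z2, Z2Z3

def correct(q):
    s = syndrome(q)
    # syndrome -> position of the flipped bit; (0, 0) means no error
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)
    if flip is not None:
        q[flip] ^= 1
    return q

for error_pos in range(3):
    q = encode(1)
    q[error_pos] ^= 1                   # inject a single bit-flip error
    assert correct(q) == [1, 1, 1]
print("all single bit-flips corrected")
```

The same structure with the roles of bit and phase exchanged corrects phase flips, which is how the thesis's experiment handles either error type.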
Atmospheric Error Correction of the Laser Beam Ranging
Directory of Open Access Journals (Sweden)
J. Saydi
2014-01-01
Atmospheric models based on surface measurements of pressure, temperature, and relative humidity have been used to increase laser ranging accuracy by ray tracing. Atmospheric refraction can cause significant errors in laser ranging systems. In the present research, the atmospheric effects on the laser beam were investigated using the principles of laser ranging. Atmospheric correction was calculated for 0.532, 1.3, and 10.6 micron wavelengths for the weather conditions of Tehran, Isfahan, and Bushehr in Iran from March 2012 to March 2013, on the basis of monthly means of meteorological data received from the meteorological stations in Tehran, Isfahan, and Bushehr. Atmospheric correction was calculated for 11, 100, and 200 kilometer laser beam propagation paths at 30°, 60°, and 90° elevation angles for each propagation. The results of the study showed that for the same months and beam emission angles, the atmospheric correction was most accurate for the 10.6 micron wavelength, and the laser ranging error decreased with increasing laser emission angle. The atmospheric corrections from the Marini-Murray and Mendes-Pavlis models were also compared for the 0.532 micron wavelength.
Thermalization, Error Correction, and Memory Lifetime for Ising Anyon Systems
Directory of Open Access Journals (Sweden)
Courtney G. Brell
2014-09-01
We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath. We propose a phenomenological model for the resulting short-time dynamics that includes pair creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of system, we use our thermalization model to estimate the lifetime of the quantum information stored in the encoded spaces. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting. We perform large-scale numerical simulations of these two-dimensional Ising anyon systems and find that the thresholds of these models range from 13% to 25%. To our knowledge, these are the first numerical threshold estimates for quantum codes without explicit additive structure.
Error-finding and error-correcting methods for the start-up of the SLC
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.
1987-02-01
During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicist's time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted by the computer automatically. Thereby, a new approach is proposed in the present paper to estimate model errors based on EM. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
Fault-tolerant error correction with the gauge color code
Brown, Benjamin J.; Nickerson, Naomi H.; Browne, Dan E.
2016-01-01
The constituent parts of a quantum computer are inherently vulnerable to errors. To this end, we have developed quantum error-correcting codes to protect quantum information from noise. However, discovering codes that are capable of a universal set of computational operations with the minimal cost in quantum resources remains an important and ongoing challenge. One proposal of significant recent interest is the gauge color code. Notably, this code may offer a reduced resource cost over other well-studied fault-tolerant architectures by using a new method, known as gauge fixing, for performing the non-Clifford operations that are essential for universal quantum computation. Here we examine the gauge color code when it is subject to noise. Specifically, we make use of single-shot error correction to develop a simple decoding algorithm for the gauge color code, and we numerically analyse its performance. Remarkably, we find threshold error rates comparable to those of other leading proposals. Our results thus provide the first steps of a comparative study between the gauge color code and other promising computational architectures. PMID:27470619
The contour method cutting assumption: error minimization and correction
Energy Technology Data Exchange (ETDEWEB)
Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL
2010-01-01
The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to impart a known profile of residual stresses.
Non-Linear Systems Identification Using Neural Networks
Chen, S.; Billings, S.A.; Grant, P.M.
1989-01-01
Multi-layered neural networks offer an exciting alternative for modelling complex non-linear systems. This paper investigates the identification of discrete-time non-linear systems using neural networks with a single hidden layer. New parameter estimation algorithms are derived for the neural network model based on a prediction error formulation and the application to both simulated and real data is included to demonstrate the effectiveness of the neural network approach.
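The prediction-error formulation described above — a one-hidden-layer network trained so that its one-step-ahead prediction of y(t) from past outputs and inputs matches the data — can be sketched on a toy discrete-time plant. The plant, network size, and learning rate below are invented for illustration, and plain stochastic gradient descent stands in for the paper's estimation algorithms:

```python
import math, random

# Sketch: one-hidden-layer tanh network identifying a toy nonlinear
# system y(t) = f(y(t-1), u(t-1)) by gradient descent on the one-step
# prediction error. System, sizes, and rates are illustrative only.

random.seed(42)
f = lambda y, u: 0.5 * y + 0.3 * u * u           # "true" plant

us = [random.uniform(-1, 1) for _ in range(200)]  # input sequence
ys = [0.0]
for u in us[:-1]:
    ys.append(f(ys[-1], u))                       # simulated output

H = 8                                             # hidden units
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def predict(y_prev, u_prev):
    h = [math.tanh(W1[i][0] * y_prev + W1[i][1] * u_prev + b1[i])
         for i in range(H)]
    return sum(W2[i] * h[i] for i in range(H)) + b2, h

def epoch(lr=0.05):
    """One pass of SGD on the squared one-step prediction error."""
    global b2
    total = 0.0
    for t in range(1, len(ys)):
        yp, h = predict(ys[t - 1], us[t - 1])
        e = yp - ys[t]
        total += e * e
        for i in range(H):                        # backpropagation
            dh = e * W2[i] * (1 - h[i] ** 2)
            W2[i] -= lr * e * h[i]
            W1[i][0] -= lr * dh * ys[t - 1]
            W1[i][1] -= lr * dh * us[t - 1]
            b1[i] -= lr * dh
        b2 -= lr * e
    return total / (len(ys) - 1)

first = epoch()
for _ in range(30):
    last = epoch()
print(f"MSE {first:.4f} -> {last:.4f}")
```

The mean squared prediction error falls over training, showing the network absorbing the plant's nonlinearity (here the u² term) that no linear model could capture.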
Coordinated joint motion control system with position error correction
Danko, George [Reno, NV
2011-11-22
Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between the actual end effector trajectory and the desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance work quality and productivity.
Topological quantum error correction in the Kitaev honeycomb model
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…
Walz, Joel C.
A review of literature on error correction shows a lack of agreement on the benefits of error correction in second language learning and confusion on which errors to correct and the approach to take to correction of both oral and written language. This monograph deals with these problems and provides examples of techniques in English, French,…
Distance error correction for time-of-flight cameras
Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian
2017-06-01
The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip makes it possible to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
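As a rough illustration of this kind of distance calibration, the sketch below fits a correction function to simulated measured-versus-true distances. It substitutes a simple 1-D polynomial regression for the paper's per-pixel random-forest regressor, and the data, error shape and parameters are all invented for the example.

```python
import numpy as np

# Hypothetical calibration data: true distances (e.g. from checkerboard
# geometry) and measured distances carrying a systematic "wiggling" error.
rng = np.random.default_rng(0)
true_d = np.linspace(0.5, 5.0, 200)                       # metres
measured = true_d + 0.04 * np.sin(1.5 * true_d) + rng.normal(0, 0.002, true_d.size)

# Fit a polynomial mapping measured distance -> systematic error, then
# subtract the predicted error at application time. (The paper trains a
# random forest on a richer feature vector; this is the simplest stand-in.)
coeffs = np.polyfit(measured, measured - true_d, deg=7)
predict_error = np.poly1d(coeffs)

def correct(d_measured):
    return d_measured - predict_error(d_measured)

raw_rmse = float(np.sqrt(np.mean((measured - true_d) ** 2)))
cor_rmse = float(np.sqrt(np.mean((correct(measured) - true_d) ** 2)))
print(f"RMSE before: {raw_rmse*100:.2f} cm, after: {cor_rmse*100:.2f} cm")
```

The same recipe, applied per pixel with more informative features, is what the forest-based method generalizes.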
A new controller for the JET error field correction coils
International Nuclear Information System (INIS)
Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.
2005-01-01
This paper describes the hardware and software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and optimization of the system. The use of conventional hardware has required a particular effort in designing the software in order to meet the specifications. The peculiarities of the new controller are highlighted, such as its very useful trigger logic interface, which in principle allows exploring various error field experiment scenarios.
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study......
Quantum secret sharing based on quantum error-correcting codes
International Nuclear Information System (INIS)
Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu
2011-01-01
Quantum secret sharing (QSS) is a procedure for sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of a classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because the information is encoded into the QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels.
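The (k, 2k − 1) threshold behaviour has a classical counterpart in Shamir secret sharing over a finite field, where any k of 2k − 1 shares reconstruct the secret and fewer do not. The sketch below is that classical analogue only, not the quantum scheme itself; the prime and secret are arbitrary choices.

```python
import random

random.seed(1)
P = 2_147_483_647  # a Mersenne prime; shares live in GF(P)

def make_shares(secret, k, n):
    """Split `secret` so that any k of n shares reconstruct it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

k = 3                              # threshold; n = 2k - 1 = 5 shares
shares = make_shares(42, k, 2 * k - 1)
print(reconstruct(shares[:k]))     # any k shares suffice -> 42
```

Any k-subset works: `reconstruct(shares[1:1 + k])` returns the same secret.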
PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM
Directory of Open Access Journals (Sweden)
David Kaluge
2017-03-01
This research investigated the effect of profitability, the rate of interest, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI (1 month) rate was used to represent the interest rate. This research found that all variables simultaneously affected stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
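The error correction model named above can be sketched with a two-step Engle-Granger procedure on simulated data; all series and parameter values below are synthetic stand-ins for the paper's stock-price variables.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Simulate two cointegrated series: x is a random walk, y tracks 2*x.
x = np.cumsum(rng.normal(size=n))
y = 2 * x + rng.normal(size=n)

# Step 1: estimate the long-run relation by OLS; the residual measures
# the deviation from equilibrium (the error-correction term, ECT).
beta = np.polyfit(x, y, 1)[0]
ect = y - beta * x

# Step 2: regress the change in y on the lagged ECT and the change in x.
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([ect[:-1], dx])
alpha, gamma = np.linalg.lstsq(X, dy, rcond=None)[0]
print(f"adjustment speed alpha = {alpha:.2f}")  # negative: errors get corrected
```

A significantly negative `alpha` is what "error correction" means here: deviations from the long-run relation are pulled back toward equilibrium.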
Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2013-01-01
In this paper we propose a novel scheme for on-the-fly code rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaptation based on the link state...... of the optical light path and the required amount of throughput going towards the destination node. The result is a dynamic FEC, which can be used to optimize the connections for throughput and/or energy efficiency, depending on the current demand....
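A minimal sketch of per-frame code-rate selection of the kind described above: pick the highest code rate (least overhead) whose correction capability still covers the currently observed pre-FEC bit error rate with some margin. The rates, BER thresholds and margin are illustrative assumptions, not values from the paper.

```python
# (rate, max tolerable pre-FEC BER) pairs, strongest code last.
# All numbers are illustrative assumptions.
CODE_RATES = [
    (0.97, 1e-5),
    (0.93, 1e-4),
    (0.87, 1e-3),
    (0.80, 4e-3),
]

def select_rate(pre_fec_ber, margin=2.0):
    """Return the highest code rate whose BER budget covers the link."""
    for rate, max_ber in CODE_RATES:
        if pre_fec_ber * margin <= max_ber:
            return rate
    return CODE_RATES[-1][0]   # worst link state: strongest code

clean_rate = select_rate(3e-6)   # clean link -> lightest code
noisy_rate = select_rate(2e-3)   # noisy link -> stronger code
print(clean_rate, noisy_rate)    # -> 0.97 0.8
```

Running this per optical frame gives the seamless rate adaptation the abstract describes: overhead (and decoder energy) tracks the link state instead of being provisioned for the worst case.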
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension…
Semantically Secure Symmetric Encryption with Error Correction for Distributed Storage
Directory of Open Access Journals (Sweden)
Juha Partala
2017-01-01
A distributed storage system (DSS) is a fundamental building block in many distributed applications. It applies linear network coding to achieve an optimal tradeoff between storage and repair bandwidth when node failures occur. Additively homomorphic encryption is compatible with linear network coding. The homomorphic property ensures that a linear combination of ciphertext messages decrypts to the same linear combination of the corresponding plaintext messages. In this paper, we construct a linearly homomorphic symmetric encryption scheme that is designed for a DSS. Our proposal provides simultaneous encryption and error correction by applying linear error correcting codes. We show its IND-CPA security for a limited number of messages based on binary Goppa codes and the following assumption: when dividing a scrambled generator matrix Ĝ into two parts Ĝ1 and Ĝ2, it is infeasible to distinguish Ĝ2 from random and to find a statistical connection between Ĝ1 and Ĝ2. Our infeasibility assumptions are closely related to those underlying the McEliece public key cryptosystem but are considerably weaker. We believe that the proposed problem has independent cryptographic interest.
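The linear homomorphic property the scheme relies on can be seen in miniature with a one-time pad over GF(2): the XOR of two ciphertexts decrypts, under the XOR of the two keys, to the XOR of the two plaintexts. This is only a toy illustration of the algebraic property, not the paper's Goppa-code construction.

```python
import os

def xor(a, b):
    """Bytewise XOR, i.e. addition over GF(2)."""
    return bytes(x ^ y for x, y in zip(a, b))

# One-time pad is linearly (additively) homomorphic:
# Enc_k1(m1) XOR Enc_k2(m2) = Enc_(k1 XOR k2)(m1 XOR m2)
m1, m2 = b"\x01\x02\x03", b"\x10\x20\x30"
k1, k2 = os.urandom(3), os.urandom(3)

c1, c2 = xor(m1, k1), xor(m2, k2)
combined = xor(c1, c2)                    # a node combines ciphertexts
decrypted = xor(combined, xor(k1, k2))    # decrypt with the combined key
print(decrypted == xor(m1, m2))           # -> True
```

In a DSS this is what lets storage nodes form linear combinations of encrypted blocks without ever decrypting them.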
Laser-error-correction control unit for machine tools
Energy Technology Data Exchange (ETDEWEB)
Burleson, R.R.
1978-05-23
An ultraprecision machining capability is needed for the laser fusion program. For this work, a precision air-bearing spindle has been mounted horizontally on a modified vertical column of a Moore Number 3 measuring machine base located in a development laboratory at the Oak Ridge Y-12 Plant. An open-loop control system previously installed on this machine was inadequate to meet the upcoming requirements since accuracy is limited to 0.5 μm by the errors in the machine's gears and leadscrew. A new controller was needed that could monitor the actual position of the machine and perform real-time error correction on the programmed tool path. It was necessary that this project: (1) attain an optimum tradeoff between hardware and software; (2) use a modular design for easy maintenance; (3) use a standard NC tape service; (4) drive the x and y axes with a positioning resolution of 5.08 nm and a feedback resolution of 10 nm; (5) drive the x and y axis motors at a velocity of 0.05 cm/sec in the contouring mode and 0.18 cm/sec in the positioning mode; (6) eliminate the possibility of tape-reader errors; and (7) allow editing of the part description data. The work that was done to develop and install the new machine controller is described.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
for linearity is of particular interest, as parameters of non-linear components vanish under the null. To solve the latter type of testing, we use the so-called sup tests, which here require development of new (uniform) weak convergence results. These results are potentially useful in general for analysis...
Topics in quantum cryptography, quantum error correction, and channel simulation
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing the non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret key assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement assisted quantum communication capacity. This formula provides a new family protocol, the private father protocol, under the resource inequality framework that includes the private classical communication without the assisted secret keys as a child protocol. For the fourth topic, we study and solve the problem of classical channel
A Secure RFID Authentication Protocol Adopting Error Correction Code
Directory of Open Access Journals (Sweden)
Chien-Ming Chen
2014-01-01
RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance.
A secure RFID authentication protocol adopting error correction code.
Chen, Chien-Ming; Chen, Shuai-Min; Zheng, Xinying; Chen, Pei-Yu; Sun, Hung-Min
2014-01-01
RFID technology has become popular in many applications; however, most of the RFID products lack security related functionality due to the hardware limitation of the low-cost RFID tags. In this paper, we propose a lightweight mutual authentication protocol adopting error correction code for RFID. Besides, we also propose an advanced version of our protocol to provide key updating. Based on the secrecy of shared keys, the reader and the tag can establish a mutual authenticity relationship. Further analysis of the protocol showed that it also satisfies integrity, forward secrecy, anonymity, and untraceability. Compared with other lightweight protocols, the proposed protocol provides stronger resistance to tracing attacks, compromising attacks and replay attacks. We also compare our protocol with previous works in terms of performance.
Detecting Positioning Errors and Estimating Correct Positions by Moving Window
Song, Ha Yoon; Lee, Jun Seok
2015-01-01
In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
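The detection idea, speeds derived from consecutive fixes checked against a moving mean ± k·std interval, can be sketched as follows. Window size, threshold and the glitch data are illustrative, not the paper's parameters.

```python
import math

def detect_outliers(points, window=5, k=3.0):
    """points: list of (t_seconds, x_m, y_m) fixes. Flags a fix whose
    implied speed falls outside mean +/- k*std of recent speeds, then
    replaces the flagged speed with the window mean so the error does
    not contaminate later statistics. A minimal sketch of the idea."""
    speeds, flags = [], [False] * len(points)
    for i in range(1, len(points)):
        (t0, x0, y0), (t1, x1, y1) = points[i - 1], points[i]
        v = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        recent = speeds[-window:]
        if len(recent) >= 3:
            mu = sum(recent) / len(recent)
            sd = (sum((s - mu) ** 2 for s in recent) / len(recent)) ** 0.5
            if abs(v - mu) > k * sd + 1e-9:
                flags[i] = True
                v = mu          # estimate: fall back to the window mean
        speeds.append(v)
    return flags

# A walker at 1 m/s with one GPS glitch jumping 100 m at t = 7.
pts = [(t, t * 1.0 + (100 if t == 7 else 0), 0.0) for t in range(12)]
flags = detect_outliers(pts)
print([i for i, f in enumerate(flags) if f])  # -> [7, 8]
```

Both transitions into and out of the glitched fix produce impossible speeds, so indices 7 and 8 are flagged; the correct-position estimate would then interpolate across them.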
Hepatitis B Virus Capsid Completion Occurs through Error Correction.
Lutomski, Corinne A; Lyktey, Nicholas A; Zhao, Zhongchao; Pierson, Elizabeth E; Zlotnick, Adam; Jarrold, Martin F
2017-11-22
Understanding capsid assembly is important because of its role in virus lifecycles and in applications to drug discovery and nanomaterial development. Many virus capsids are icosahedral, and assembly is thought to occur by the sequential addition of capsid protein subunits to a nucleus, with the final step completing the icosahedron. Almost nothing is known about the final (completion) step because the techniques usually used to study capsid assembly lack the resolution. In this work, charge detection mass spectrometry (CDMS) has been used to track the assembly of the T = 4 hepatitis B virus (HBV) capsid in real time. The initial assembly reaction occurs rapidly, on the time scale expected from low resolution measurements. However, CDMS shows that many of the particles generated in this process are defective and overgrown, containing more than the 120 capsid protein dimers needed to form a perfect T = 4 icosahedron. The defective and overgrown capsids self-correct over time to the mass expected for a perfect T = 4 capsid. Thus, completion is a distinct phase in the assembly reaction. Capsid completion does not necessarily occur by inserting the last building block into an incomplete, but otherwise perfect icosahedron. The initial assembly reaction can be predominently imperfect, and completion involves the slow correction of the accumulated errors.
THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING
Directory of Open Access Journals (Sweden)
Ketut Santi Indriani
2015-05-01
The process of second language (L2) learning is strongly influenced by the factors of error reconstruction that occur when the language is learned. Errors will definitely appear in the learning process. However, errors can be used as a step to accelerate the process of understanding the language. Doing self-correction (with or without cues) is one example. In the aspect of speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct, and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% identified speech errors, as much as 66% could be self-corrected accurately by the L2 speakers. Based on the analysis, it was also found that self-correction is able to improve L2 learning ability directly. This is evidenced by the absence of repetition of the same error after the error had been corrected.
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study tries to answer some ever-existent questions in writing fields regarding approaching the most effective ways to give feedback to students' errors in writing by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
Chaiwat Tantarangsee
2016-01-01
The purposes of this study are 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to find out the results of writing error correction by using coded indirect corrective feedback and writing error treatments. The sample includes 28 second-year English major students, Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study includes the lesson plan of the cours...
HOO 2012 Error Recognition and Correction Shared Task: Cambridge University Submission Report
Kochmar, Ekaterina; Andersen, Oeistein Edvin; Briscoe, Edward John
2012-01-01
Previous work on automated error recognition and correction of texts written by learners of English as a Second Language has demonstrated experimentally that training classifiers on error-annotated ESL text generally outperforms training on native text alone and that adaptation of error correction models to the native language (L1) of the writer improves performance. Nevertheless, most extant models have poor precision, particularly when attempting error correction, and this limits their usef...
DEFF Research Database (Denmark)
Du, Yigang
are performed under water by two geometrically focused piston transducers. It can be seen that the time pulses measured from a 0.5 inch diameter transducer and linearly simulated using the ASA are fairly comparable. The root mean square (RMS) error for the second harmonic field simulated by the ASA is 10.3% relative to the measurement from a 1 inch diameter transducer. A preliminary study for harmonic imaging using synthetic aperture sequential beamforming (SASB) has been demonstrated. A wire phantom underwater measurement is made by an experimental synthetic aperture real-time ultrasound scanner (SARUS...
On the non-linear scale of cosmological perturbation theory
International Nuclear Information System (INIS)
Blas, Diego; Garny, Mathias; Konstandin, Thomas
2013-04-01
We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.
On the non-linear scale of cosmological perturbation theory
Energy Technology Data Exchange (ETDEWEB)
Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-04-15
We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.
How EFL Students Can Use Google to Correct Their "Untreatable" Written Errors
Geiller, Luc
2014-01-01
This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several "untreatable" written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it is as useless and others arguing that error feedback…
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
The Importance of Non-Linearity on Turbulent Fluxes
DEFF Research Database (Denmark)
Rokni, Masoud
2007-01-01
Two new non-linear models for the turbulent heat fluxes are derived and developed from the transport equation of the scalar passive flux. These models are called non-linear eddy diffusivity and non-linear scalar flux. The structure of these models is compared with the exact solution, which is derived from the Cayley-Hamilton theorem and contains a three-term basis plus a non-linear term due to scalar fluxes. In order to study the performance of the model itself, all other turbulent quantities are taken from a DNS channel-flow database, and thus the error source has been minimized. The results are compared with the DNS channel flow and good agreement is achieved. It has been shown that the non-linear parts of the models are important to capture the true path of the streamwise scalar fluxes. It has also been shown that one of the model constants should have a negative sign rather than a positive one, which had...
Neural Networks for Non-linear Control
DEFF Research Database (Denmark)
Sørensen, O.
1994-01-01
This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.
Non-linear finite element modeling
DEFF Research Database (Denmark)
Mikkelsen, Lars Pilgaard
The note is written for courses in "Non-linear finite element method". The note has been used by the author teaching non-linear finite element modeling at Civil Engineering at Aalborg University, Computational Mechanics at Aalborg University Esbjerg, Structural Engineering at the University...
Simulation of non-linear ultrasound fields
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Fox, Paul D.; Wilhjelm, Jens E.
2002-01-01
An approach for simulating non-linear ultrasound imaging using Field II has been implemented using the operator splitting approach, where diffraction, attenuation, and non-linear propagation can be handled individually. The method uses the Earnshaw/Poisson solution to Burgers' equation for the non...
Non-linear realizations and bosonic branes
International Nuclear Information System (INIS)
West, P.
2001-01-01
In this very short note, following hep-th/0001216, we express the well known bosonic brane as a non-linear realization. The reader may also consult hep-th/9912226, 0001216 and 0005270 where the branes of M theory are constructed as a non-linear realisation. The automorphisms of the supersymmetry algebra play an essential role. (author)
Zum Problem der muendlichen Fehlerkorrektur (On the Problem of Oral Correction of Errors)
Wullen, T. Lothar
1975-01-01
Discrimination among errors is based on the degree of hindrance to understanding. The importance of error correction is emphasized, as is promptness in correction, with many students participating. Various possibilities for correction by students and teacher are presented. (Text is in German.) (IFS/WGA)
International Nuclear Information System (INIS)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
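The core idea, treating the gravimetric masses themselves as fitted parameters weighted by their known errors, can be sketched with a synthetic linear calibration. The model, noise levels, and the substitution of `scipy.optimize.least_squares` for VA02A are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

# Hypothetical standards (0.2% mass uncertainty) and detector responses
# following y = a*m + b with 0.5% system error.
m_true = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])       # mg
sig_m = 0.002 * m_true
m_grav = m_true + rng.normal(0, sig_m)                  # measured masses

a_true, b_true = 5.0, 0.3
sig_y = 0.005 * (a_true * m_true + b_true)
y = a_true * m_true + b_true + rng.normal(0, sig_y)

def residuals(p):
    """Chi-square residuals: curve misfit AND mass misfit, each weighted
    by its own error, so the masses are parameters, not fixed inputs."""
    a, b, m = p[0], p[1], p[2:]
    return np.concatenate([(y - (a * m + b)) / sig_y,
                           (m - m_grav) / sig_m])

fit = least_squares(residuals, x0=np.r_[1.0, 0.0, m_grav])
a_fit, b_fit = fit.x[:2]
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}")
```

Because the mass residuals carry their own weights, the fit can pull each standard's mass slightly within its stated error, exactly the consistent treatment the abstract describes.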
Performance Errors in Weight Training and Their Correction.
Downing, John H.; Lander, Jeffrey E.
2002-01-01
Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent- over and seated row…
Energy efficiency of error correcting mechanisms for wireless communications
Havinga, Paul J.M.
We consider the energy efficiency of error control mechanisms for wireless communication. Since high error rates are inevitable in the wireless environment, energy-efficient error control is an important issue for mobile computing systems. Although well-designed retransmission schemes can be optimal...
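A toy energy model makes the retransmission-versus-FEC tradeoff concrete; every constant below is an illustrative assumption, not a measured figure.

```python
E_TX_BIT = 1.0        # energy units per transmitted bit (assumed)
E_DECODE_BIT = 0.1    # extra per-bit cost of FEC decoding (assumed)

def arq_energy(payload_bits, frame_loss_rate):
    """Plain retransmission: expected sends until success = 1/(1-p)."""
    return payload_bits * E_TX_BIT / (1 - frame_loss_rate)

def fec_energy(payload_bits, code_rate, residual_loss_rate):
    """FEC: more bits on air plus decoding cost, but far fewer retries."""
    coded = payload_bits / code_rate
    return coded * (E_TX_BIT + E_DECODE_BIT) / (1 - residual_loss_rate)

# On a bad link (30% frame loss), a rate-0.8 code that pushes residual
# loss down to 1% wins despite its overhead and decoding cost.
print(arq_energy(1000, 0.30) > fec_energy(1000, 0.8, 0.01))  # -> True
```

Flip the numbers to a clean link (say 1% frame loss) and ARQ wins, which is why adaptive error control matters for energy.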
Allam, Amin
2015-07-14
Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
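A bare-bones version of alignment-based correction, majority vote over pre-aligned reads, can be sketched as follows. Karect's actual multiple alignment additionally handles indels and non-uniform coverage; the reads and offsets here are toy data.

```python
from collections import Counter

def correct_reads(reads, offsets):
    """Build a majority-vote consensus over reads already aligned at the
    given offsets, then rewrite each read from the consensus. Corrects
    substitution errors wherever coverage outvotes the error."""
    length = max(off + len(r) for r, off in zip(reads, offsets))
    columns = [Counter() for _ in range(length)]
    for read, off in zip(reads, offsets):
        for i, base in enumerate(read):
            columns[off + i][base] += 1
    consensus = "".join(c.most_common(1)[0][0] if c else "N" for c in columns)
    return [consensus[off:off + len(r)] for r, off in zip(reads, offsets)]

reads   = ["ACGTAC", "CGTACG", "GTACGT", "ACGTAC"]
offsets = [0, 1, 2, 0]
reads[0] = "ACCTAC"          # introduce a substitution error
print(correct_reads(reads, offsets)[0])  # -> ACGTAC
```

The single miscalled base is outvoted 3-to-1 by the overlapping reads, which is the high-coverage signal the abstract refers to.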
Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE
Directory of Open Access Journals (Sweden)
Patrick SAINT-DIZIER
2015-12-01
In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.
Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?
Directory of Open Access Journals (Sweden)
Magdenko A. Y.
2014-07-01
The article is devoted to the institute of cadastral error. Some questions and problems of cadastral error correction are considered. The material is based on current legislation and judicial practice.
Reed-Solomon error-correction as a software patch mechanism.
Energy Technology Data Exchange (ETDEWEB)
Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2013-11-01
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
Directory of Open Access Journals (Sweden)
Zbigniew Staroszczyk
2014-12-01
Abstract: In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits a frequency-domain descriptor of the conditioning paths, found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors
Leijten, Marielle; Van Waes, Luuk; Ransdell, Sarah
2010-01-01
Error analysis involves detecting, diagnosing, and correcting discrepancies between the text produced so far (TPSF) and the writer's mental representation of what the text should be. The use of different writing modes, like keyboard-based word processing and speech recognition, causes different types of errors during text production. While many…
ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES
Maria Corazon Saturnina A Castro
2017-01-01
Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: ...
The Effectiveness of Implicit and Explicit Error Correction on Learners' Performance
Varnosfadrani, Azizollah Dabaghi; Basturkmen, Helen
2009-01-01
The study looked at the effects of correction of learners' errors on learning of grammatical features. In particular, the manner of correction (explicit vs. implicit correction) was investigated. The study also focussed on the effectiveness of explicit and implicit correction of developmental early vs. developmental late features. Fifty-six…
Non-Linear Algebra and Bogolubov's Recursion
Morozov, A.; Serbyn, M.
2007-01-01
Numerous examples are given of application of Bogolubov's forest formula to iterative solutions of various non-linear equations: one and the same formula describes everything, from ordinary quadratic equation to renormalization in quantum field theory.
Detecting and correcting partial errors: Evidence for efficient control without conscious access.
Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B
2014-09-01
Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.
Correlations and Non-Linear Probability Models
DEFF Research Database (Denmark)
Breen, Richard; Holm, Anders; Karlson, Kristian Bernt
2014-01-01
Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between… Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
Extending Lifetime of Wireless Sensor Networks using Forward Error Correction
DEFF Research Database (Denmark)
Donapudi, S U; Obel, C O; Madsen, Jan
2006-01-01
Communication between nodes in wireless sensor networks (WSN) is susceptible to transmission errors caused by low signal strength or interference. These errors manifest themselves as lost or corrupt packets. This often leads to retransmission, which in turn results in increased power consumption...
Oral Reading Error Correction Behavior and Cloze Performance.
Page, William D.
1979-01-01
Describes a study that assessed how correction behavior in the oral reading of 48 elementary school students related to comprehension, as measured by cloze performance. Indicates that new measures of reading comprehension are needed, as correction behavior acts as an indicator of comprehension. (TJ)
Realization of three-qubit quantum error correction with superconducting circuits.
Reed, M D; DiCarlo, L; Nigg, S E; Sun, L; Frunzio, L; Girvin, S M; Schoelkopf, R J
2012-02-01
Quantum computers could be used to solve certain problems exponentially faster than classical computers, but are challenging to build because of their increased susceptibility to errors. However, it is possible to detect and correct errors without destroying coherence, by using quantum error correcting codes. The simplest of these are three-quantum-bit (three-qubit) codes, which map a one-qubit state to an entangled three-qubit state; they can correct any single phase-flip or bit-flip error on one of the three qubits, depending on the code used. Here we demonstrate such phase- and bit-flip error correcting codes in a superconducting circuit. We encode a quantum state, induce errors on the qubits and decode the error syndrome--a quantum state indicating which error has occurred--by reversing the encoding process. This syndrome is then used as the input to a three-qubit gate that corrects the primary qubit if it was flipped. As the code can recover from a single error on any qubit, the fidelity of this process should decrease only quadratically with error probability. We implement the correcting three-qubit gate (known as a conditional-conditional NOT, or Toffoli, gate) in 63 nanoseconds, using an interaction with the third excited state of a single qubit. We find 85 ± 1 per cent fidelity to the expected classical action of this gate, and 78 ± 1 per cent fidelity to the ideal quantum process matrix. Using this gate, we perform a single pass of both quantum bit- and phase-flip error correction and demonstrate the predicted first-order insensitivity to errors. Concatenation of these two codes in a nine-qubit device would correct arbitrary single-qubit errors. In combination with recent advances in superconducting qubit coherence times, this could lead to scalable quantum technology.
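The encode / induce-error / decode-syndrome / correct cycle described above can be mirrored classically for the bit-flip code. The sketch below simulates only the syndrome logic on classical bits; a real device operates on superpositions, which this toy cannot capture, and the correcting gate is the Toffoli implemented in the experiment.

```python
def encode(b):
    """Repetition encoding: one logical bit -> three physical bits."""
    return [b, b, b]

def apply_bit_flip(q, i):
    """Flip bit i (the classical analogue of an X error)."""
    q = q.copy()
    q[i] ^= 1
    return q

def syndrome(q):
    """Two parity checks, analogous to the ZZI and IZZ measurements."""
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    """Use the syndrome to locate and undo a single bit-flip."""
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    return apply_bit_flip(q, flip) if flip is not None else q

# Any single flip on any of the three bits is corrected.
for b in (0, 1):
    for i in range(3):
        assert correct(apply_bit_flip(encode(b), i)) == [b, b, b]
print("single bit-flip on any qubit corrected")
```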
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
Zhu, Honglin
2010-01-01
This article is based on a survey of college students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
The Effect of Error Correction on L2 Grammar Knowledge and Oral Proficiency.
Dekeyser, Robert M.
1993-01-01
The efficiency of oral error correction was investigated as a function of 35 Dutch-speaking high school seniors' individual characteristics of aptitude, motivation, anxiety, and previous achievement. Results were mixed but generally suggest that error correction does not lead to across-the-board improvement of achievement. (Contains 55…
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
Antecedent Control of Oral Reading Errors and Self-Corrections by Mentally Retarded Children.
Singh, Nirbhay N.; Singh, Judy
1984-01-01
The study evaluated effects of manipulating two antecedent stimulus events with respect to oral reading errors and self-corrections of four mentally retarded adolescents. Oral reading errors decreased and self-corrections increased when the children previewed the target text with their teacher before reading it orally. (Author/CL)
Supporting Dictation Speech Recognition Error Correction: The Impact of External Information
Shi, Yongmei; Zhou, Lina
2011-01-01
Although speech recognition technology has made remarkable progress, its wide adoption is still restricted by notable effort made and frustration experienced by users while correcting speech recognition errors. One of the promising ways to improve error correction is by providing user support. Although support mechanisms have been proposed for…
An upper bound on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2000-01-01
The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
Strategies for Detecting and Correcting Errors in Accounting Problems.
James, Marianne L.
2003-01-01
Reviews common errors in accounting tests that students commit resulting from deficiencies in fundamental prior knowledge, ineffective test taking, and inattention to detail and provides solutions to the problems. (JOW)
Continuous-variable quantum error correction II: the Gottesman-Kitaev-Preskill code
Noh, Kyungjoo; Duivenvoorden, Kasper; Albert, Victor V.; Brierley, R. T.; Reinhold, Philip; Li, Linshu; Shen, Chao; Schoelkopf, R. J.; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
Recently, various single mode bosonic quantum error-correcting codes (e.g., cat codes and binomial codes) have been developed to correct errors due to excitation loss of bosonic systems. Meanwhile, the Gottesman-Kitaev-Preskill (GKP) codes do not follow the simple design guidelines of cat and binomial codes, but nevertheless demonstrate excellent performance in correcting bosonic loss errors. To understand the underlying mechanism of the GKP codes, we represent them using a superposition of coherent states, investigate their performance as approximate error-correcting codes, and identify the dominant types of uncorrectable errors. This understanding will help us to develop more robust codes against bosonic loss errors, which will be useful for robust quantum information processing with bosonic systems.
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Pencil kernel correction and residual error estimation for quality-index-based dose calculations
International Nuclear Information System (INIS)
Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael
2006-01-01
Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
Upper bounds on the number of errors corrected by a convolutional code
DEFF Research Database (Denmark)
Justesen, Jørn
2004-01-01
We derive upper bounds on the weights of error patterns that can be corrected by a convolutional code with given parameters, or equivalently we give bounds on the code rate for a given set of error patterns. The bounds parallel the Hamming bound for block codes by relating the number of error patterns to the number of distinct syndromes.
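The syndrome-counting argument is the same one that gives the Hamming bound for block codes: at most one correctable error pattern per distinct syndrome. A small check with the [7,4] Hamming code (an illustrative block-code analogue, not the convolutional-code bound of this record) shows the bound being met with equality.

```python
# Parity-check matrix of the [7,4] Hamming code: column j is the
# binary representation of j+1, read top to bottom.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(e):
    """Syndrome s = H e (mod 2) of a length-7 error pattern e."""
    return tuple(sum(h * b for h, b in zip(row, e)) % 2 for row in H)

# There are 2^3 = 8 distinct syndromes, so at most 8 error patterns are
# correctable. The zero pattern plus the 7 single-bit errors exhaust them:
patterns = [tuple(1 if j == i else 0 for j in range(7)) for i in range(7)]
syndromes = {syndrome(e) for e in patterns}
assert len(syndromes) == 7 and (0, 0, 0) not in syndromes
print("7 single-bit errors -> 7 distinct non-zero syndromes (bound is tight)")
```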
Forward error correction based on algebraic-geometric theory
A Alzubi, Jafar; M Chen, Thomas
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.
The role of extensive recasts in error detection and correction by adult ESL students
Directory of Open Access Journals (Sweden)
Laura Hawkes
2016-03-01
Most of the laboratory studies on recasts have examined the role of intensive recasts provided repeatedly on the same target structure. This is different from the original definition of recasts as the reformulation of learner errors as they occur naturally and spontaneously in the course of communicative interaction. Using a within-group research design and a new testing methodology (a video-based stimulated correction posttest), this laboratory study examined whether extensive and spontaneous recasts provided during small-group work were beneficial to adult L2 learners. Participants were 26 ESL learners, who were divided into seven small groups (3-5 students per group), and each group participated in an oral activity with a teacher. During the activity, the students received incidental and extensive recasts to half of their errors; the other half of their errors received no feedback. Students' ability to detect and correct their errors in the three types of episodes was assessed using two types of tests: a stimulated correction test (a video-based computer test) and a written test. Students' reaction time on the error detection portion of the stimulated correction task was also measured. The results showed that students were able to detect more errors in error+recast episodes (an error followed by the provision of a recast) than in error-recast episodes (an error with no recast provided), though this difference did not reach statistical significance. They were also able to successfully and partially successfully correct more errors in error+recast episodes than in error-recast episodes, and this difference was statistically significant on the written test. The reaction time results also point towards a benefit from recasts, as students were able to complete the task slightly more quickly for error+recast episodes than for error-recast episodes.
DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION
Directory of Open Access Journals (Sweden)
E. B. Romanova
2016-03-01
Subject of Research. We have developed and present a method of design error correction for printed circuit boards (PCB) in electronic design automation (EDA). Control of the process parameters of a PCB in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of the conductors and gaps, the parameters of pads and via-holes, the parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge and other design errors. The result of the DRC program run is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem of correction repeatability of DRC errors was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. For its solution, an analysis of DRC errors was carried out and the methods of their correction were studied. We proposed to cluster DRC errors: each group contains the types of errors whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence of DRC errors has been determined. The algorithm has been tested in the following EDA: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC error correction time with the algorithm applied has been compared with the time without it. It has been shown that the time saved on DRC error correction increases with the number of error types, up to 3.7 times. Practical Relevance. The application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is equal to four or more. The proposed algorithm can be used in different
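The clustering step described above, grouping error types whose internal correction order is immaterial and then fixing the groups in a chosen sequence, can be sketched as follows. The error types, locations and ordering are invented for illustration and do not reflect any particular EDA tool's report format.

```python
from collections import defaultdict

# Illustrative DRC report entries: (error_type, location). The types and
# the chosen correction order are assumptions, not a vendor's format.
drc_report = [
    ("clearance", "U1.pad3"), ("width", "net GND"),
    ("clearance", "via12"), ("short", "net VCC/net GND"),
    ("width", "net CLK"),
]

# Groups are fixed in this order; within a group, order does not matter.
ORDER = ["short", "clearance", "width"]

def cluster(report):
    """Group DRC errors by type and return groups in correction order."""
    groups = defaultdict(list)
    for err_type, loc in report:
        groups[err_type].append(loc)
    return [(t, groups[t]) for t in ORDER if t in groups]

for err_type, locations in cluster(drc_report):
    print(err_type, "->", locations)
```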
Macroscopic and non-linear quantum games
International Nuclear Information System (INIS)
Aerts, D.; D'Hooghe, A.; Posiewnik, A.; Pykacz, J.
2005-01-01
Full text: We consider two models of quantum games. The first one is Marinatto and Weber's 'restricted' quantum game, in which only the identity and the spin-flip operators are used. We show that this quantum game allows a macroscopic mechanistic realization with the use of a version of the 'macroscopic quantum machine' described by Aerts already in the 1980s. In the second model we use non-linear quantum state transformations which operate on points of spin-1/2 on the Bloch sphere and which can be used to distinguish optimally between two non-orthogonal states. We show that these non-linear strategies out-perform any linear ones. Some hints on a possible theory of non-linear quantum games are given. (author)
Incident reports--correcting processes and reducing errors.
Dunn, Debra
2003-08-01
Although it may be human nature to make mistakes, it also is human nature to create solutions, identify alternatives, and meet future challenges. This article describes systems approaches to assessing the ways in which an organization operates and explains the types of failures that cause errors. The steps that guide managers in adapting an incident reporting system that incorporates continuous quality improvement are identified.
ACE: accurate correction of errors using K-mer tries
Sheikhizadeh Anari, S.; Ridder, de D.
2015-01-01
The quality of high-throughput next-generation sequencing data significantly influences the performance and memory consumption of assembly and mapping algorithms. The most ubiquitous platform, Illumina, mainly suffers from substitution errors. We have developed a tool, ACE, based on K-mer tries to
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
error correction in a communicative language teaching framework
African Journals Online (AJOL)
The problem that I would like to address in this paper is, I am convinced, experienced by all second language teacher trainers and teachers. In presenting the Communicative Approach and the theory on which it is founded, the data firmly guide them towards the conclusion that error is not the bête noire of language.
Non-Linear Acoustic Concealed Weapons Detector
2006-05-01
National Institute of Justice, Corrections Today, American Correctional Association, July 2001. Spawar Systems Center, Correctional Officer Duress System – Selection Guide.
Adaptive ensemble Kalman filtering of non-linear systems
Directory of Open Access Journals (Sweden)
Tyrus Berry
2013-07-01
A necessary ingredient of an ensemble Kalman filter (EnKF) is covariance inflation, used to control filter divergence and compensate for model error. There is an on-going search for inflation tunings that can be learned adaptively. Early in the development of Kalman filtering, Mehra (1970, 1972) enabled adaptivity in the context of linear dynamics with white noise model errors by showing how to estimate the model error and observation covariances. We propose an adaptive scheme, based on lifting Mehra's idea to the non-linear case, that recovers the model error and observation noise covariances in simple cases, and in more complicated cases, results in a natural additive inflation that improves state estimation. It can be incorporated into non-linear filters such as the extended Kalman filter (EKF), the EnKF and their localised versions. We test the adaptive EnKF on a 40-dimensional Lorenz96 model and show the significant improvements in state estimation that are possible. We also discuss the extent to which such an adaptive filter can compensate for model error, and demonstrate the use of localisation to reduce ensemble sizes for large problems.
Non-linear Post Processing Image Enhancement
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal to noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Directory of Open Access Journals (Sweden)
Karyn Heavner
2015-11-01
We sought to determine the potential effects of pooling on the power, false positive rate (FPR), and bias of the estimated associations between hypothetical environmental exposures and dichotomous autism spectrum disorder (ASD) status. Simulated birth cohorts in which the ASD outcome was assumed to have been ascertained with uncertainty were created. We investigated the impact on the power of the analysis (using logistic regression) to detect true associations with exposure (X1) and the FPR for a non-causal correlate of exposure (X2, r = 0.7) for a dichotomized ASD measure when the pool size, sample size, degree of measurement error variance in exposure, strength of the true association, and shape of the exposure-response curve varied. We found that there was minimal change (bias) in the measures of association for the main effect (X1). There is some loss of power, but there is less chance of detecting a false positive result for pooled compared to individual-level models. The number of pools had more effect on the power and FPR than the overall sample size. This study supports the use of pooling to reduce laboratory costs while maintaining statistical efficiency in scenarios similar to the simulated prospective risk-enriched ASD cohort.
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Non-linear Loudspeaker Unit Modelling
DEFF Research Database (Denmark)
Pedersen, Bo Rohde; Agerkvist, Finn
2008-01-01
Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of three...
Controller Reconfiguration for non-linear systems
Kanev, S.K.; Verhaegen, M.H.G.
2000-01-01
This paper outlines an algorithm for controller reconfiguration for non-linear systems, based on a combination of a multiple model estimator and a generalized predictive controller. A set of models is constructed, each corresponding to a different operating condition of the system. The interacting
Pharmaceutical applications of non-linear imaging
Strachan, Clare J.; Windbergs, Maike; Offerhaus, Herman L.
2011-01-01
Non-linear optics encompasses a range of optical phenomena, including two- and three-photon fluorescence, second harmonic generation (SHG), sum frequency generation (SFG), difference frequency generation (DFG), third harmonic generation (THG), coherent anti-Stokes Raman scattering (CARS), and
Using ridge regression in systematic pointing error corrections
Guiar, C. N.
1988-01-01
A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
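The ridge estimator referred to above is the closed form b = (XᵀX + λI)⁻¹Xᵀy, which stabilizes the fit when regressors are multicollinear. A minimal two-regressor sketch follows, with invented, nearly collinear data standing in for the correlated pointing-model variables; it is not the Voyager analysis itself.

```python
def ridge_2d(X, y, lam):
    """Closed-form ridge fit for two regressors: b = (X'X + lam*I)^-1 X'y.

    The 2x2 system is inverted by hand; lam = 0 gives ordinary least squares.
    """
    a = sum(x[0] * x[0] for x in X) + lam      # (X'X)[0,0] + lam
    b = sum(x[0] * x[1] for x in X)            # (X'X)[0,1]
    d = sum(x[1] * x[1] for x in X) + lam      # (X'X)[1,1] + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y)) # (X'y)[0]
    g1 = sum(x[1] * yi for x, yi in zip(X, y)) # (X'y)[1]
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Nearly collinear regressors: the multicollinearity the abstract describes.
X = [(1.0, 1.01), (2.0, 1.99), (3.0, 3.02), (4.0, 3.98)]
y = [2.0, 4.0, 6.0, 8.0]
print(ridge_2d(X, y, 0.0))  # least squares: all weight on one regressor
print(ridge_2d(X, y, 1.0))  # ridge: weight shared between the near-duplicates
```

With λ = 0 the fit puts the whole coefficient on the first regressor; the small ridge penalty spreads it roughly evenly across the two correlated regressors, which is the bias-for-stability trade the record describes.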
New laws of practice for learning and error correction
International Nuclear Information System (INIS)
Duffey, R.B.
2008-01-01
Relevant to design, operation and safety is the determination of risk and error rates. We provide the detailed comparison of our new learning and statistical theories for system outcome data with the traditional analysis of the learning curves obtained from tests with individual human subjects. The results provide a consistent predictive basis for the learning trends emerging all the way from timescales of many years in large technological system outcomes to actions that occur in about a tenth of a second for individual human decisions. Hence, we demonstrate both the common influence of the human element and the importance of statistical reasoning and analysis. (author)
5 CFR 894.105 - Who may correct an error in my enrollment?
2010-01-01
... correction of an administrative error if it receives evidence that it would be against equity (fairness) and... periods of the retroactive coverage. These premiums will not be on a pre-tax basis (they are not subject...
Machine-learning-assisted correction of correlated qubit errors in a topological code
Directory of Open Access Journals (Sweden)
Paul Baireuther
2018-01-01
Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.
Highly accurate fluorogenic DNA sequencing with information theory-based error correction.
Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi
2017-12-01
Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.
Using Oral Error Correction in Storytelling to Improve Students' Speaking Achievement
Sumantri, Arya Yoga Swara; Sudirman, Sudirman; Supriyadi, Deddy
2015-01-01
The aims of this study were to find a significant difference in students' speaking achievement after being taught using the oral error correction technique, to find out whether oral error correction can improve students' speaking ability in the aspects of vocabulary, fluency, comprehension, pronunciation and grammar, and to examine the teaching and learning process. This study used a quantitative method. The sample was purposively selected based on high English scores, namely class XI IPA1 at SMAN...
Effects and Correction of Closed Orbit Magnet Errors in the SNS Ring
Energy Technology Data Exchange (ETDEWEB)
Bunch, S.C.; Holmes, J.
2004-01-01
We consider the effect and correction of three types of orbit errors in SNS: quadrupole displacement errors, dipole displacement errors, and dipole field errors. Using the ORBIT beam dynamics code, we focus on orbit deflection of a standard pencil beam and on beam losses in a high intensity injection simulation. We study the correction of these orbit errors using the proposed system of 88 (44 horizontal and 44 vertical) ring beam position monitors (BPMs) and 52 (24 horizontal and 28 vertical) dipole corrector magnets. Correction is carried out numerically by adjusting the kick strengths of the dipole corrector magnets to minimize the sum of the squares of the BPM signals for the pencil beam. In addition to using the exact BPM signals as input to the correction algorithm, we also consider the effect of random BPM signal errors. For all three types of error and for perturbations of individual magnets, the correction algorithm always chooses the three-bump method to localize the orbit displacement to the region between the magnet and its adjacent correctors. The values of the BPM signals resulting from specified settings of the dipole corrector kick strengths can be used to set up the orbit response matrix, which can then be applied to the correction in the limit that the signals from the separate errors add linearly. When high intensity calculations are carried out to study beam losses, it is seen that the SNS orbit correction system, even with BPM uncertainties, is sufficient to correct losses to less than 10^-4 in nearly all cases, even those for which uncorrected losses constitute a large portion of the beam.
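The response-matrix correction step described in the abstract can be sketched in a few lines. The matrix below is random, not the actual SNS lattice response; only the dimensions loosely echo the 44 horizontal BPMs and 24 horizontal correctors mentioned above:

```python
import numpy as np

# Least-squares orbit correction sketch: find corrector kicks so that
# the corrected BPM readings x_bpm + R @ kicks are minimized.
rng = np.random.default_rng(1)
n_bpm, n_corr = 44, 24
R = rng.normal(size=(n_bpm, n_corr))    # illustrative orbit response matrix
true_kicks = rng.normal(scale=0.1, size=n_corr)
x_bpm = R @ true_kicks                  # orbit distortion seen at the BPMs

# Minimize the sum of squared BPM signals over the corrector settings.
kicks, *_ = np.linalg.lstsq(R, -x_bpm, rcond=None)
residual = x_bpm + R @ kicks
print(float(np.max(np.abs(residual)))) # ~0: orbit flattened at the BPMs
```

In the real machine the same least-squares solve is applied to measured BPM signals, with the response matrix obtained from specified corrector settings as the abstract describes.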
Directory of Open Access Journals (Sweden)
Adam Gąska
2013-12-01
Full Text Available LaserTracer (LT) systems are the most sophisticated and accurate laser tracking devices. They are mainly used for the correction of geometrical errors of machine tools and coordinate measuring machines. This process is about four times faster than standard methods based on the use of laser interferometers. The methodology of using the LaserTracer for the correction of geometrical errors, including a presentation of this system, the multilateration method and the software that was used, is described in detail in this paper.
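The multilateration principle the abstract mentions can be sketched as a small nonlinear least-squares problem. This is a simplified illustration with invented station positions; a real LaserTracer calibration additionally solves for the tracker positions and interferometer dead-path lengths:

```python
import numpy as np

# Recover a point from its distances to four known tracker stations
# via Gauss-Newton on the residuals f_i(x) = |x - p_i| - d_i.
stations = np.array([[0.0, 0.0, 0.0],
                     [2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])
target = np.array([0.7, 0.4, 0.9])
d = np.linalg.norm(stations - target, axis=1)  # "measured" distances

x = np.array([1.0, 1.0, 1.0])                  # initial guess
for _ in range(20):
    diff = x - stations
    r = np.linalg.norm(diff, axis=1)
    J = diff / r[:, None]                      # Jacobian of |x - p_i|
    step, *_ = np.linalg.lstsq(J, -(r - d), rcond=None)
    x = x + step
print(x)                                       # converges to the target
```

With exact distances and a well-conditioned station geometry, the iteration converges to the measured point; in practice the same solve is repeated for many machine-tool positions to map the geometric error field.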
High-speed parallel forward error correction for optical transport networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2010-01-01
This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology.
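The OTN Reed-Solomon RS(255,239) code is far too large to reproduce here; as a hedged stand-in, the tiny Hamming(7,4) code below illustrates the same forward-error-correction cycle (encode, corrupt, syndrome-decode) that the paper's hardware parallelizes. This is not the OTN code, only a minimal example of the principle:

```python
import numpy as np

# Hamming(7,4): 4 data bits, 3 parity bits, corrects any single bit flip.
G = np.array([[1,0,0,0,1,1,0],     # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],     # parity-check matrix, G @ H.T = 0 mod 2
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(msg4):
    return (msg4 @ G) % 2

def correct(word7):
    s = (H @ word7) % 2
    if s.any():
        # the syndrome equals the column of H at the error position
        pos = int(np.where((H.T == s).all(axis=1))[0][0])
        word7 = word7.copy()
        word7[pos] ^= 1
    return word7

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
rx = cw.copy(); rx[2] ^= 1         # single bit error on the channel
print(correct(rx)[:4])             # original message recovered
```

RS(255,239) works on the same syndrome-decoding principle but over the field GF(256), correcting up to 8 byte errors per codeword, which is what makes the hardware parallelization non-trivial at OTN4 rates.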
Efficient error correction for next-generation sequencing of viral amplicons
Directory of Open Access Journals (Sweden)
Skums Pavel
2012-06-01
Full Text Available Abstract Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm
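The published KEC algorithm is considerably more elaborate; the toy sketch below only illustrates the underlying k-mer-spectrum idea (all sequences, the k value and the solidity threshold are invented): k-mers seen many times across reads are "solid", and a base whose covering k-mers are all rare is replaced by the base that restores solid k-mers.

```python
from collections import Counter

def correct_read(read, kmer_counts, k=4, solid=3):
    """Toy k-mer-spectrum error correction (substitutions only)."""
    read = list(read)
    for i in range(len(read)):
        # all k-mer windows that cover position i
        spans = [(s, s + k) for s in range(max(0, i - k + 1), i + 1)
                 if s + k <= len(read)]
        if all(kmer_counts[''.join(read[s:e])] >= solid for s, e in spans):
            continue  # position already supported by solid k-mers
        for base in 'ACGT':
            trial = read[:i] + [base] + read[i + 1:]
            if all(kmer_counts[''.join(trial[s:e])] >= solid
                   for s, e in spans):
                read[i] = base  # this base makes every covering k-mer solid
                break
    return ''.join(read)

true_seq = "ACGTACGTACGT"
reads = [true_seq] * 5 + ["ACGTACCTACGT"]  # one read with a G->C error
counts = Counter(r[i:i + 4] for r in reads for i in range(len(r) - 3))
print(correct_read("ACGTACCTACGT", counts))  # error at position 6 repaired
```

Real amplicon correctors must additionally handle the homopolymer-dependent indel errors of 454 data that the abstract highlights, which is where the sequence-specific calibration comes in.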
DeCesare, A; Secanell, M; Lagravère, M O; Carey, J
2013-01-01
The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
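The landmark correction above minimizes relative distances and angles between fixed points in two images, which is in the spirit of generic least-squares rigid registration. The sketch below is the standard Kabsch alignment, not the authors' 6-point algorithm, and the landmark coordinates are invented:

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimizing sum ||R q + t - p||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cq).T @ (P - cp)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cp - R @ cq
    return R, t

# Six hypothetical cranial-base landmarks, rotated and shifted in image 2
P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [1, 1, 0], [1, 0, 1]])
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([2.0, -1.0, 0.5])     # landmarks in the second scan

R, t = kabsch(P, Q)
print(float(np.max(np.abs(Q @ R.T + t - P))))  # ~0 after superimposition
```

With noisy landmark locations the residual is no longer zero, and adding extra landmarks (six rather than four) averages down the operator error, which matches the study's finding.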
CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD
Directory of Open Access Journals (Sweden)
BUSUIOCEANU STELIANA
2013-08-01
Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.
Nassaji, Hossein
2011-01-01
A substantial number of studies have examined the effects of grammar correction on second language (L2) written errors. However, most of the existing research has involved unidirectional written feedback. This classroom-based study examined the effects of oral negotiation in addressing L2 written errors. Data were collected in two intermediate…
Non-linear soil-structure interaction
International Nuclear Information System (INIS)
Wolf, J.P.
1984-01-01
The basic equation of motion to analyse the interaction of a non-linear structure and an irregular soil with the linear unbounded soil is formulated in the time domain. The contribution of the unbounded soil involves convolution integrals of the dynamic-stiffness coefficients in the time domain and the corresponding motions. As another possibility, a flexibility formulation for the contribution of the unbounded soil using the dynamic-flexibility coefficients in the time domain, together with the direct-stiffness method for the structure and the irregular soil, can be applied. As an example of a non-linear soil-structure-interaction analysis, the partial uplift of the basemat of a structure is examined. (Author) [pt
Non-Linear Dynamics and Fundamental Interactions
Khanna, Faqir
2006-01-01
The book is directed to researchers and graduate students pursuing an advanced degree. It provides details of techniques directed towards solving problems in non-linear dynamics and chaos that are, in general, not amenable to a perturbative treatment. The consideration of fundamental interactions is a prime example where non-perturbative techniques are needed. Extension of these techniques to finite-temperature problems is considered. At present these ideas are primarily used in a perturbative context. However, non-perturbative techniques have been considered in some specific cases. Experts in the field of non-linear dynamics and chaos and fundamental interactions elaborate the techniques and provide a critical look at the present status and explore future directions that may be fruitful. The text of the main talks will be very useful to young graduate students who are starting their studies in these areas.
Oflazer, Kemal
1995-01-01
Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition...
Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors
Francois-Éric Racicot; Raymond Théoret; Alain Coen
2006-01-01
In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels as information disclosure, corporate governance and protection of investors.
Bu, Ting-ting; Wang, Xian-hua; Ye, Han-han; Jiang, Xin-hua
2016-01-01
High precision retrieval of atmospheric CH4 is influenced by a variety of factors. The uncertainties of ground properties and atmospheric conditions are important factors, such as surface reflectance, temperature profile, humidity profile and pressure profile. Surface reflectance is affected by many factors so that it is difficult to get the precise value. The uncertainty of surface reflectance will cause large error to retrieval result. The uncertainties of temperature profile, humidity profile and pressure profile are also important sources of retrieval error and they will cause unavoidable systematic error. This error is hard to eliminate only using CH4 band. In this paper, ratio spectrometry method and CO2 band correction method are proposed to reduce the error caused by these factors. Ratio spectrometry method can decrease the effect of surface reflectance in CH4 retrieval by converting absolute radiance spectrometry into ratio spectrometry. CO2 band correction method converts column amounts of CH4 into column averaged mixing ratio by using CO2 1.61 μm band and it can correct the systematic error caused by temperature profile, humidity profile and pressure profile. The combination of these two correction methods will decrease the effect caused by surface reflectance, temperature profile, humidity profile and pressure profile at the same time and reduce the retrieval error. GOSAT data were used to retrieve atmospheric CH4 to test and validate the two correction methods. The results showed that CH4 column averaged mixing ratio retrieved after correction was close to GOSAT Level2 product and the retrieval precision was up to -0.24%. The studies suggest that the error of CH4 retrieval caused by the uncertainties of ground properties and atmospheric conditions can be significantly reduced and the retrieval precision can be highly improved by using ratio spectrometry method and CO2 band correction method.
Preferences of ELT Learners in the Correction of Oral Vocabulary and Pronunciation Errors
Ustaci, Hale Yayla; Ok, Selami
2014-01-01
Vocabulary is an essential component of language teaching and learning process, and correct pronunciation of lexical items is an ultimate goal for language instructors in ELT programs. Apart from how lexical items should be taught, the way teachers correct oral vocabulary errors as well as those of pronunciation in line with the preferences of…
Did I say dog or cat? A study of semantic error detection and correction in children.
Hanley, J Richard; Cortis, Cathleen; Budd, Mary-Jane; Nozari, Nazbanou
2016-02-01
Although naturalistic studies of spontaneous speech suggest that young children can monitor their speech, the mechanisms for detection and correction of speech errors in children are not well understood. In particular, there is little research on monitoring semantic errors in this population. This study provides a systematic investigation of detection and correction of semantic errors in children between the ages of 5 and 8 years as they produced sentences to describe simple visual events involving nine highly familiar animals (the moving animals task). Results showed that older children made fewer errors and corrected a larger proportion of the errors that they made than younger children. We then tested the prediction of a production-based account of error monitoring that the strength of the language production system, and specifically its semantic-lexical component, should be correlated with the ability to detect and repair semantic errors. Strength of semantic-lexical mapping, as well as lexical-phonological mapping, was estimated individually for children by fitting their error patterns, obtained from an independent picture-naming task, to a computational model of language production. Children's picture-naming performance was predictive of their ability to monitor their semantic errors above and beyond age. This relationship was specific to the strength of the semantic-lexical part of the system, as predicted by the production-based monitor. Copyright © 2015 Elsevier Inc. All rights reserved.
Ma, S.; Quan, C.; Zhu, R.; Tay, C. J.
2012-08-01
Digital sinusoidal phase-shifting fringe projection profilometry (DSPFPP) is a powerful tool to reconstruct three-dimensional (3D) surface of diffuse objects. However, a highly accurate profile is often hindered by nonlinear response, color crosstalk and imbalance of a pair of digital projector and CCD/CMOS camera. In this paper, several phase error correction methods, such as Look-Up-Table (LUT) compensation, intensity correction, gamma correction, LUT-based hybrid method and blind phase error suppression for gray and color-encoded DSPFPP are described. Experimental results are also demonstrated to evaluate the effectiveness of each method.
Correction of errors in tandem mass spectrum extraction enhances phosphopeptide identification.
Hao, Piliang; Ren, Yan; Tam, James P; Sze, Siu Kwan
2013-12-06
The tandem mass spectrum extraction of phosphopeptides is more difficult and error-prone than that of unmodified peptides due to their lower abundance, lower ionization efficiency, the cofragmentation with other high-abundance peptides, and the use of MS(3) on MS(2) fragments with neutral losses. However, there are still no established methods to evaluate its correctness. Here we propose to identify and correct these errors via the combinatorial use of multiple spectrum extraction tools. We evaluated five free and two commercial extraction tools using Mascot and phosphoproteomics raw data from LTQ FT Ultra, in which RawXtract 1.9.9.2 identified the highest number of unique phosphopeptides (peptide expectation value exporting MS/MS fragments. We then corrected the errors by selecting the best extracted MGF file for each spectrum among the three tools for another database search. With the errors corrected, it results in the 22.4 and 12.2% increase in spectrum matches and unique peptide identification, respectively, compared with the best single method. Correction of errors in spectrum extraction improves both the sensitivity and confidence of phosphopeptide identification. Data analysis on nonphosphopeptide spectra indicates that this strategy applies to unmodified peptides as well. The identification of errors in spectrum extraction will promote the improvement of spectrum extraction tools in future.
Directory of Open Access Journals (Sweden)
Stamatović Dragana
2007-01-01
… Correction of PEF values obtained by peak flow meters with the traditional Wright scale shows a possibility of overtreatment in younger or short-stature children and undertreatment in older or taller ones if we use the old type of meters. The correction of the peak flow meter for non-linear error is a prerequisite for the application of asthma guidelines to PEF measurements.
Useful tools for non-linear systems: Several non-linear integral inequalities
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Vaezpour, M. S.
2013-01-01
Roč. 49, č. 1 (2013), s. 73-80 ISSN 0950-7051 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : Monotone measure * Comonotone functions * Integral inequalities * Universal integral Subject RIV: BA - General Mathematics Impact factor: 3.058, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-useful tools for non-linear systems several non-linear integral inequalities.pdf
DEFF Research Database (Denmark)
Ashraf, Bilal; Janss, Luc; Jensen, Just
sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons… In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data…
Local concurrent error detection and correction in data structures using virtual backpointers
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
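A local O(1) pointer check during a forward move can be sketched as follows. This is an illustrative simplification, not the paper's exact Virtual Double Linked List encoding: here the redundant check pointer is simply a reference to the predecessor, and corruption is detected by testing the invariant node.next.vback is node.

```python
# Sketch: each node carries a redundant "virtual backpointer" so a
# corrupted forward pointer is detectable locally during traversal.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None    # forward pointer
        self.vback = None   # redundant check pointer to the predecessor

def link(a, b):
    a.next, b.vback = b, a

def move_forward(node):
    """Advance one step, detecting a corrupted forward pointer in O(1)."""
    nxt = node.next
    if nxt is not None and nxt.vback is not node:
        raise RuntimeError("pointer corruption detected")
    return nxt

a, b, c = Node(1), Node(2), Node(3)
link(a, b)
link(b, c)
assert move_forward(a) is b          # healthy traversal

a.next = c                           # simulate a corrupted forward pointer
try:
    move_forward(a)
except RuntimeError as e:
    print(e)                         # corruption caught locally
```

The paper's actual construction encodes the backpointer virtually (so it costs no extra field per direction) and extends the idea to B-trees; the invariant-check pattern during forward moves is the shared principle.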
Error-correction coding and decoding bounds, codes, decoders, analysis and applications
Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak
2017-01-01
This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
Ferrite core non-linearity in coils for magnetic neurostimulation.
RamRakhyani, Anil Kumar; Lazzi, Gianluca
2014-10-01
The need to correctly predict the voltage across terminals of mm-sized coils, with ferrite core, to be employed for magnetic stimulation of the peripheral neural system is the motivation for this work. In such applications, which rely on a capacitive discharge on the coil to realise a transient voltage curve of duration and strength suitable for neural stimulation, the correct modelling of the non-linearity of the ferrite core is critical. A demonstration of how a finite-difference model of the considered coils, which include a model of the current-controlled inductance in the coil, can be used to correctly predict the time-domain voltage waveforms across the terminals of a test coil is presented. Five coils of different dimensions, loaded with ferrite cores, have been fabricated and tested: the measured magnitude and width of the induced pulse are within 10% of simulated values.
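A finite-difference model with a current-controlled inductance, as the abstract describes, can be sketched as below. The component values and the saturation law are invented for illustration, not the paper's measured coils:

```python
import numpy as np

# Capacitive discharge into a coil whose ferrite core saturates:
# the inductance L depends on the instantaneous current.
C = 10e-6          # discharge capacitor [F]      (illustrative value)
R = 0.5            # series resistance [ohm]      (illustrative value)
L0 = 100e-6        # small-signal inductance [H]  (illustrative value)
I_sat = 2.0        # saturation current scale [A] (illustrative value)

def L_of_i(i):
    """Current-controlled inductance: L drops as the core saturates."""
    return L0 / (1.0 + (i / I_sat) ** 2)

dt = 1e-8
v, i = 100.0, 0.0                       # initial capacitor voltage, current
v_coil = []
for _ in range(20000):
    di = (v - R * i) / L_of_i(i) * dt   # L(i) di/dt = v_C - R i
    dv = -i / C * dt                    # C dv_C/dt = -i
    i, v = i + di, v + dv
    v_coil.append(v - R * i)            # voltage across the coil terminals

v_coil = np.array(v_coil)
print(v_coil[0])                        # start of the transient pulse
```

Because L(i) collapses at high current, the simulated pulse is shorter and sharper than the linear-core prediction, which is the non-linearity the authors needed to capture to match measurements within 10%.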
DEFF Research Database (Denmark)
Hu, Hao; Andersen, Jakob Dahl; Rasmussen, Anders
2013-01-01
We build a forward error correction (FEC) module and implement it in an optical signal processing experiment. The experiment consists of two cascaded nonlinear optical signal processes, 160 Gbit/s all optical wavelength conversion based on the cross phase modulation (XPM) in a silicon nanowire and subsequent 160 Gbit/s-to-10 Gbit/s demultiplexing in a highly nonlinear fiber (HNLF). The XPM based all optical wavelength conversion in silicon is achieved by off-center filtering the red shifted sideband on the CW probe. We thoroughly demonstrate and verify that the FEC code operates correctly after the optical signal processing, yielding truly error-free 150 Gbit/s (excl. overhead) optically signal processed data after the two cascaded nonlinear processes. © 2013 Optical Society of America.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
Directory of Open Access Journals (Sweden)
Marios H. Michael
2016-07-01
Full Text Available We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These “binomial quantum codes” are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to “cat codes” based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
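The code words sketched in the abstract can be written out explicitly. To the best of my recollection of the published construction (hedged accordingly), a binomial code protecting against up to L boson losses, G gains and D dephasing events, with S = L + G and N = max(L, G, 2D), has code words

```latex
\left| W_{\uparrow/\downarrow} \right\rangle
  \;=\; \frac{1}{\sqrt{2^{N}}}
  \sum_{\substack{p=0 \\ p\ \mathrm{even/odd}}}^{N+1}
  \sqrt{\binom{N+1}{p}} \;\bigl|\, p\,(S+1) \bigr\rangle .
```

The Fock-state spacing S + 1 is what allows loss and gain errors to be diagnosed by the generalized number parity (photon number modulo S + 1) mentioned in the abstract, while the binomial weights make the relevant moments of the two code words equal so that the errors are correctable.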
Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging.
Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu
2017-03-17
Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.
Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging
Directory of Open Access Journals (Sweden)
Bo Li
2017-03-01
Full Text Available Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.
GW self-screening error and its correction using a local density functional
Wetherell, J.; Hodgson, M. J. P.; Godby, R. W.
2018-03-01
The self-screening error in electronic structure theory is the part of the self-interaction error that would remain within the GW approximation if the exact dynamically screened Coulomb interaction W were used, causing each electron to artificially screen its own presence. This introduces error into the electron density and ionization potential. We propose a simple, computationally efficient correction to GW calculations in the form of a local density functional, obtained using a series of finite training systems; in tests, this eliminates the self-screening errors in the electron density and ionization potential.
Non-linearities in Holocene floodplain sediment storage
Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten
2013-04-01
that a strong multifractality is present in the scaling relationship between sediment storage and catchment area, depending on geomorphic landscape properties. Extrapolation of data from one spatial scale to another inevitably leads to large errors: when only the data of the upper floodplains are considered, a regression analysis results in an overestimation of total floodplain deposition for the entire catchment of circa 115%. This example demonstrates multifractality and related non-linearity in scaling relationships, which influences extrapolations beyond the initial range of measurements. These different examples indicate how traditional extrapolation techniques and assumptions in sediment budget studies can be challenged by field data, further complicating our understanding of these systems. Although simplifications are often necessary when working at large spatial scales, such non-linearities pose challenges to a better understanding of system behavior.
Non-linear control algorithms for an unmanned surface vehicle
Sharma, SK; Sutton, R; Motwani, A; Annamalai, A
2014-01-01
Although marine craft are known to exhibit intrinsically non-linear dynamic characteristics, modern marine autopilot system designs continue to be developed based on both linear and non-linear control approaches. This article evaluates two novel non-linear autopilot designs based on non-linear local control network and non-linear model predictive control approaches to establish their effectiveness in terms of control activity expenditure, power consumption and mission duration length under si...
Non-linear dynamics in Parkinsonism
Directory of Open Access Journals (Sweden)
Olivier eDarbin
2013-12-01
Full Text Available Over the last 30 years, the functions (and dysfunctions) of the sensory-motor circuitry have been mostly conceptualized using linear approaches, which have resulted in two main models: the "rate hypothesis" and the "oscillatory hypothesis". In these two models, the basal ganglia data stream is envisaged as a random temporal combination of independent simple patterns issued from its probability distribution of inter-spike intervals or its spectrum of frequencies, respectively. More recently, non-linear analyses have been introduced in the modeling of motor circuitry activities, and they have provided evidence that complex temporal organizations exist in basal ganglia neuronal activities. Regarding movement disorders, these complex temporal organizations in the basal ganglia data stream differ between conditions (i.e., parkinsonism, dyskinesia, healthy controls) and are responsive to treatments (i.e., L-DOPA, DBS). A body of evidence has reported that basal ganglia neuronal entropy (a marker for complexity/irregularity in time series) is higher in the hypokinetic state. In line with these findings, an entropy-based model has recently been formulated to introduce basal ganglia entropy as a marker for the alteration of motor processing and a factor of motor inhibition. Importantly, non-linear features have also been identified as a marker of condition and/or treatment effects in global brain signals (EEG), muscular activities (EMG) or the kinetics of motor symptoms (tremor, gait) of patients with movement disorders. It is therefore warranted that the non-linear dynamics of motor circuitry will contribute to a better understanding of the neuronal dysfunctions underlying the spectrum of parkinsonian motor symptoms, including tremor, rigidity and hypokinesia.
Hasni, Nesrine; Ben Hamida, Emira; Ben Jeddou, Khouloud; Ben Hamida, Sarra; Ayadi, Imene; Ouahchi, Zeineb; Marrakchi, Zahra
2016-12-01
Medication-related iatrogenic risk is largely unevaluated in neonatology. Objective: to assess the errors that occurred during the preparation and administration of injectable medicines in a neonatal unit in order to implement corrective actions to reduce the occurrence of these errors. A prospective, observational study was performed in a neonatal unit over a period of one month. The practices of preparing and administering injectable medications were recorded through a standardized data collection form. These practices were compared with the summaries of product characteristics (RCP) of each drug and with the bibliography. One hundred preparations of 13 different drugs were observed. In total, 85 errors were detected during the preparation and administration steps. These errors were divided into preparation errors in 59% of cases, such as changing the dilution protocol (32%) or the use of the wrong solvent (11%), and administration errors in 41% of cases, such as wrong timing of administration (18%) or omission of administration (9%). This study showed a high rate of errors during the stages of preparation and administration of injectable drugs. In order to optimize the care of newborns and reduce the risk of medication errors, corrective actions have been implemented through the establishment of a quality assurance system consisting of the development of injectable drug preparation procedures, the introduction of a labeling system and staff training.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors
Directory of Open Access Journals (Sweden)
Pham Thuy Dung
2016-12-01
Full Text Available The recent yet powerful emergence of e-learning and the use of online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including the self-correction of mistakes. This pilot study, despite being conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.
NON-LINEAR MODEL PREDICTIVE CONTROL STRATEGIES FOR PROCESS PLANTS USING SOFT COMPUTING APPROACHES
Owa, Kayode Olayemi
2014-01-01
The development of advanced non-linear control strategies has attracted considerable research interest over the past decades, especially in process control. Rather than relying absolutely on mathematical models of process plants, which often introduce discrepancies owing to design errors and equipment degradation, non-linear models are required because they provide improved prediction capabilities, but they are very difficult to derive. In addition, the derivation of the g...
SimCommSys: taking the errors out of error-correcting code simulations
Directory of Open Access Journals (Sweden)
Johann A. Briffa
2014-06-01
Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
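SimCommSys itself is a C++ framework, but the core loop of such a simulator, draw random data, encode, pass through a channel model, decode, and count errors, can be sketched in a few lines. Below is a hypothetical Python analogue for a repetition code over a binary symmetric channel (the function name and parameters are illustrative, not part of SimCommSys):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_repetition(p, n_rep=3, n_bits=100_000):
    """Monte Carlo bit-error rate of an n_rep-repetition code over a BSC(p)."""
    bits = rng.integers(0, 2, n_bits)
    coded = np.repeat(bits, n_rep)                  # encoder
    flips = (rng.random(coded.size) < p).astype(int)
    received = coded ^ flips                        # binary symmetric channel
    # majority-vote decoder
    decoded = received.reshape(-1, n_rep).sum(axis=1) > n_rep // 2
    return np.mean(decoded != bits)

ber = simulate_repetition(0.1)
# Theory for p = 0.1: 3 p^2 (1 - p) + p^3 = 0.028
print(ber)
```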
Asassfeh, Sahail M.
2013-01-01
Corrective feedback (CF), the implicit or explicit information learners receive indicating a gap between their current, compared to the desired, performance, has been an area of interest for EFL researchers during the last few decades. This study, conducted on 139 English-major prospective EFL teachers, assessed the impact of two CF types…
[Influence of measurement errors of radiation in NIR bands on water atmospheric correction].
Xu, Hua; Li, Zheng-Qiang; Yin, Qiu; Gu, Xing-Fa
2013-07-01
In the standard algorithm for atmospheric correction over water, the ratio of two near-infrared (NIR) channels is used to select an aerosol model, and the aerosol radiation at every wavelength is then estimated by extrapolation. The uncertainty of the radiation measurement in the NIR bands therefore plays an important part in the accuracy of the water-leaving reflectance. In the present research, error-propagation expressions were derived mathematically to trace how errors propagate from the NIR bands, and the error distribution of the water-leaving reflectance was thoroughly studied. The results show that larger measurement errors produce larger errors in the retrieved water-leaving reflectance, although the NIR band errors sometimes cancel out. Moreover, the higher the aerosol optical depth or the larger the small-particle component of the aerosol, the bigger the retrieval errors.
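The cancellation effect mentioned in this abstract follows from the band ratio itself: if both NIR reflectances are biased by the same relative amount, the ratio used for aerosol-model selection is unchanged. A small numerical sketch (the reflectance values are invented for illustration):

```python
import numpy as np

rho1, rho2 = 0.010, 0.008    # assumed NIR aerosol reflectances in the two bands
eps = rho1 / rho2            # band ratio used to select the aerosol model

def ratio_error(d1, d2):
    """Relative error of the band ratio for absolute measurement errors d1, d2."""
    return ((rho1 + d1) / (rho2 + d2)) / eps - 1.0

# Equal *relative* errors in both bands (+10% each) cancel exactly in the
# ratio; errors of opposite sign compound instead.
print(ratio_error(+0.001, +0.0008))   # ~0: exact cancellation up to rounding
print(ratio_error(+0.001, -0.0008))   # ~0.22: a 22% error in the ratio
```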
Structured methods for identifying and correcting potential human errors in aviation operations
Energy Technology Data Exchange (ETDEWEB)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
The linear-non-linear frontier for the Goldstone Higgs
International Nuclear Information System (INIS)
Gavela, M.B.; Saa, S.; Kanshin, K.; Machado, P.A.N.
2016-01-01
The minimal SO(5)/SO(4) σ-model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone-boson ancestry. Varying the σ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry-breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy-fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators. (orig.)
The linear-non-linear frontier for the Goldstone Higgs
Energy Technology Data Exchange (ETDEWEB)
Gavela, M.B.; Saa, S. [IFT-UAM/CSIC, Universidad Autonoma de Madrid, Departamento de Fisica Teorica y Instituto de Fisica Teorica, Madrid (Spain); Kanshin, K. [Universita di Padova, Dipartimento di Fisica e Astronomia ' G. Galilei' , Padua (Italy); INFN, Padova (Italy); Machado, P.A.N. [IFT-UAM/CSIC, Universidad Autonoma de Madrid, Departamento de Fisica Teorica y Instituto de Fisica Teorica, Madrid (Spain); Fermi National Accelerator Laboratory, Theoretical Physics Department, Batavia, IL (United States)
2016-12-15
The minimal SO(5)/SO(4) σ-model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone-boson ancestry. Varying the σ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry-breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy-fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators. (orig.)
Sandwich corrected standard errors in family-based genome-wide association studies.
Minică, Camelia C; Dolan, Conor V; Kampert, Maarten M D; Boomsma, Dorret I; Vink, Jacqueline M
2015-03-01
Given the availability of genotype and phenotype data collected in family members, the question arises which estimator makes optimal use of such data in genome-wide scans. Using simulations, we compared the Unweighted Least Squares (ULS) and Maximum Likelihood (ML) procedures. The former is implemented in Plink and uses a sandwich correction to correct the standard errors for the model misspecification of ignoring the clustering. The latter is implemented by fast linear mixed procedures and models the familial resemblance explicitly. However, as it commits to a background model limited to additive genetic and unshared environmental effects, it employs a misspecified model for traits with a shared environmental component. We considered the performance of the two procedures in terms of type I and type II error rates, with correct and incorrect model specification in ML. For traits characterized by moderate to large familial resemblance, using an ML procedure with a correctly specified model for the conditional familial covariance matrix should be the strategy of choice. The potential loss in power incurred by the sandwich-corrected ULS procedure does not outweigh its computational convenience. Furthermore, the ML procedure was quite robust under model misspecification in the simulated settings and appreciably more powerful than the sandwich-corrected ULS procedure. However, to correct for the effects of model misspecification in ML in circumstances other than those considered here, we propose to use a sandwich correction. We show that the sandwich correction can be formulated in terms of the fast ML method.
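The sandwich correction discussed above can be sketched directly: fit by ordinary (unweighted) least squares, then replace the naive covariance with bread-meat-bread, where the "meat" sums per-family score outer products. A minimal NumPy illustration with a simulated shared family effect (all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam, fam_size = 200, 4
n = n_fam * fam_size
cluster = np.repeat(np.arange(n_fam), fam_size)

# A family-level predictor plus a shared family effect induce within-family
# correlation that the naive OLS covariance ignores.
x = np.repeat(rng.normal(size=n_fam), fam_size)
u = np.repeat(rng.normal(size=n_fam), fam_size)
y = 0.5 * x + u + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)        # ULS/OLS point estimate
resid = y - X @ beta

bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for g in range(n_fam):                          # per-family score outer products
    idx = cluster == g
    s = X[idx].T @ resid[idx]
    meat += np.outer(s, s)

se_sandwich = np.sqrt(np.diag(bread @ meat @ bread))
se_naive = np.sqrt(np.diag(bread) * resid.var())

print(se_naive[1], se_sandwich[1])   # clustering inflates the slope's SE
```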
ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES
Directory of Open Access Journals (Sweden)
Maria Corazon Saturnina A Castro
2017-10-01
Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications for classroom practices. This paper poses the major problem: How do teachers' perspectives on corrective feedback match the students' views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed. Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These oppositions must be addressed as they have implications for current pedagogical practices, which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even at the tertiary level, and further understanding the learning process.
Biometrics encryption combining palmprint with two-layer error correction codes
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometric encryption method is proposed based on combining palmprints with two-layer error correction codes. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors; the second layer uses a cyclic code to correct random errors. Then, the palmprint features are extracted from the palmprint images. Next, the two are fused together by an XOR operation, and the result is stored in a smart card. Finally, the original keys are recovered by XORing the information in the smart card with the user's palmprint features and then decoding with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
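The XOR binding step above is essentially a fuzzy-commitment construction. The sketch below substitutes a single repetition-code layer for the paper's convolutional-plus-cyclic two-layer scheme, and random bit strings for real palmprint features, purely to show how the key survives a few feature-bit discrepancies between enrollment and verification:

```python
import numpy as np

rng = np.random.default_rng(7)
n_rep = 5                                     # repetition factor (illustrative)

key = rng.integers(0, 2, 16)                  # randomly generated original key
codeword = np.repeat(key, n_rep)              # error-correction encoding
template = rng.integers(0, 2, codeword.size)  # stand-in for palmprint features
locked = codeword ^ template                  # fused by XOR; stored on the card

# At verification a slightly different reading of the same palmprint appears:
noise = np.zeros(codeword.size, dtype=int)
noise[[3, 40]] = 1                            # two feature bits differ
noisy_template = template ^ noise

# Key extraction: XOR the card contents with the presented features, decode.
recovered = (locked ^ noisy_template).reshape(-1, n_rep).sum(axis=1) > n_rep // 2
print(np.array_equal(recovered, key.astype(bool)))  # True: key fully recovered
```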
Is a Genome a Codeword of an Error-Correcting Code?
Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo
2012-01-01
Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
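The codeword test used in such studies reduces to a syndrome computation: a word belongs to the [7,4] Hamming code exactly when its syndrome under the parity-check matrix is zero, and a nonzero syndrome locates a single-bit error. A minimal sketch:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j is the binary
# representation of position j+1, so the syndrome reads out the error location.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    return H @ word % 2

word = np.ones(7, dtype=int)       # the all-ones word is a valid codeword
print(syndrome(word))              # [0 0 0] -> codeword

word[2] ^= 1                       # flip bit 3 (1-indexed)
s = syndrome(word)
pos = s[0] + 2 * s[1] + 4 * s[2]   # read the syndrome as a binary number
print(s, pos)                      # nonzero syndrome; pos == 3 locates the flip
```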
Full-Diversity Space-Time Error Correcting Codes with Low-Complexity Receivers
Directory of Open Access Journals (Sweden)
Hassan MohamadSayed
2011-01-01
Full Text Available We propose an explicit construction of full-diversity space-time block codes, under the constraint of an error correction capability. Furthermore, these codes are constructed in order to be suitable for a serial concatenation with an outer linear forward error correcting (FEC) code. We apply the binary rank criterion, and we use the threaded layering technique and an inner linear FEC code to define a space-time error-correcting code. When serially concatenated with an outer linear FEC code, a product code can be built at the receiver, and adapted iterative receiver structures can be applied. An optimized hybrid structure mixing MMSE turbo equalization and turbo product code decoding is proposed. It yields reduced complexity and enhanced performance compared to previously existing structures.
Is a genome a codeword of an error-correcting code?
Directory of Open Access Journals (Sweden)
Luzinete C B Faria
Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
Directory of Open Access Journals (Sweden)
Hossein Nassaji
2011-10-01
Full Text Available A substantial number of studies have examined the effects of grammar correction on second language (L2) written errors. However, most of the existing research has involved unidirectional written feedback. This classroom-based study examined the effects of oral negotiation in addressing L2 written errors. Data were collected in two intermediate adult English as a second language classes. Three types of feedback were compared: non-negotiated direct reformulation, feedback with limited negotiation (i.e., prompt + reformulation) and feedback with negotiation. The linguistic targets chosen were the two most common grammatical errors in English: articles and prepositions. The effects of feedback were measured by means of learner-specific error identification/correction tasks administered three days, and again ten days, after the treatment. The results showed an overall advantage for feedback that involved negotiation. However, a comparison of data per error type showed that the differential effects of feedback types were mainly apparent for article errors rather than preposition errors. These results suggest that while negotiated feedback may play an important role in addressing L2 written errors, the degree of its effects may differ for different linguistic targets.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them means that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
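The observation that iterative methods damp small soft errors but not large ones can be reproduced with a toy iterative solver. The sketch below (not the authors' Hartree-Fock code) injects a perturbation mid-run into a Jacobi iteration and adds a cheap residual-norm detector with a restart as the correction mechanism:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = 4.0 * np.eye(n) + rng.normal(scale=0.05, size=(n, n))  # diagonally dominant
b = rng.normal(size=n)
x_true = np.linalg.solve(A, b)

def jacobi(inject=0.0, iters=200):
    """Jacobi iteration with a simulated soft error injected at iteration 50."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros(n)
    for k in range(iters):
        x = (b - R @ x) / D
        if k == 50:
            x[0] += inject                     # silent data corruption
        # Cheap detector: an implausibly large residual triggers a restart.
        if np.linalg.norm(A @ x - b) > 10 * np.linalg.norm(b):
            x = np.zeros(n)                    # fall back to a known-good state
    return np.linalg.norm(x - x_true)

print(jacobi(0.0))     # clean run converges
print(jacobi(0.1))     # small soft error is damped by the iteration itself
print(jacobi(1e6))     # large error is caught by the check; run still converges
```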
Extending the lifetime of a quantum bit with error correction in superconducting circuits
Ofek, Nissim; Petrenko, Andrei; Heeres, Reinier; Reinhold, Philip; Leghtas, Zaki; Vlastakis, Brian; Liu, Yehan; Frunzio, Luigi; Girvin, S. M.; Jiang, L.; Mirrahimi, Mazyar; Devoret, M. H.; Schoelkopf, R. J.
2016-08-01
Quantum error correction (QEC) can overcome the errors experienced by qubits and is therefore an essential component of a future quantum computer. To implement QEC, a qubit is redundantly encoded in a higher-dimensional space using quantum states with carefully tailored symmetry properties. Projective measurements of these parity-type observables provide error syndrome information, with which errors can be corrected via simple operations. The ‘break-even’ point of QEC—at which the lifetime of a qubit exceeds the lifetime of the constituents of the system—has so far remained out of reach. Although previous works have demonstrated elements of QEC, they primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to preserve a qubit over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of Schrödinger-cat states of a superconducting resonator. We implement a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode and correct. As measured by full process tomography, without any post-selection, the corrected qubit lifetime is 320 microseconds, which is longer than the lifetime of any of the parts of the system: 20 times longer than the lifetime of the transmon, about 2.2 times longer than the lifetime of an uncorrected logical encoding and about 1.1 times longer than the lifetime of the best physical qubit (the |0⟩_f and |1⟩_f Fock states of the resonator). Our results illustrate the benefit of using hardware-efficient qubit encodings rather than traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming basic concepts to exploring the metrics that drive system performance and the challenges in realizing a fault-tolerant system.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies.
Clark, Kevin B
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social
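The three-bit repetition code with majority-vote decoding referenced above corrects every single bit-flip by construction, which can be verified exhaustively:

```python
def encode(bit):
    return (bit,) * 3                     # three-bit repetition code

def decode(word):
    return int(sum(word) >= 2)            # majority vote

# Exhaustive check: every single bit-flip in either codeword is corrected.
ok = all(
    decode(tuple(b ^ (i == j) for j, b in enumerate(encode(bit)))) == bit
    for bit in (0, 1)
    for i in range(3)
)
print(ok)  # True
```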
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Directory of Open Access Journals (Sweden)
Kevin Bradley Clark
2013-08-01
Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in
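The three-bit repetition code that both abstracts above describe is the simplest classical error-correction scheme. A minimal generic sketch (not modeling the ciliate experiments themselves) showing that majority-vote decoding corrects any single bit flip:

```python
def encode(bit):
    """Encode one bit as three identical copies (3-bit repetition code)."""
    return [bit] * 3

def decode(codeword):
    """Majority vote: tolerates any single bit flip in the codeword."""
    return 1 if sum(codeword) >= 2 else 0

# Every single bit-flip error on every codeword is corrected.
for bit in (0, 1):
    for pos in range(3):
        corrupted = encode(bit)
        corrupted[pos] ^= 1          # single bit-flip error
        assert decode(corrupted) == bit
```

Two simultaneous flips, by contrast, defeat the majority vote, which is why the abstracts speak of protection against *single* bit-flip errors in three-bit replies.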
Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads
Song, Li; Florea, Liliana
2015-01-01
Background: Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. Findings: We developed a k-m...
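The k-mer spectrum idea behind such correctors can be illustrated with a toy sketch. This is not Rcorrector's actual algorithm, just the general principle: k-mers introduced by sequencing errors are rare, so a base is rewritten when a substitution makes every k-mer covering it "trusted" (frequent).

```python
from collections import Counter

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def covering_kmers(seq, i, k):
    """All k-mers of seq that cover position i."""
    lo, hi = max(0, i - k + 1), min(i + 1, len(seq) - k + 1)
    return [seq[j:j + k] for j in range(lo, hi)]

def correct_read(read, counts, k, min_count=2):
    """Toy k-mer-spectrum correction: rewrite a base if some substitution
    makes every covering k-mer trusted (seen at least min_count times)."""
    read = list(read)
    for i in range(len(read)):
        seq = "".join(read)
        if all(counts[m] >= min_count for m in covering_kmers(seq, i, k)):
            continue                      # position already trusted
        for base in "ACGT":
            trial = seq[:i] + base + seq[i + 1:]
            if all(counts[m] >= min_count
                   for m in covering_kmers(trial, i, k)):
                read[i] = base
                break
    return "".join(read)

# k-mer spectrum from several accurate reads plus one read with an error.
k = 3
reads = ["ACGTACGTAC"] * 5 + ["ACGTTCGTAC"]   # last read: A->T at index 4
counts = Counter(m for r in reads for m in kmers(r, k))
fixed = correct_read("ACGTTCGTAC", counts, k)  # recovers "ACGTACGTAC"
```

Real tools must also handle expression-level variation and splicing, which is exactly why the abstract notes that WGS-style correctors are unsuitable for RNA-seq.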
Short-term wind power combined forecasting based on error forecast correction
International Nuclear Information System (INIS)
Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng
2016-01-01
Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
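The error-forecast-correction idea can be sketched with plain least squares standing in for the paper's SVM/ELM learners (synthetic data, hypothetical variable names): make a base forecast, fit a regression on its historical errors, and add the predicted error back.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "wind power" series whose persistence forecast has a
# systematic, predictable error component.
t = np.arange(300, dtype=float)
power = np.sin(0.2 * t) + 0.05 * rng.standard_normal(300)

persistence = power[:-1]          # one-step forecast: y_hat[t+1] = y[t]
actual = power[1:]
error = actual - persistence      # historical forecast errors

# Error forecast model: predict the next error from the previous error
# (ordinary least squares in place of the paper's SVM/ELM learners).
X = np.column_stack([np.ones(len(error) - 1), error[:-1]])
coef, *_ = np.linalg.lstsq(X, error[1:], rcond=None)

predicted_error = X @ coef
corrected = persistence[1:] + predicted_error

rmse_raw = np.sqrt(np.mean((actual[1:] - persistence[1:]) ** 2))
rmse_corr = np.sqrt(np.mean((actual[1:] - corrected) ** 2))
assert rmse_corr < rmse_raw       # error forecasting improves accuracy
```

The correction helps exactly to the extent that the forecast errors are autocorrelated, which is why the paper begins with a correlation analysis of the error series.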
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...
A Phillips curve interpretation of error-correction models of the wage and price dynamics
DEFF Research Database (Denmark)
Harck, Søren H.
2009-01-01
This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably...
Douanla Tayo, Lionel; Abomo Fouda, Marcel Olivier
2015-01-01
This study aims at assessing the effect of government spending on education on economic growth in Cameroon over the period 1980-2012 using a vector error correction model. The estimated results show that these expenditures had a significant and positive impact on economic growth in both the short and the long run. The estimated error correction model shows that an increase of 1% of the growth rate of private gross fixed capital formation and government education spending led to increases of 5.03% a...
Environment-assisted error correction of single-qubit phase damping
International Nuclear Information System (INIS)
Trendelkamp-Schroer, Benjamin; Helm, Julius; Strunz, Walter T.
2011-01-01
Open quantum system dynamics of random unitary type may in principle be fully undone. Closely following the scheme of environment-assisted error correction proposed by Gregoratti and Werner [J. Mod. Opt. 50, 915 (2003)], we explicitly carry out all steps needed to invert a phase-damping error on a single qubit. Furthermore, we extend the scheme to a mixed-state environment. Surprisingly, we find cases for which the uncorrected state is closer to the desired state than any of the corrected ones.
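The random-unitary inversion at the heart of such schemes can be illustrated with a toy numpy sketch (not the Gregoratti-Werner protocol itself): a phase-damping-like channel applies Z or the identity at random, and because each realization is unitary, knowing which unitary acted lets us undo the error exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit state |+> as a density matrix.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)

# Random-unitary phase-damping channel: apply Z with probability p.
# Averaged over realizations the coherence decays, but each single
# realization is unitary and hence invertible.
p = 0.3
recovered = []
for _ in range(200):
    U = Z if rng.random() < p else I2        # channel's random choice
    damaged = U @ rho @ U.conj().T
    # "Environment-assisted" step: a measurement on the environment
    # reveals which unitary occurred, so we apply its inverse.
    fixed = U.conj().T @ damaged @ U
    recovered.append(fixed)

avg = np.mean(recovered, axis=0)
assert np.allclose(avg, rho)                 # error fully undone
```

Without the environment information, averaging over the two unitaries would shrink the off-diagonal coherence by a factor (1 - 2p), which is the phase damping the abstract describes.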
Batra, Rishi; Wolbach-Lowes, Jane; Swindells, Susan; Scarsi, Kimberly K; Podany, Anthony T; Sayles, Harlan; Sandkovsky, Uriel
2015-01-01
Previous review of admissions from 2009-2011 in our institution found a 35.1% error rate in antiretroviral (ART) prescribing, with 55% of errors never corrected. Subsequently, our institution implemented a unified electronic medical record (EMR) and we developed a medication reconciliation process with an HIV pharmacist. We report the impact of the EMR on the incidence of errors and of the pharmacist intervention on time to error correction. Prospective medical record review of HIV-infected patients hospitalized for >24 h between 9 March 2013 and 10 March 2014. An HIV pharmacist reconciled outpatient ART prescriptions with inpatient orders within 24 h of admission. Prescribing errors were classified and time to error correction recorded. Error rates and time to correction were compared to historical data using relative risks (RR) and logistic regression models. 43 medication errors were identified in 31/186 admissions (16.7%). The incidence of errors decreased significantly after EMR introduction (RR 0.47, 95% CI 0.34, 0.67). Logistic regression adjusting for gender and race/ethnicity found that errors were 61% less likely to occur using the EMR (95% CI 40%, 75%). Errors were corrected, 65% within 24 h and 81.4% within 48 h. Compared to historical data, where only 31% of errors were corrected, errors were 9.4× more likely to be corrected within 24 h with HIV pharmacist intervention. The EMR reduced the error rate by more than 50% but, despite this, ART errors remained common. HIV pharmacist intervention was key to timely error correction.
A power supply error correction method for single-ended digital audio class D amplifiers
Yu, Zeqi; Wang, Fengqin; Fan, Yangyu
2016-12-01
In single-ended digital audio class D amplifiers (CDAs), the errors caused by power supply noise in the power stages degrade the output performance seriously. In this article, a novel power supply error correction method is proposed. This method introduces the power supply noise of the power stage into the digital signal processing block and builds a power supply error corrector between the interpolation filter and the uniform-sampling pulse width modulation (UPWM) lineariser to pre-correct the power supply error in the single-ended digital audio CDA. The theoretical analysis and implementation of the method are also presented. To verify the effectiveness of the method, a two-channel single-ended digital audio CDA with different power supply error correction methods is designed, simulated, implemented and tested. The simulation and test results obtained show that the method can greatly reduce the error caused by the power supply noise with low hardware cost, and that the CDA with the proposed method can achieve a total harmonic distortion + noise (THD + N) of 0.058% for a -3 dBFS, 1 kHz input when a 55 V linear unregulated direct current (DC) power supply (with the -51 dBFS, 100 Hz power supply noise) is used in the power stages.
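The core of such pre-correction can be sketched in simplified form (ignoring the UPWM lineariser, hypothetical variable names): since the output stage effectively multiplies the modulated signal by the instantaneous supply voltage, dividing the digital signal by the measured supply beforehand cancels the ripple.

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 1_000 * t)           # 1 kHz audio

# 55 V supply with 100 Hz ripple; the power stage scales by V/Vnom.
v_nom = 55.0
v_supply = v_nom * (1 + 0.02 * np.sin(2 * np.pi * 100 * t))

uncorrected_out = signal * (v_supply / v_nom)          # ripple intermodulates
precorrected = signal * (v_nom / v_supply)             # pre-divide by supply
corrected_out = precorrected * (v_supply / v_nom)      # stage re-multiplies

err_raw = np.max(np.abs(uncorrected_out - signal))
err_corr = np.max(np.abs(corrected_out - signal))
assert err_corr < 1e-12 < err_raw
```

In a real CDA the correction must be applied in the digital domain before pulse-width modulation, which is why the paper places the corrector between the interpolation filter and the UPWM lineariser.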
Non-linear neutron star oscillations viewed as deviations from an equilibrium state
International Nuclear Information System (INIS)
Sperhake, U
2002-01-01
A numerical technique is presented which facilitates the evolution of non-linear neutron star oscillations with a high accuracy essentially independent of the oscillation amplitude. We apply this technique to radial neutron star oscillations in a Lagrangian formulation and demonstrate the superior performance of the new scheme compared with 'conventional' techniques. The key feature of our approach is to describe the evolution in terms of deviations from an equilibrium configuration. In contrast to standard perturbation analysis we keep all higher order terms in the evolution equations and thus obtain a fully non-linear description. The advantage of our scheme lies in the elimination of background terms from the equations and the associated numerical errors. The improvements thus achieved will be particularly significant in the study of mildly non-linear effects where the amplitude of the dynamic signal is small compared with the equilibrium values but large enough to warrant non-linear effects. We apply the new technique to the study of non-linear coupling of eigenmodes and non-linear effects in the oscillations of marginally stable neutron stars. We find non-linear effects in low amplitude oscillations to be particularly pronounced in the range of modes with vanishing frequency which typically mark the onset of instability. (author)
Directory of Open Access Journals (Sweden)
Leila Hajian
2014-09-01
Full Text Available Written error correction may be the most widely used method for responding to student writing. Although there are various studies investigating error correction, there is little research considering teachers’ and students’ preferences towards written error correction. The present study investigates students’ and teachers’ preferences and attitudes towards correction of classroom written errors in an Iranian EFL context by using a questionnaire. In this study, 80 students and 12 teachers were asked to answer the questionnaire. The data were then collected and analyzed by a descriptive method. The findings from teachers and students show positive attitudes towards written error correction. Although the results of this study demonstrate that teachers and students have some common preferences related to written error correction, there are some important discrepancies. For example, students prefer all errors to be corrected, but teachers prefer selecting some. Students also prefer teachers’ correction rather than peer or self-correction. This study considers a number of difficulties regarding students and teachers in written error correction processes and offers some suggestions. It shows that many teachers believe written error correction takes a lot of time and effort to give comments, while many students do not have any problems rewriting their paper after getting feedback. This might be one main positive point to improve their writing and give them self-confidence.
Non linear self consistency of microtearing modes
International Nuclear Information System (INIS)
Garbet, X.; Mourgues, F.; Samain, A.
1987-01-01
The self-consistency of a microtearing turbulence is studied in non-linear regimes where the ergodicity of the flux lines determines the electron response. The current which sustains the magnetic perturbation via the Ampere law results from the combined action of the radial electric field in the frame where the island chains are static and of the thermal electron diamagnetism. Numerical calculations show that at usual values of β_pol in Tokamaks the turbulence can create a diffusion coefficient of order ν_th p_i², where p_i is the ion Larmor radius and ν_th the electron-ion collision frequency. On the other hand, collisionless regimes involving special profiles of each mode near the resonant surface seem possible
Image denoising using non linear diffusion tensors
International Nuclear Information System (INIS)
Benzarti, F.; Amiri, H.
2011-01-01
Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering or destroying useful structure in the image, such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two non linear diffusion tensors. One allows diffusion along the orientation of greatest coherence, while the other allows diffusion along orthogonal directions. The idea is to track perfectly the local geometry of the degraded image and apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effective performance of our model, we present some experimental results on test and real photographic color images.
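A scalar relative of such schemes, Perona-Malik-type nonlinear diffusion, on which tensor-based methods build, can be sketched as follows; the edge-stopping diffusivity suppresses smoothing across strong gradients (this is not the paper's two-tensor model):

```python
import numpy as np

def nonlinear_diffusion(img, steps=20, dt=0.15, kappa=0.3):
    """Perona-Malik-type diffusion: smooth flat regions, preserve edges."""
    u = img.astype(float).copy()
    for _ in range(steps):
        # Finite differences toward the four neighbours (periodic edges).
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Edge-stopping diffusivity g = exp(-(|grad|/kappa)^2): near 1 in
        # flat regions, near 0 across a strong edge.
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d
                      for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                       # a single vertical edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

den = nonlinear_diffusion(noisy)
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The tensor generalization of the paper replaces the scalar g with a matrix whose eigenvectors follow the local structure orientation, so diffusion strength can differ along and across image features.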
Metrological Array of Cyber-Physical Systems. Part 11. Remote Error Correction of Measuring Channel
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-09-01
Full Text Available For multi-channel measuring instruments with both the classical structure and the isolated one, the major factors of their errors are identified based on an analysis of their general metrological properties. The limiting possibilities of the remote automatic method for correcting additive and multiplicative errors of measuring instruments with the help of code-controlled measures are studied. For on-site calibration of multi-channel measuring instruments, portable voltage calibrator structures are suggested and their metrological properties during automatic error adjustment are analysed. It was experimentally found that the unadjusted error value does not exceed ±1 mV, which satisfies most industrial applications. This confirms the main claim concerning the possibility of remote error self-adjustment of multi-channel measuring instruments, as well as their use as calibration tools for proper verification.
Fringe order error in multifrequency fringe projection phase unwrapping: reason and correction.
Zhang, Chunwei; Zhao, Hong; Zhang, Lu
2015-11-10
A multifrequency fringe projection phase unwrapping algorithm (MFPPUA) is important to fringe projection profilometry, especially when a discontinuous object is measured. However, a fringe order error (FOE) may occur when MFPPUA is adopted. An FOE will result in error to the unwrapped phase. Although this kind of phase error does not spread, it brings error to the eventual 3D measurement results. Therefore, an FOE or its adverse influence should be obviated. In this paper, reasons for the occurrence of an FOE are theoretically analyzed and experimentally explored. Methods to correct the phase error caused by an FOE are proposed. Experimental results demonstrate that the proposed methods are valid in eliminating the adverse influence of an FOE.
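The temporal two-frequency unwrapping step that an FOE corrupts can be sketched as follows (a generic formulation, not the paper's exact notation): the fringe order is the rounded difference between the scaled low-frequency phase and the wrapped high-frequency phase, and noise near the rounding boundary is what flips it.

```python
import numpy as np

def unwrap_two_freq(phi_high_wrapped, phi_unit, f):
    """Temporal phase unwrapping: recover the absolute high-frequency
    phase from its wrapped value and an unwrapped unit-frequency phase."""
    fringe_order = np.round((f * phi_unit - phi_high_wrapped)
                            / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * fringe_order

f = 16                                        # frequency ratio
x = np.linspace(0, 1, 500)
phi_unit = 2 * np.pi * x                      # unit-frequency phase
phi_high = f * phi_unit                       # true absolute phase
wrapped = np.angle(np.exp(1j * phi_high))     # measured wrapped phase

recovered = unwrap_two_freq(wrapped, phi_unit, f)
assert np.allclose(recovered, phi_high)
```

If phase noise pushes the quantity inside `np.round` past a half-integer boundary, the computed fringe order jumps by ±1, producing exactly the 2π fringe order error the paper analyzes and corrects.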
How EFL students can use Google to correct their “untreatable” written errors
Directory of Open Access Journals (Sweden)
Luc Geiller
2014-09-01
Full Text Available This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several “untreatable” written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it as useless and others arguing that error feedback leads to more grammatical accuracy. In her response to Truscott (1996), Ferris (1999) explains that it would be unreasonable to abolish correction given the present state of knowledge, and that further research needed to focus on which types of errors were more amenable to which types of error correction. In her attempt to respond more effectively to her students’ errors, she made the distinction between “treatable” and “untreatable” ones: the former occur in “a patterned, rule-governed way” and include problems with verb tense or form, subject-verb agreement, run-ons, noun endings, articles and pronouns, while the latter include a variety of lexical errors and problems with word order and sentence structure, including missing and unnecessary words. Substantial research on the use of search engines as a tool for L2 learners has been carried out, suggesting that the web plays an important role in fostering language awareness and learner autonomy (e.g. Shei 2008a, 2008b; Conroy 2010). According to Bathia and Richie (2009: 547), “the application of Google for language learning has just begun to be tapped.” Within the framework of this study it was assumed that the students, conversant with digital technologies and using Google and the web on a regular basis, could use various search options and the search results to self-correct their errors instead of relying on their teacher to provide direct feedback. After receiving some in-class training on how to formulate Google queries, the students were asked to use a customized Google search engine limiting searches to 28 information websites to correct up to
International Nuclear Information System (INIS)
Kim, Y.P.
1982-01-01
The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, as well as NRC, have researched lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island Plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms - determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy
International Nuclear Information System (INIS)
Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock
2005-01-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle
Response Repetition as an Error-Correction Strategy for Teaching Subtraction Facts
Reynolds, Jennifer L.; Drevon, Daniel D.; Schafer, Bradley; Schwartz, Kaitlyn
2016-01-01
This study examined the impact of response repetition as an error-correction strategy in teaching subtraction facts to three students with learning difficulties. Written response repetition (WRR) and oral response repetition (ORR) were compared using an alternating treatments design nested in a multiple baseline design across participants.…
Error correction, co-integration and import demand function for Nigeria
African Journals Online (AJOL)
The objective of this study is to determine empirically the import demand equation for Nigeria using error correction and cointegration techniques. All the variables employed in this study were found to be stationary at first difference using Augmented Dickey-Fuller (ADF) and Phillip-Perron (PP) unit root tests. Empirical evidence from ...
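The Engle-Granger two-step logic behind such error-correction studies can be sketched on synthetic data (plain numpy least squares standing in for the full ADF/PP testing workflow): estimate the long-run relation, then regress differences on the lagged equilibrium error; a negative coefficient on that term is the error-correction effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Simulate two cointegrated series: x is a random walk,
# y tracks 2*x with a mean-reverting (stationary) deviation.
x = np.cumsum(rng.standard_normal(n))
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.standard_normal()
y = 2.0 * x + u

# Step 1: long-run (cointegrating) regression y = a + b*x.
A = np.column_stack([np.ones(n), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
ect = y - (a + b * x)                     # equilibrium error

# Step 2: ECM — regress dy on dx and the lagged equilibrium error.
dy, dx = np.diff(y), np.diff(x)
B = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
(c0, c1, gamma), *_ = np.linalg.lstsq(B, dy, rcond=None)

assert abs(b - 2.0) < 0.2   # long-run coefficient near the true value 2
assert gamma < 0            # negative error-correction (adjustment) term
```

The speed-of-adjustment coefficient gamma measures how quickly deviations from the long-run import demand relation die out, which is the quantity such studies interpret economically.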
Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates
Czech Academy of Sciences Publication Activity Database
Gál, A.; Hansen, A. K.; Koucký, Michal; Pudlák, Pavel; Viola, E.
2013-01-01
Roč. 59, č. 10 (2013), s. 6611-6627 ISSN 0018-9448 R&D Projects: GA AV ČR IAA100190902 Institutional support: RVO:67985840 Keywords : bounded-depth circuits * error-correcting codes * hashing Subject RIV: BA - General Mathematics Impact factor: 2.650, year: 2013 http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6578188
Performance calculation of the Compact Disc error correcting code on a memoryless channel
Driessen, L.H.M.E.; Vries, L.B.
2015-01-01
Recently N.V. PHILIPS of The Netherlands and SONY CORP. of Japan made a joint proposal for standardization of their COMPACT DISC Digital Audio system. This standard, as agreed upon, includes the choice of an error correcting code called CIRC (Cross Interleave Reed Solomon Code), according to which the
1972-01-01
The assembly drawings of the receiver unit are presented for the data compression/error correction digital test system. Equipment specifications are given for the various receiver parts, including the TV input buffer register, delta demodulator, TV sync generator, memory devices, and data storage devices.
Inserting Mastered Targets during Error Correction When Teaching Skills to Children with Autism
Plaisance, Lauren; Lerman, Dorothea C.; Laudont, Courtney; Wu, Wai-Ling
2016-01-01
Research has identified a variety of effective approaches for responding to errors during discrete-trial training. In one commonly used method, the therapist delivers a prompt contingent on the occurrence of an incorrect response and then re-presents the trial so that the learner has an opportunity to perform the correct response independently.…
A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes
D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)
2005-01-01
The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format, which allows one to disentangle the immediate
Grammar Instruction and Error Correction: A Matter of Iranian Students' Beliefs
Ganjabi, Mahyar
2011-01-01
Introduction: So far the role of grammar instruction and error correction has been mainly analyzed from the teachers' perspectives. However, learners' attitudes can also affect the effectiveness of any type of learning, especially language learning. Therefore, language learners' attitudes and beliefs should also be considered as a determining…
Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models
Hallin, M.; van den Akker, R.; Werker, B.J.M.
2012-01-01
Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the
On the Security of Digital Signature Schemes Based on Error-Correcting Codes
Xu, Sheng-bo; Doumen, J.M.; van Tilborg, Henk
We discuss the security of digital signature schemes based on error-correcting codes. Several attacks to the Xinmei scheme are surveyed, and some reasons given to explain why the Xinmei scheme failed, such as the linearity of the signature and the redundancy of public keys. Another weakness is found
The dynamics of entry, exit and profitability: an error correction approach for the retail industry
M.A. Carree (Martin); A.R. Thurik (Roy)
1994-01-01
We develop a two-equation error correction model to investigate determinants of and dynamic interaction between changes in profits and number of firms in retailing. An explicit distinction is made between the effects of actual competition among incumbents, new-firm competition and
Simple Reed-Solomon Forward Error Correction (FEC) Scheme for FECFRAME
Roca, Vincent; Cunche, Mathieu; Lacan, Jérôme; Bouabdallah, Amine; Matsuzono, Kazuhisa
2013-01-01
Internet Engineering Task Force (IETF) Request for Comments 6865; This document describes a fully-specified simple Forward Error Correction (FEC) scheme for Reed-Solomon codes over the finite field (also known as the Galois Field) GF(2^m), with 2
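Reed-Solomon schemes such as this one build on GF(2^m) arithmetic. A minimal sketch for the common m = 8 case, using the primitive polynomial 0x11D (one standard choice; the RFC itself parameterizes the field more generally):

```python
def gf256_mul(a, b, poly=0x11D):
    """Multiply in GF(2^8): carry-less multiplication reduced modulo a
    primitive polynomial (x^8 + x^4 + x^3 + x^2 + 1 here)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree reached 8: reduce
            a ^= poly
    return result

assert gf256_mul(2, 2) == 4          # x * x = x^2
assert gf256_mul(0x80, 2) == 0x1D    # x^7 * x wraps via the polynomial
assert all(gf256_mul(a, 1) == a for a in range(256))
```

Because addition in GF(2^m) is plain XOR and multiplication is closed in one byte, RS encoders and decoders over GF(2^8) map naturally onto byte-oriented packet FEC like the scheme this RFC specifies.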
Some errors in respirometry of aquatic breathers: How to avoid and correct for them
DEFF Research Database (Denmark)
STEFFENSEN, JF
1989-01-01
Respirometry in closed and flow-through systems is described with the objective of pointing out the problems and sources of error involved and how to correct for them. Both closed respirometry applied to resting and active animals and intermittent-flow respirometry are described. In addition, flow...
The Use of Corpus Concordancing for Second Language Learners' Self Error-Correction
Feng, Hui-Hsien
2014-01-01
Corpus concordancing has been utilized in second language (L2) writing classrooms for a few decades. Some studies have shown that this application is helpful, to a certain degree, to learners' writing process. However, how corpus concordancing is utilized for nonnative speakers' (NNSs) self error-correction in writing, especially the pattern of…
Retesting the Limits of Data-Driven Learning: Feedback and Error Correction
Crosthwaite, Peter
2017-01-01
An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…
Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping
Á. Piedrafita (Álvaro); J.M. Renes (Joseph)
2017-01-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve
Directory of Open Access Journals (Sweden)
Xiaofang Kong
2018-01-01
Full Text Available Inclinometer assembly error is one of the key factors affecting the measurement accuracy of photoelectric measurement systems. In order to solve the problem of the lack of complete attitude information in the measurement system, this paper proposes a new inclinometer assembly error calibration and horizontal image correction method utilizing plumb lines in the scenario. Based on the principle that a plumb line in the scenario should be a vertical line on the image plane when the camera is placed horizontally in the photoelectric system, the direction cosine matrix between the geodetic coordinate system and the inclinometer coordinate system is first calculated by three-dimensional coordinate transformation. Then, the homography matrix required for horizontal image correction is obtained, along with the constraint equation satisfying the inclinometer-camera system requirements. Finally, the assembly error of the inclinometer is calibrated by the optimization function. Experimental results show that the inclinometer assembly error can be calibrated only by using the inclination angle information in conjunction with plumb lines in the scenario. Perturbation simulations and practical experiments using MATLAB indicate the feasibility of the proposed method. The inclined image can also be horizontally corrected by the homography matrix obtained during the calculation of the inclinometer assembly error.
Improved HDRG decoders for qudit and non-Abelian quantum error correction
Hutter, Adrian; Loss, Daniel; Wootton, James R.
2015-03-01
Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^{2/3}) to Ω(L^{1-ε}) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons stays an open problem.
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements ... -specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies ... in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Publication date: 2009/3/30.
Directory of Open Access Journals (Sweden)
Yuriy YATSUK
2015-06-01
Full Text Available During design, the uncertainty approach cannot be used because measurement results are not yet available; the error approach, however, can be applied successfully by taking the nominal value of the instrument transformation function as the true value. The limiting possibilities of additive error correction of measuring instruments for Cyber-Physical Systems are studied on the basis of general and special methods of measurement. The principles of maximal symmetry of the measuring circuit and of its minimal reconfiguration are proposed for measurement and/or calibration. It is theoretically justified, for a variety of correction methods, that a minimum additive error of measuring instruments exists when the real equivalent parameters of the input electronic switches are taken into account. Conditions for in-place self-calibration and verification of the measuring instruments are also studied.
Correction method for the error of diamond tool's radius in ultra-precision cutting
Wang, Yi; Yu, Jing-chi
2010-10-01
Compensation of the diamond tool's cutting-edge error is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was carried out according to measurements from a profilometer, which required long measurement times and led to low processing efficiency. This article puts forward a new compensation method in which the error of the diamond tool's cutting edge is corrected according to measurements from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is derived. Then, the effect of the compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction-turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.
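The correction-turning loop behind such methods can be sketched numerically: the residual form error measured after the first pass is subtracted from the commanded toolpath of the second pass, so a systematic cutting-edge error cancels. The quadratic "tool error" profile below is a made-up illustration, not the paper's measured data.

```python
def cut(toolpath, tool_error):
    # machined surface = commanded toolpath + systematic cutting-edge error
    return [z + e for z, e in zip(toolpath, tool_error)]

xs         = [i * 0.1 for i in range(11)]        # mm along the profile
design     = [0.0] * 11                          # target: flat surface
tool_error = [0.002 * x * x for x in xs]         # assumed edge-error shape, mm

first_pass = cut(design, tool_error)
residual   = [m - d for m, d in zip(first_pass, design)]   # interferometer result

# correction turning: command the design minus the measured residual
second_pass = cut([d - r for d, r in zip(design, residual)], tool_error)

pv_before = max(first_pass) - min(first_pass)
err_after = [m - d for m, d in zip(second_pass, design)]
pv_after  = max(err_after) - min(err_after)
```

With a perfectly repeatable error the residual cancels in one iteration; real processes converge over a few passes, which is where the "speed of error convergence" the abstract reports matters.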
Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers.
Mainsah, Boyla O; Morton, Kenneth D; Collins, Leslie M; Sellers, Eric W; Throckmorton, Chandra S
2015-09-01
P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35-185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (-47-0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44-416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43-433%).
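The "string matching and word frequency" baseline can be sketched as a nearest-dictionary-word lookup: choose the word with minimal edit distance to the typed string, breaking ties by corpus frequency. The tiny dictionary and frequency counts are invented for illustration.

```python
def edit_distance(a, b):
    # classic Levenshtein distance via dynamic programming
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))   # substitution
    return d[-1][-1]

# toy dictionary with made-up corpus frequencies
freq = {"hello": 100, "help": 60, "hero": 20, "spell": 30}

def correct(typed):
    # closest dictionary word; ties broken by higher word frequency
    return min(freq, key=lambda word: (edit_distance(typed, word), -freq[word]))
```

The Bayesian variant the abstract describes additionally weights candidates by the P300 classifier's per-character confidences instead of treating each selected character as equally trustworthy.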
Neural Network Based Real-time Correction of Transducer Dynamic Errors
Roj, J.
2013-12-01
In order to carry out real-time dynamic error correction of transducers described by a linear differential equation, a novel recurrent neural network was developed. The network structure is based on solving this equation with respect to the input quantity using the state variables. It is shown that such real-time correction can be carried out using simple linear perceptrons. Due to the use of a neural technique, knowledge of the dynamic parameters of the transducer is not necessary. Theoretical considerations are illustrated by the results of simulation studies performed for a modeled second-order transducer. The most important properties of the neural dynamic error correction are discussed, with emphasis on its fundamental advantages and disadvantages.
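The underlying algebra, solving the transducer equation for the input quantity, can be sketched directly: for a second-order transducer a₂y″ + a₁y′ + y = x, the input is reconstructed from the sampled output with finite differences, which is the linear combination the perceptrons learn. Coefficients and the test signal below are illustrative assumptions.

```python
import math

a1, a2 = 0.02, 1e-4        # assumed transducer coefficients
dt, w  = 1e-3, 50.0        # sample period [s] and test frequency [rad/s] (assumed)

# Steady-state response of  a2*y'' + a1*y' + y = x  to  x(t) = sin(w*t):
den = (1 - a2 * w * w) ** 2 + (a1 * w) ** 2
A   = 1 / math.sqrt(den)
phi = -math.atan2(a1 * w, 1 - a2 * w * w)

t = [k * dt for k in range(2000)]
y = [A * math.sin(w * tk + phi) for tk in t]   # what the transducer outputs

# Invert the differential equation with central differences to recover x.
ks    = range(1, len(y) - 1)
x_hat = [a2 * (y[k + 1] - 2 * y[k] + y[k - 1]) / dt ** 2
         + a1 * (y[k + 1] - y[k - 1]) / (2 * dt)
         + y[k]
         for k in ks]

x_true  = [math.sin(w * t[k]) for k in ks]
err     = max(abs(a - b) for a, b in zip(x_hat, x_true))      # corrected error
raw_err = max(abs(y[k] - xt) for k, xt in zip(ks, x_true))    # uncorrected error
```

The appeal of the neural formulation in the paper is that it learns this combination without a₁ and a₂ being known in advance.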
DEFF Research Database (Denmark)
Sørensen, Stefan; Nielsen, Hans Ove
2002-01-01
In this paper we present a comparison of different line and cable series impedance calculation methods, where the correction of a discovered PSCAD/EMTDC v.3.0.8 calculation error of the cable series impedance results in a deviation under 0.1%, instead of the approximately 10% deviation from other methods given by the previous method. The correction is done by adjusting the earth return path impedance for the cable model, and will thereby form the basis for a future comparison with measured data from a real full-scale earth fault experiment on a mixed line and cable network.
Role of Refractive Errors in Inducing Asthenopic Symptoms Among Spectacle Corrected Ammetropes
Directory of Open Access Journals (Sweden)
Padma B Prabhu
2016-04-01
Full Text Available Refractive errors are a major cause of asthenopic symptoms in the young age group. Aim and objectives: This study tries to ascertain the prevalence of refractive errors in a cohort of subjects with spectacle-corrected ammetropia and to elucidate the relation between the type, severity and subcategories of refractive errors in such a group. Design: Descriptive cross-sectional study. Methods: This is a prospective analysis of cases with asthenopia and coexistent significant refractive errors warranting the use of spectacles. Best corrected visual acuity of 20/20 was ensured. Retinoscopy readings after complete cycloplegia were noted. The spherical equivalent was calculated from the absolute retinoscopy reading. Ammetropia not fully corrected with spectacles, history of migraine, headache not related to constant near work, symptoms of less than three months duration, associated accommodation-convergence anomalies and latent squints were excluded. Results: The study group included thirty-five patients. The mean age was 23.48 years (SD 6.97). There were 15 males and 20 females. Twenty-seven patients had bilateral symptoms (77.14%). Thirty-six subjects (58.08%) had a spherical equivalent between 0.25D and 0.75D. The refractive errors included myopia (n=10), hypermetropia (n=26) and astigmatism (n=26). Near-work-associated headache was observed in 39 patients (62.86%). 46.15% of the cases with near-work-related headache had uncorrected astigmatism. Conclusion: Asthenopic symptoms are frequent and significant among spectacle-corrected ammetropes. Lower degrees of refractive errors are more symptomatic. Hypermetropia and astigmatism constitute the major causative factors.
Wind Power Prediction Based on LS-SVM Model with Error Correction
Directory of Open Access Journals (Sweden)
ZHANG, Y.
2017-02-01
Full Text Available As conventional energy sources are non-renewable, the world's major countries are investing heavily in renewable energy research. Wind power represents the development trend of future energy, but the intermittency and volatility of wind energy are the main reasons for the poor accuracy of wind power prediction. However, analysis of the error level at different time points shows that the errors at adjacent times are often approximately the same, so a least squares support vector machine (LS-SVM) model with error correction is used to predict wind power in this paper. In simulations on wind power data from two wind farms, the proposed method effectively improves the prediction accuracy of wind power, and the error distribution is concentrated almost without deviation. The improved method takes into account the error correction process of the model, which improves the prediction accuracy of the traditional models (RBF, Elman, LS-SVM). Compared with the single LS-SVM prediction model, the mean absolute error of the proposed method decreased by 52 percent. The research work in this paper will be helpful for the reasonable arrangement of dispatching operation plans, the normal operation of wind farms, and the large-scale development and full utilization of renewable energy resources.
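The error-correction step exploits the observation that forecast errors at adjacent time points are similar: the error observed at the previous step is added back to the next base forecast. The sketch below replaces the paper's LS-SVM with a deliberately biased toy predictor on synthetic data, purely to show the mechanism.

```python
actual = [5.0, 5.2, 5.6, 6.1, 6.0, 5.8, 5.5, 5.3]   # synthetic wind power, MW
base   = [a - 0.4 for a in actual]                   # toy model with a slow bias

corrected = [base[0]]                 # no previous error available at t = 0
for t in range(1, len(base)):
    prev_err = actual[t - 1] - base[t - 1]   # error observed at the previous step
    corrected.append(base[t] + prev_err)     # persistence of the error

mae_base = sum(abs(a - b) for a, b in zip(actual, base)) / len(actual)
mae_corr = sum(abs(a - b) for a, b in zip(actual, corrected)) / len(actual)
```

When the error really is slowly varying, as here, the correction removes almost all of it; the less persistent the error process, the smaller the gain.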
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J
2017-11-01
Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, consistent without imposing a covariate or error distribution, and robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses.
Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E
2013-12-01
In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
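The attenuation that regression calibration undoes can be shown in a few lines: with additive error W = X + U, the naive slope of Y on W shrinks by the reliability ratio λ = var(X)/(var(X)+var(U)), and dividing by λ restores it. This is a generic RC illustration with assumed variances, not the survival-model setting of the article.

```python
import random

random.seed(0)
n = 20000
sx, su, se = 1.0, 0.5, 0.1          # assumed sds: truth, measurement error, noise

x = [random.gauss(0, sx) for _ in range(n)]        # true exposure (unobserved)
w = [xi + random.gauss(0, su) for xi in x]         # error-prone measurement
y = [2.0 * xi + random.gauss(0, se) for xi in x]   # outcome; true slope = 2

def slope(a, b):
    # ordinary least squares slope of b on a
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var = sum((ai - ma) ** 2 for ai in a)
    return cov / var

naive = slope(w, y)                   # attenuated towards zero, about 2 * 0.8
lam   = sx**2 / (sx**2 + su**2)       # reliability ratio (error variance known here)
rc    = naive / lam                   # regression-calibration corrected slope
```

SIMEX instead adds further simulated error at increasing variance levels and extrapolates the resulting slope trend back to zero error; both approaches need an estimate of the error variance, here simply assumed known.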
A two-dimensional matrix correction for off-axis portal dose prediction errors
International Nuclear Information System (INIS)
Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.
2013-01-01
Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Employing the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
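The core of a 2D matrix correction is a per-pixel ratio of predicted to measured dose, built from calibration images spanning the detector and then applied elementwise to subsequent images. The 3x3 "images" below are toy data standing in for full detector matrices.

```python
# Calibration data: off-axis columns of the detector read systematically high.
measured  = [[1.00, 1.02, 1.10],
             [1.01, 1.00, 1.08],
             [1.03, 1.02, 1.12]]
predicted = [[1.00, 1.00, 1.00],
             [1.00, 1.00, 1.00],
             [1.00, 1.00, 1.00]]

# Per-pixel correction matrix from the calibration comparison.
corr = [[p / m for p, m in zip(pr, mr)] for pr, mr in zip(predicted, measured)]

def apply_correction(img):
    # elementwise multiply a calibrated image by the correction matrix
    return [[v * c for v, c in zip(ir, cr)] for ir, cr in zip(img, corr)]

# A new acquisition showing the same systematic off-axis pattern is flattened.
fixed = apply_correction(measured)

max_dev = max(abs(f - p) for fr, pr in zip(fixed, predicted)
              for f, p in zip(fr, pr))
```

Unlike a radial correction, the matrix makes no symmetry assumption, so asymmetric off-axis deviations are handled the same way as radial ones.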
Dynamically correcting two-qubit gates against any systematic logical error
Calderon Vargas, Fernando Antonio
The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.
Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach
Islam, Mohammad Rakibul; Han, Young Shin
2011-01-01
Cooperative communication in wireless sensor networks (WSN) explores energy-efficient wireless communication schemes between multiple sensors and a data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy-efficient cooperative MIMO (C-MIMO) technique is proposed in which a low density parity check (LDPC) code is used as the error correcting code. The rate of the LDPC code is varied by varying the length of the message and parity bits. Simulation results show that the cooperative communication scheme outperforms the SISO scheme in the presence of LDPC coding. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenarios. Energy efficiencies are compared for different targeted probabilities of bit error pb. It is observed that C-MIMO performs more efficiently when the targeted pb is smaller. Also, a lower encoding rate for the LDPC code offers better error characteristics. PMID:22163732
Corrected-loss estimation for quantile regression with covariate measurement errors.
Wang, Huixia Judy; Stefanski, Leonard A; Zhu, Zhongyi
2012-06-01
We study estimation in quantile regression when covariates are measured with errors. Existing methods require stringent assumptions, such as spherically symmetric joint distribution of the regression and measurement error variables, or linearity of all quantile functions, which restrict model flexibility and complicate computation. In this paper, we develop a new estimation approach based on corrected scores to account for a class of covariate measurement errors in quantile regression. The proposed method is simple to implement. Its validity requires only linearity of the particular quantile function of interest, and it requires no parametric assumptions on the regression error distributions. Finite-sample results demonstrate that the proposed estimators are more efficient than the existing methods in various models considered.
Error analysis of motion correction method for laser scanning of moving objects
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Limited literature is available, showing the development of very few methods capable of addressing the problem of object motion during scanning. All the existing methods utilize their own models or sensors, and studies on error modelling or analysis of any of the motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects such as hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method, as well as a detailed account of the behavior and variation of the error due to different sensor components alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
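The correction step itself is a timestamped rigid-body transform: each range point is mapped with the POS pose valid at its own acquisition instant, so points of one physical feature observed at different times land in the same place. A 2D sketch with an assumed, linearly drifting pose:

```python
import math

def pose(t):
    # assumed POS stream: constant yaw rate and constant velocity
    yaw = 0.1 * t
    return yaw, (1.0 * t, 0.0)

def to_world(p_scan, t):
    # motion correction: rotate and translate with the pose at time t
    yaw, (tx, ty) = pose(t)
    c, s = math.cos(yaw), math.sin(yaw)
    x, y = p_scan
    return (c * x - s * y + tx, s * x + c * y + ty)

def to_scanner(p_world, t):
    # inverse transform, used here only to simulate raw observations
    yaw, (tx, ty) = pose(t)
    c, s = math.cos(yaw), math.sin(yaw)
    x, y = p_world[0] - tx, p_world[1] - ty
    return (c * x + s * y, -s * x + c * y)

# One static corner point observed twice while the platform moves:
world_pt = (5.0, 2.0)
obs1 = to_scanner(world_pt, 0.0)   # raw scanner-frame coordinates differ...
obs2 = to_scanner(world_pt, 2.0)
rec1 = to_world(obs1, 0.0)         # ...but the corrected coordinates agree
rec2 = to_world(obs2, 2.0)
```

The error budget in the paper then follows from propagating the uncertainties of the pose terms (yaw, position, timing) through exactly this transform.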
Some contributions to non-linear physic: Mathematical problems
International Nuclear Information System (INIS)
1981-01-01
The main results contained in this report are the following: i) Lagrangian universality holds in a precisely defined weak sense. ii) Isolation of 5th-order polynomial evolution equations having high-order conservation laws. iii) Hamiltonian formulation of a wide class of non-linear evolution equations. iv) Some properties of the symmetries of Gardner-like systems. v) Characterization of the range and kernel of ∂/∂u^α, |α| ≥ 1. vi) A generalized variational approach and application to the anharmonic oscillator. vii) Relativistic correction and quasi-classical approximation to the anharmonic oscillator. viii) Properties of a special class of 6th-order anharmonic oscillators. ix) A new method for constructing conserved densities in PDE. (Author) 97 refs
Considering system non-linearity in transmission pricing
International Nuclear Information System (INIS)
Oloomi-Buygi, M.; Salehizadeh, M. Reza
2008-01-01
In this paper a new approach for transmission pricing is presented. The contribution of a contract to the power flow of a transmission line is used as the extent-of-use criterion for transmission pricing. In order to determine the contribution of each contract to the power flow of each transmission line, the contribution of each contract to each voltage angle is determined first, which is called voltage angle decomposition. To this end, DC power flow is used to compute a primary solution for the voltage angle decomposition. To consider the impacts of system non-linearity on the voltage angle decomposition, a method is presented to determine the share of the different terms of the sine argument in the sine value. The primary solution is then corrected in successive iterations of a decoupled Newton-Raphson power flow using the presented sharing method. The presented approach is applied to a 4-bus test system and the IEEE 30-bus test system, and the results are analyzed. (author)
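The DC-power-flow primary solution rests on linearity: the angles (and hence line flows) produced by each contract's injection vector can be solved separately and summed by superposition, which is what makes the per-contract decomposition well defined before the non-linear correction. A 3-bus toy system (bus 0 as slack, illustrative reactances and contracts):

```python
def solve2(a, b, c, d, p, q):
    # solve [[a, b], [c, d]] @ [x, y] = [p, q] by Cramer's rule
    det = a * d - b * c
    return ((p * d - q * b) / det, (a * q - c * p) / det)

# line reactances (per unit): lines 0-1, 0-2, 1-2
x01, x02, x12 = 0.1, 0.2, 0.25

# reduced susceptance matrix for the non-slack buses 1 and 2
b11 = 1 / x01 + 1 / x12
b22 = 1 / x02 + 1 / x12
b12 = -1 / x12

def angles(p1, p2):
    # DC power flow: B_reduced @ theta = P
    return solve2(b11, b12, b12, b22, p1, p2)

def flow12(th1, th2):
    # DC flow on line 1-2
    return (th1 - th2) / x12

# two bilateral contracts, expressed as net injections at buses 1 and 2 (p.u.)
contracts = [(0.6, -0.6), (0.0, 0.4)]

per_contract = [flow12(*angles(p1, p2)) for p1, p2 in contracts]
total        = flow12(*angles(sum(c[0] for c in contracts),
                              sum(c[1] for c in contracts)))
```

In the full AC problem the sine terms break this exact superposition, which is why the paper's sharing method is needed to redistribute the non-linear part among contracts.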
Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome.
Goodwin, Sara; Gurtowski, James; Ethe-Sayers, Scott; Deshpande, Panchajanya; Schatz, Michael C; McCombie, W Richard
2015-11-01
Monitoring the progress of DNA molecules through a membrane pore has been postulated as a method for sequencing DNA for several decades. Recently, a nanopore-based sequencing instrument, the Oxford Nanopore MinION, has become available, and we used this for sequencing the Saccharomyces cerevisiae genome. To make use of these data, we developed a novel open-source hybrid error correction algorithm Nanocorr specifically for Oxford Nanopore reads, because existing packages were incapable of assembling the long read lengths (5-50 kbp) at such high error rates (between ∼5% and 40% error). With this new method, we were able to perform a hybrid error correction of the nanopore reads using complementary MiSeq data and produce a de novo assembly that is highly contiguous and accurate: The contig N50 length is more than ten times greater than an Illumina-only assembly (678 kbp versus 59.9 kbp) and has >99.88% consensus identity when compared to the reference. Furthermore, the assembly with the long nanopore reads presents a much more complete representation of the features of the genome and correctly assembles gene cassettes, rRNAs, transposable elements, and other genomic features that were almost entirely absent in the Illumina-only assembly. © 2015 Goodwin et al.; Published by Cold Spring Harbor Laboratory Press.
Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique
Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka
2016-06-01
Additive Manufacturing (AM) is a rapid manufacturing process whose input data can come from various sources such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. A Bio-AM model gives a better lead on the pre-planning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible, and determines a correction factor for Bio-AM models built with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images are acquired by a CT scanner and converted into a 3D CAD model in the form of an STL model. The data are then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is taken as the dimensional error, and the ratio of STL to Bio-AM model dimensions as the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient's anatomy. Such dimensionally true Bio-AM models increase the safety and accuracy of pre-planning for oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3%.
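The correction-factor arithmetic can be shown in a few lines: the factor is the ratio of STL (design) to printed dimensions, averaged over measured features, and the next build's CAD dimensions are scaled by it. The measurements below are invented to reproduce a factor near the reported 1.003; they are not the paper's data.

```python
stl_dims = [100.00, 45.00, 62.50]   # mm, nominal dimensions from the STL model
printed  = [99.70, 44.87, 62.31]    # mm, measured on the FDM part (made-up values)

# per-feature ratios, then one averaged correction factor
factors = [s / p for s, p in zip(stl_dims, printed)]
factor  = sum(factors) / len(factors)

# dimensional error of the uncompensated build, in percent
dim_error_pct = [100 * abs(p - s) / s for p, s in zip(printed, stl_dims)]

# scale the CAD input for the next build by the correction factor
compensated = [s * factor for s in stl_dims]
```

A single scalar factor only compensates a uniform shrinkage; anisotropic FDM errors would need per-axis factors, which the same ratio construction supports.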
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...
International Nuclear Information System (INIS)
Kang, Soo Man
2008-01-01
To reduce side effects in image-guided radiation therapy (IGRT), to improve the quality of life of patients, and to meet accurate SETUP conditions for patients, various SETUP correction conditions were compared and evaluated using the on-board imager (OBI) during SETUP. 30 cases each of the head, neck, chest, abdomen and pelvis among 150 IGRT patients were corrected after confirmation using OBI every 2-3 days. The difference between the SETUP through the skin marker and the anatomic SETUP through the OBI was also evaluated. General SETUP errors (transverse, coronal, sagittal) measured with the OBI at the original SETUP position were: head and neck 1.3 mm, brain 2 mm, chest 3 mm, abdomen 3.7 mm, pelvis 4 mm. For patients with errors of more than 3 mm, the correction devices and patient motion were checked in the treatment room. Moreover, in the case of female patients, part of the error during head-and-neck and brain-tumor treatment arose from the position of the hair. Therefore, in each case with an error over 3 mm, the SETUP was repeated before the treatment was carried out. Mean error values of each part estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results show that correcting the SETUP for each treatment through OBI is demanding, given the importance of SETUP in radiation treatment. However, by establishing an average standard for patients from these results, better patient satisfaction and treatment results could be obtained.
Energy Technology Data Exchange (ETDEWEB)
Kang, Soo Man [Dept. of Radiation Oncology, Kosin University Gospel Hospital, Busan (Korea, Republic of)
2008-09-15
To reduce side effects in image-guided radiation therapy (IGRT), to improve the quality of life of patients, and to meet accurate SETUP conditions for patients, various SETUP correction conditions were compared and evaluated using the on-board imager (OBI) during SETUP. 30 cases each of the head, neck, chest, abdomen and pelvis among 150 IGRT patients were corrected after confirmation using OBI every 2-3 days. The difference between the SETUP through the skin marker and the anatomic SETUP through the OBI was also evaluated. General SETUP errors (transverse, coronal, sagittal) measured with the OBI at the original SETUP position were: head and neck 1.3 mm, brain 2 mm, chest 3 mm, abdomen 3.7 mm, pelvis 4 mm. For patients with errors of more than 3 mm, the correction devices and patient motion were checked in the treatment room. Moreover, in the case of female patients, part of the error during head-and-neck and brain-tumor treatment arose from the position of the hair. Therefore, in each case with an error over 3 mm, the SETUP was repeated before the treatment was carried out. Mean error values of each part estimated after the correction were 1 mm for the head, 1.2 mm for the neck, 2.5 mm for the chest, 2.5 mm for the abdomen, and 2.6 mm for the pelvis. The results show that correcting the SETUP for each treatment through OBI is demanding, given the importance of SETUP in radiation treatment. However, by establishing an average standard for patients from these results, better patient satisfaction and treatment results could be obtained.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
In situ correction of field errors induced by temperature gradient in cryogenic undulators
Directory of Open Access Journals (Sweden)
Takashi Tanaka
2009-12-01
Full Text Available A new technique of undulator field correction for cryogenic permanent magnet undulators (CPMUs is proposed to correct the phase error induced by temperature gradient. This technique takes advantage of two important instruments: one is the in-vacuum self-aligned field analyzer with laser instrumentation system to precisely measure the distribution of the magnetic field generated by the permanent magnet arrays placed in vacuum, and the other is the differential adjuster to correct the local variation of the magnet gap. The details of the two instruments are described together with the method of how to analyze the field measurement data and deduce the gap variation along the undulator axis. The correction technique was applied to the CPMU with a length of 1.7 m and a magnetic period of 14 mm. It was found that the phase error induced during the cooling process was attributable to local gap variations of around 30 μm, which were then corrected by the differential adjuster.
Directory of Open Access Journals (Sweden)
R. Barbiero
2007-05-01
Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
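The two simplest techniques compared above can be sketched on synthetic data (not the Alpine station records): a training-period mean bias correction and a linear MOS regression applied to a warm-biased forecast.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 400
t_obs = rng.normal(2.0, 4.0, days)               # observed minimum temperature (deg C)
# Raw NWP forecast: warm-biased (e.g. model orography too low) plus noise.
t_nwp = t_obs + 3.0 + rng.normal(0.0, 1.5, days)

train, test = slice(0, 300), slice(300, None)

# 1) Mean bias correction, estimated on the training period only.
bias = np.mean(t_nwp[train] - t_obs[train])
t_mbc = t_nwp[test] - bias

# 2) Linear MOS: least-squares fit obs ~ a * forecast + b.
a, b = np.polyfit(t_nwp[train], t_obs[train], 1)
t_mos = a * t_nwp[test] + b

mae_raw = np.mean(np.abs(t_nwp[test] - t_obs[test]))
mae_mbc = np.mean(np.abs(t_mbc - t_obs[test]))
mae_mos = np.mean(np.abs(t_mos - t_obs[test]))
```

Both post-processed forecasts beat the raw output on the held-out period; as in the study, the gap between the simple and the regression-based correction is small when the error is dominated by a constant bias.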
GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement
Energy Technology Data Exchange (ETDEWEB)
Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Luszczek, Piotr [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)
2011-12-14
In hardware-aware high performance computing, block-asynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they take very different approaches for this purpose, they share the basic idea of compensating for the convergence behaviour of an inferior numerical algorithm by a more efficient usage of the provided computing power. In this paper, we analyze the potential of combining both techniques. To this end, we implement a mixed precision iterative refinement algorithm using a block-asynchronous iteration as the error correction solver, and compare its performance with a pure implementation of a block-asynchronous iteration and an iterative refinement method using double precision for the error correction solver. For matrices from the University of Florida Matrix Collection, we report the convergence behaviour and provide the total solver runtime using different GPU architectures.
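The basic mixed precision iterative refinement loop can be sketched with NumPy. Here a direct single-precision solve stands in for the low-precision error correction solver (the paper substitutes a block-asynchronous iteration); the residual is always formed in double precision.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

# Initial solve entirely in single precision.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(5):
    r = b - A @ x                                  # residual in double precision
    # Error correction solve in single precision on the residual system.
    c = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += c                                         # refine the current iterate

rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

For a well-conditioned system, a handful of refinement steps drives the residual down to double-precision levels even though every solve ran in single precision.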
Directory of Open Access Journals (Sweden)
Mahmudul Mannan Toy
2011-01-01
Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality and Vector Error Correction are applied to estimate the export supply model. The econometric analysis is carried out using time series data on the variables of interest, collected from various secondary sources. The study empirically tests the hypotheses of long-run relationships and causality between the variables of the model. The cointegration analysis shows that all the variables of the study are cointegrated at their first differences, meaning that there exists a long-run relationship among the variables. The VECM estimation shows the dynamics of variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will be turned into equilibrium in the long run.
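The two-step logic of cointegration plus error correction can be sketched on synthetic data (not the Bangladesh export series; the variable names and the 0.8 long-run coefficient are illustrative): first estimate the long-run relation, then regress the differenced series on the lagged equilibrium error.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 500
# Common stochastic trend: x is a random walk, y is cointegrated with x.
x = np.cumsum(rng.normal(0, 1, T))
y = 2.0 + 0.8 * x + rng.normal(0, 1, T)     # long-run relation y = 2 + 0.8 x + u

# Step 1: estimate the long-run (cointegrating) regression, keep residuals.
beta1, beta0 = np.polyfit(x, y, 1)
u = y - (beta0 + beta1 * x)

# Step 2: error-correction model on first differences,
#   dy_t = gamma * u_{t-1} + phi * dx_t + e_t
dy, dx, u_lag = np.diff(y), np.diff(x), u[:-1]
Z = np.column_stack([u_lag, dx])
gamma, phi = np.linalg.lstsq(Z, dy, rcond=None)[0]
```

A negative `gamma` is exactly the finding reported in the abstract: deviations from the long-run equilibrium are pulled back toward zero in subsequent periods.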
Achieving the Heisenberg limit in quantum metrology using quantum error correction.
Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang
2018-01-08
Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta-analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study-specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long-term studies...
Large-time asymptotic behaviour of solutions of non-linear Sobolev-type equations
International Nuclear Information System (INIS)
Kaikina, Elena I; Naumkin, Pavel I; Shishmarev, Il'ya A
2009-01-01
The large-time asymptotic behaviour of solutions of the Cauchy problem is investigated for a non-linear Sobolev-type equation with dissipation. For small initial data the approach taken is based on a detailed analysis of the Green's function of the linear problem and the use of the contraction mapping method. The case of large initial data is also closely considered. In the supercritical case the asymptotic formulae are quasi-linear. The asymptotic behaviour of solutions of a non-linear Sobolev-type equation with a critical non-linearity of the non-convective kind differs by a logarithmic correction term from the behaviour of solutions of the corresponding linear equation. For a critical convective non-linearity, as well as for a subcritical non-convective non-linearity it is proved that the leading term of the asymptotic expression for large times is a self-similar solution. For Sobolev equations with convective non-linearity the asymptotic behaviour of solutions in the subcritical case is the product of a rarefaction wave and a shock wave. Bibliography: 84 titles.
Directory of Open Access Journals (Sweden)
Rosa M. Manchón
2010-06-01
Full Text Available Framed in a cognitively-oriented strand of research on corrective feedback (CF) in SLA, the controlled three-stage (composition/comparison-noticing/revision) study reported in this paper investigated the effects of two forms of direct CF (error correction and reformulation) on noticing and uptake, as evidenced in the written output produced by a group of 8 secondary school EFL learners. Noticing was operationalized as the amount of corrections noticed in the comparison stage of the writing task, whereas uptake was operationally defined as the type and amount of accurate revisions incorporated in the participants' revised versions of their original texts. Results support previous research findings on the positive effects of written CF on noticing and uptake, with a clear advantage of error correction over reformulation as far as uptake was concerned. Data also point to the existence of individual differences in the way EFL learners process and make use of CF in their writing. These findings are discussed from the perspective of the light they shed on the learning potential of CF in instructed SLA, and suggestions for future research are put forward.
Modeling Dynamics of Wikipedia: An Empirical Analysis Using a Vector Error Correction Model
Directory of Open Access Journals (Sweden)
Liu Feng-Jun
2017-01-01
Full Text Available In this paper, we constructed a system dynamics model of Wikipedia based on co-evolution theory, and investigated the interrelationships among topic popularity, group size, collaborative conflict, coordination mechanism, and information quality using the vector error correction model (VECM). This study provides a useful framework for analyzing the dynamics of Wikipedia and presents a formal exposition of the VECM methodology in information systems research.
Sjarif, Indra Nurcahyo; 小谷, 浩示; Lin, Ching-Yang
2011-01-01
This paper investigates the causal relationship between fishery exports and economic growth in Indonesia by utilizing cointegration and error-correction models. Using annual data from 1969 to 2005, we find evidence of a long-run relationship as well as bi-directional causality between exports and economic growth in Indonesia's fishery sub-sector. To the best of our knowledge, this is the first research to examine this issue focusing on a natural resource based indu...
Directory of Open Access Journals (Sweden)
Christian NZENGUE PEGNET
2011-07-01
Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
Simple Low-Density Parity Check (LDPC) Staircase Forward Error Correction (FEC) Scheme for FECFRAME
Roca, Vincent; Cunche, Mathieu; Lacan, Jérôme
2012-01-01
Internet Engineering Task Force (IETF) Request for Comments 6816; This document describes a fully specified simple Forward Error Correction (FEC) scheme for Low-Density Parity Check (LDPC) Staircase codes that can be used to protect media streams along the lines defined by FECFRAME. These codes have many interesting properties: they are systematic codes, they perform close to ideal codes in many use-cases, and they also feature very high encoding and decoding throughputs. LDPC-Staircase codes...
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-02-03
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods of L1 optimization and utilizing the social characteristic coordinate with each other, and can correct propagated errors whose fraction is even exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
Teba, Sourou Corneille
2017-01-01
The aim of this paper is, firstly, to help teachers correct students' errors thoroughly with effective strategies. Secondly, it is an attempt to find out whether teachers in Beninese secondary schools are themselves interested in error correction. Finally, I would like to point out the effective strategies that an EFL teacher can use for errors…
Begeny, John C.; Daly, Edward J., III; Valleley, Rachel J.
2006-01-01
The purpose of this study was to compare two oral reading fluency treatments (repeated readings and phrase drill error correction) which differ in the way they prompt student responding. Repeated readings (RR) and phrase drill (PD) error correction were alternated with a baseline and a reward condition within an alternating treatments design with…
Nelson, Janet S.; Alber, Sheila R.; Gordy, Alicia
2004-01-01
This investigation used a multiple-baseline design to examine the effects of systematic error correction and of systematic error correction with repeated readings on the reading accuracy and fluency of four second-graders receiving special education services in a resource room. Three of the students were identified as having learning disabilities,…
Anti-D3 branes and moduli in non-linear supergravity
Garcia del Moral, Maria P.; Parameswaran, Susha; Quiroz, Norma; Zavala, Ivonne
2017-10-01
Anti-D3 branes and non-perturbative effects in flux compactifications spontaneously break supersymmetry and stabilise moduli in a metastable de Sitter vacuum. The low energy 4D effective field theory description for such models would be a supergravity theory with non-linearly realised supersymmetry. Guided by string theory modular symmetry, we compute this non-linear supergravity theory, including dependence on all bulk moduli. Using either a constrained chiral superfield or a constrained vector field, the uplifting contribution to the scalar potential from the anti-D3 brane can be parameterised either as an F-term or a Fayet-Iliopoulos D-term. Using the modular symmetry again, we show that 4D non-linear supergravities that descend from string theory have enhanced protection from quantum corrections by non-renormalisation theorems. The superpotential giving rise to metastable de Sitter vacua is robust against perturbative string-loop and α' corrections.
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Pastawski, Fernando; Yoshida, Beni; Harlow, Daniel; Preskill, John
2015-06-01
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in [1].
Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence
Energy Technology Data Exchange (ETDEWEB)
Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)
2015-06-23
We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of boundary operators from bulk operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
International Nuclear Information System (INIS)
Rota Kops, Elena; Herzog, Hans
2013-01-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
Rota Kops, Elena; Herzog, Hans
2013-02-01
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal
Errors in MR-based attenuation correction for brain imaging with PET/MR scanners
Energy Technology Data Exchange (ETDEWEB)
Rota Kops, Elena, E-mail: e.rota.kops@fz-juelich.de [Forschungszentrum Juelich, INM4, Juelich (Germany); Herzog, Hans [Forschungszentrum Juelich, INM4, Juelich (Germany)
2013-02-21
Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight templates pairs for all eight patients and all VOIs did not differ significantly one from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3
DEFF Research Database (Denmark)
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo
2016-01-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel...... to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather...
Error correcting code with chip kill capability and power saving enhancement
Energy Technology Data Exchange (ETDEWEB)
Gara, Alan G [Mount Kisco, NY; Chen, Dong [Croton On Husdon, NY; Coteus, Paul W [Yorktown Heights, NY; Flynn, William T [Rochester, MN; Marcella, James A [Rochester, MN; Takken, Todd [Brewster, NY; Trager, Barry M [Yorktown Heights, NY; Winograd, Shmuel [Scarsdale, NY
2011-08-30
A method and system are disclosed for detecting memory chip failure in a computer memory system. The method comprises the steps of accessing user data from a set of user data chips, and testing the user data for errors using data from a set of system data chips. This testing is done by generating a sequence of check symbols from the user data, grouping the user data into a sequence of data symbols, and computing a specified sequence of syndromes. If all the syndromes are zero, the user data has no errors. If one of the syndromes is non-zero, then a set of discriminator expressions is computed and used to determine whether a single or double symbol error has occurred. In the preferred embodiment, less than two full system data chips are used for testing and correcting the user data.
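The syndrome mechanism described above can be illustrated with the much simpler single-bit Hamming(7,4) code. The patented scheme works on multi-bit symbols spread across memory chips; this toy sketch only shows the core idea that a zero syndrome means "no error" and a non-zero syndrome locates the fault.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column i is the binary
# representation of i (1..7), so a single-bit error at position i yields
# a syndrome equal to binary(i).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

G = np.array([[1, 1, 0, 1],      # generator matrix: codeword = G @ data mod 2
              [1, 0, 1, 1],      # (parity bits at positions 1, 2, 4;
              [1, 0, 0, 0],      #  data bits at positions 3, 5, 6, 7)
              [0, 1, 1, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

def encode(data4):
    return (G @ data4) % 2

def correct(word7):
    syndrome = (H @ word7) % 2
    pos = int("".join(map(str, syndrome)), 2)   # 0 means "no error detected"
    if pos:
        word7 = word7.copy()
        word7[pos - 1] ^= 1                     # flip the single corrupted bit
    return word7

data = np.array([1, 0, 1, 1])
sent = encode(data)
received = sent.copy()
received[4] ^= 1                                # inject a single-bit error
fixed = correct(received)
```

After correction, the codeword matches the transmitted one and the data bits (positions 3, 5, 6, 7) are recovered intact.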
Intensity error correction for 3D shape measurement based on phase-shifting method
Chung, Tien-Tung; Shih, Meng-Hung
2011-12-01
3D shape measurement based on structured light systems has been a field of ongoing research for the past two decades. For 3D shape measurement using a commercial projector and digital camera, the nonlinear gamma of the projector and the nonlinear response of the camera cause the captured fringes to have both intensity and phase errors, resulting in large errors in the measured shape. This paper presents a simple intensity error correction process for the phase-shifting method. First, a white flat board is projected with sinusoidal fringe patterns, and the intensity data is extracted from the captured image. The intensity data is fitted to an ideal sine curve. The difference between the captured curve and the fitted sine curve is used to establish an intensity look-up table (LUT). The LUT is then used to calibrate the intensities of measured object images for establishing 3D object shapes. Research results show that the measurement quality of the 3D shapes is significantly improved.
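A minimal NumPy sketch of the LUT idea, with a synthetic gamma distortion standing in for the real projector/camera nonlinearity: calibrate on one fringe pattern by pairing captured levels with the ideal sine levels that produced them, then apply the mapping to a new measurement.

```python
import numpy as np

# Calibration: project fringes on a flat board and compare the captured
# intensity with the ideal sine it should follow (normalized to [0, 1]).
x = np.linspace(0, 4 * np.pi, 2000)
ideal = 0.5 + 0.5 * np.sin(x)
captured = ideal ** 2.2                 # synthetic nonlinear response (gamma 2.2)

# Build the LUT: for every captured level, the ideal level that produced it.
order = np.argsort(captured)
lut_in, lut_out = captured[order], ideal[order]

# Measurement: a fringe pattern with a different phase, same distortion.
measured_true = 0.5 + 0.5 * np.sin(x + 0.7)
measured = measured_true ** 2.2
corrected = np.interp(measured, lut_in, lut_out)   # apply the LUT

max_err_before = np.max(np.abs(measured - measured_true))
max_err_after = np.max(np.abs(corrected - measured_true))
```

The corrected intensities track the ideal sine to within the LUT's sampling resolution, which is what removes the periodic phase error in the phase-shifting reconstruction.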
Robust chemical preservation of digital information on DNA in silica with error-correcting codes.
Grass, Robert N; Heckel, Reinhard; Puddu, Michela; Paunescu, Daniela; Stark, Wendelin J
2015-02-16
Information, such as text printed on paper or images projected onto microfilm, can survive for over 500 years. However, the storage of digital information for time frames exceeding 50 years is challenging. Here we show that digital information can be stored on DNA and recovered without errors for considerably longer time frames. To allow for the perfect recovery of the information, we encapsulate the DNA in an inorganic matrix, and employ error-correcting codes to correct storage-related errors. Specifically, we translated 83 kB of information to 4991 DNA segments, each 158 nucleotides long, which were encapsulated in silica. Accelerated aging experiments were performed to measure DNA decay kinetics, which show that data can be archived on DNA for millennia under a wide range of conditions. The original information could be recovered error free, even after treating the DNA in silica at 70 °C for one week. This is thermally equivalent to storing information on DNA in central Europe for 2000 years. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
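The encoding pipeline can be illustrated with a toy scheme: two bits per nucleotide plus threefold repetition with majority voting. The actual paper uses Reed-Solomon codes, which tolerate far more errors per unit of redundancy; this sketch only shows the shape of the idea.

```python
from collections import Counter

NUC = "ACGT"  # 2 bits of information per nucleotide

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(NUC[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    bits = "".join(f"{NUC.index(c):02b}" for c in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def encode(data: bytes) -> list:
    seq = to_dna(data)
    return [seq, seq, seq]           # threefold repetition (toy redundancy)

def decode(copies: list) -> bytes:
    # Per-position majority vote corrects any single-copy substitution.
    voted = "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))
    return from_dna(voted)

copies = encode(b"DNA storage")
copies[1] = "G" + copies[1][1:]      # simulate a substitution error ('C' -> 'G')
recovered = decode(copies)
```

A Reed-Solomon outer code, as in the paper, achieves the same protection with a small fraction of this 3x overhead and additionally handles whole missing segments.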
Simulation Model for Correction and Modeling of Probe Head Errors in Five-Axis Coordinate Systems
Directory of Open Access Journals (Sweden)
Adam Gąska
2016-05-01
Full Text Available Simulative methods are nowadays frequently used in metrology for the simulation of measurement uncertainty and the prediction of errors that may occur during measurements. In coordinate metrology, such methods are primarily used with the typical three-axis Coordinate Measuring Machines (CMMs), and lately, also with mobile measuring systems. However, no similar simulative models have been developed for five-axis systems in spite of their growing popularity in recent years. This paper presents a numerical model of probe head errors for probe heads that are used in five-axis coordinate systems. The model is based on measurements of material standards (a standard ring) and the use of the Monte Carlo method combined with selected interpolation methods. The developed model may be used in conjunction with one of the known models of CMM kinematic errors to form a virtual model of a five-axis coordinate system. In addition, the developed methodology allows for the correction of identified probe head errors, thus improving measurement accuracy. Subsequent verification tests prove the correct functioning of the presented model.
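A Monte Carlo sketch of the approach, using hypothetical calibration values: probe head errors identified on a standard ring at a few approach angles are interpolated, random repeatability noise is sampled on top, and the identified systematic part is subtracted as a correction. All numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical probe-head radial errors (in micrometres) identified on a
# standard ring at a few approach angles during calibration.
cal_angles = np.array([0, 45, 90, 135, 180, 225, 270, 315, 360])
cal_errors = np.array([0.8, 1.1, 0.5, -0.2, -0.6, -0.4, 0.1, 0.6, 0.8])

def probe_error(angle_deg):
    """Interpolated systematic error plus random repeatability noise."""
    systematic = np.interp(angle_deg % 360, cal_angles, cal_errors)
    return systematic + rng.normal(0.0, 0.3, np.shape(angle_deg))

# Monte Carlo: simulate many probings of a point at a 60 degree approach angle.
samples = probe_error(np.full(10_000, 60.0))
mean_err, std_err = samples.mean(), samples.std()

# Correction: subtract the identified systematic component.
corrected = samples - np.interp(60.0, cal_angles, cal_errors)
```

The spread of `samples` estimates the probe head's contribution to measurement uncertainty, while `corrected` shows how removing the identified systematic error centres the distribution on zero, leaving only repeatability noise.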
International Nuclear Information System (INIS)
Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C
2017-01-01
The maximum operational range of continuous-variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel, an impossible scenario. We then consider the secret key model from a post-selection perspective and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)
DEFF Research Database (Denmark)
Garde, Henrik
2018-01-01
. For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method...
Mangels, Jennifer A; Hoxha, Olta; Lane, Sean P; Jarvis, Shoshana N; Downey, Geraldine
2017-08-01
For individuals high in Rejection Sensitivity (RS), a learned orientation to anxiously expect rejection from valued others, negative feedback from social sources may disrupt engagement with learning opportunities, impeding recovery from mistakes. One context in which this disruption may be particularly pronounced is among women high in RS following evaluation by a male in authority. To investigate this prediction, 40 college students (50% female) answered general knowledge questions followed by immediate performance feedback and the correct answer while we recorded event-related potentials. Error correction was measured with a subsequent surprise retest. Performance feedback was either nonsocial (asterisk/tone) or social (male professor's face/voice). Attention and learning were indexed respectively by the anterior frontal P3a (attentional orienting) and a set of negative-going waveforms over left inferior-posterior regions associated with successful encoding. For women, but not men, higher RS scores predicted poorer error correction in the social condition. A path analysis suggested that, for women, high RS disrupted attentional orienting to the social-evaluative performance feedback, which affected subsequent memory for the correct answer by reducing engagement with learning opportunities. These results suggest a mechanism for how social feedback may impede learning among women who are high in RS.
Terreros Lazo, Oscar
2012-01-01
In this article, you will find how autonomous EFL students in Lima, Peru, can become when they recognize and correct their own errors, guided by the teacher's advice about what to look for and how to do it, in a process I call "Error Hunting", carried out during regular class activities without interfering with those activities.
Practical retrace error correction in non-null aspheric testing: A comparison
Shi, Tu; Liu, Dong; Zhou, Yuhao; Yan, Tianliang; Yang, Yongying; Zhang, Lei; Bai, Jian; Shen, Yibing; Miao, Liang; Huang, Wei
2017-01-01
In non-null aspheric testing, retrace error is the primary error source, making it hard to extract the desired figure error from the aliased interferograms. Careful retrace error correction is therefore essential to reliable testing results. The performance of three commonly employed methods in practice, i.e. the GDI (geometrical deviation based on interferometry) method, the TRW (theoretical reference wavefront) method and the ROR (reverse optimization reconstruction) method, is compared with numerical simulations and experiments. The dynamic range of each method is determined and suitable applications are recommended. It is proposed that, with an aspherical reference wavefront, the dynamic range can be further enlarged. Results show that the dynamic range of the GDI method is small, while that of the TRW method can be enlarged with an aspherical reference wavefront, and the ROR method achieves the largest dynamic range with the highest accuracy. It is recommended that the GDI and TRW methods be applied to apertures with small figure error and small asphericity, and the ROR method to commercial and research applications calling for high accuracy and large dynamic range.
Gao, Mei-Jing; Tan, Ai-Ling; Yang, Ming; Xu, Jie; Zu, Zhen-Long; Wang, Jing-Yuan
2018-01-01
With optical micro-scanning technology, the spatial resolution of a thermal microscope imaging system can be increased without reducing the size of the detector unit or increasing the detector dimensions. Due to optical micro-scanning error, the four low-resolution images collected by the micro-scanning thermal microscope imaging system are not ideally down-sampled images. The reconstructed image quality is degraded by direct image interpolation in the presence of this error, which impairs the performance of the system. Techniques to reduce the system's micro-scanning error therefore need to be studied. Based on micro-scanning technology combined with the new edge-directed interpolation (NEDI) algorithm, an error correction technique for the micro-scanning instrument is proposed. Simulations and experiments show that the proposed technique can reduce the optical micro-scanning error, improve the imaging performance of the system and improve the system's spatial resolution. It can be applied to other electro-optical imaging systems to improve their resolution.
Bias correction for selecting the minimal-error classifier from many machine learning models.
Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C
2014-11-15
Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate-size real datasets and two large breast cancer datasets. The results showed that IPL outperforms the other methods in bias correction, with smaller variance, and it has the additional advantage of extrapolating error estimates to larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier accuracy. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
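The learning-curve idea behind the IPL correction can be sketched as fitting an inverse power law e(n) = a·n^(-b) + c to cross-validation error rates observed at increasing training sizes and then extrapolating; the sample sizes and error rates below are hypothetical, and the function is a sketch, not the 'MLbias' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(n, a, b, c):
    # Learning-curve model: error(n) = a * n^(-b) + c,
    # where c is the asymptotic error rate.
    return a * np.power(n, -b) + c

# Hypothetical cross-validation error rates at increasing training sizes.
sizes = np.array([10, 20, 30, 40, 50, 60])
errors = np.array([0.42, 0.33, 0.29, 0.27, 0.26, 0.25])

params, _ = curve_fit(inverse_power_law, sizes, errors,
                      p0=(1.0, 0.5, 0.2), maxfev=10000)
a, b, c = params

# Extrapolate: predicted error if 200 samples were recruited.
predicted = inverse_power_law(200, a, b, c)
```

Comparing `predicted` with the asymptote `c` indicates whether recruiting more samples is worthwhile.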
Burov, Stanislav; Figliozzi, Patrick; Lin, Binhua; Rice, Stuart A; Scherer, Norbert F; Dinner, Aaron R
2017-01-10
We present a general method for detecting and correcting biases in the outputs of particle-tracking experiments. Our approach is based on the histogram of estimated positions within pixels, which we term the single-pixel interior filling function (SPIFF). We use the deviation of the SPIFF from a uniform distribution to test the veracity of tracking analyses from different algorithms. Unbiased SPIFFs correspond to uniform pixel filling, whereas biased ones exhibit pixel locking, in which the estimated particle positions concentrate toward the centers of pixels. Although pixel locking is a well-known phenomenon, we go beyond existing methods to show how the SPIFF can be used to correct errors. The key is that the SPIFF aggregates statistical information from many single-particle images and localizations that are gathered over time or across an ensemble, and this information augments the single-particle data. We explicitly consider two cases that give rise to significant errors in estimated particle locations: undersampling the point spread function due to small emitter size and intensity overlap of proximal objects. In these situations, we show how errors in positions can be corrected essentially completely with little added computational cost. Additional situations and applications to experimental data are explored in SI Appendix. In the presence of experimental-like shot noise, the precision of the SPIFF-based correction achieves (and can even exceed) the unbiased Cramér-Rao lower bound. We expect the SPIFF approach to be useful in a wide range of localization applications, including single-molecule imaging and particle tracking, in fields ranging from biology to materials science to astronomy.
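The diagnostic half of the idea, building the SPIFF and testing it against a uniform distribution, can be sketched as below; the synthetic "locked" estimator that pulls positions toward pixel centres and the chi-square score are illustrative assumptions, not the paper's correction procedure.

```python
import numpy as np

def spiff(positions, n_bins=20):
    """Single-pixel interior filling function (SPIFF): histogram of the
    sub-pixel (fractional) parts of the estimated particle positions."""
    frac = np.mod(positions, 1.0)             # position within its pixel
    hist, _ = np.histogram(frac, bins=n_bins, range=(0.0, 1.0))
    return hist

def pixel_locking_score(hist):
    """Chi-square statistic of the SPIFF against a uniform distribution;
    large values indicate biased (pixel-locked) localisation."""
    expected = hist.sum() / hist.size
    return float(((hist - expected) ** 2 / expected).sum())

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 20000)                # true positions, pixel units
unbiased = x                                  # ideal estimator: uniform SPIFF
frac = np.mod(x, 1.0)
locked = np.floor(x) + 0.5 + 0.4 * (frac - 0.5)   # estimates pulled to centres

score_ok = pixel_locking_score(spiff(unbiased))
score_bad = pixel_locking_score(spiff(locked))
```

A flat SPIFF (small score) supports the tracking analysis; a peaked one flags pixel locking that the paper's method would then correct.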
Goldmann tonometry tear film error and partial correction with a shaped applanation surface.
McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M
2018-01-01
The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate the difference in simulated-cornea tear film separation measurements between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg). The adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg). On cadaver eyes, the CATS prism tear film error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Measured GAT tear film adhesion error is larger than previously predicted. The CATS prism significantly reduced tear film adhesion error, by approximately 41%. Fluorescein solution increases tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.
Melnikov's Method for Non-Linear Oscillators with Non-Linear Excitations
Garcia-Margallo, J.; Bejarano, J. D.
1998-04-01
The response of a non-linear oscillator of the form ẍ + f(A, B, x) = c g(E, μ, ω, k, t), where f(A, B, x) is an odd non-linearity and c is small, is considered for A < 0. The homoclinic orbits of the unperturbed system are obtained by using Jacobian elliptic functions with the generalized harmonic balance method. The chaotic limits of this equation are also studied with a generalized Melnikov function, M0(E, μ, x, ω, k, t0), depending on the variable k. A function R0(E, μ, ω, k) is defined such that chaotic motion only exists if E/μ > R0, with k from 0.51 to 0.99. It is demonstrated with Poincaré maps in the phase plane that there is good agreement between these predictions and the numerical simulations of the Duffing-Holmes oscillator using fourth-order Runge-Kutta numerical integration.
International Nuclear Information System (INIS)
Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.
1989-01-01
Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately when evaluating aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques for evaluating non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and economy in computing time compared with non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities.
Correction of clock errors in seismic data using noise cross-correlations
Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline
2017-04-01
Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). This procedure permits a continuous monitoring of clock errors where small clock
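The core measurement in this abstract, the shift between a daily CCF and the reference CCF, can be sketched as a simple cross-correlation lag estimate; the synthetic correlogram and sampling rate below are hypothetical, and real processing would average over station pairs and components as the abstract describes.

```python
import numpy as np

def clock_error(daily_ccf, reference_ccf, dt):
    """Clock error of one day, estimated as the lag (in seconds) that
    best aligns the daily noise cross-correlation with the reference."""
    xc = np.correlate(daily_ccf, reference_ccf, mode="full")
    lag = int(np.argmax(xc)) - (reference_ccf.size - 1)
    return lag * dt

# Synthetic check: a reference CCF and a copy delayed by 12 samples.
dt = 0.05                                        # 20 Hz sampling, assumed
t = np.arange(-30, 30, dt)
reference = np.exp(-(np.abs(t) - 8.0) ** 2)      # symmetric noise correlogram
daily = np.roll(reference, 12)                   # station clock late by 0.6 s

print(round(clock_error(daily, reference, dt), 3))  # -> 0.6
```

Sub-sample precision, as needed for the 20 ms accuracy quoted above, would additionally require interpolating the correlation peak.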
Correction for dynamic bias error in transmission measurements of void fraction
International Nuclear Information System (INIS)
Andersson, P.; Sundén, E. Andersson; Svärd, S. Jacobsson; Sjöstrand, H.
2012-01-01
Dynamic bias errors occur in transmission measurements, such as X-ray, gamma, or neutron radiography or tomography. This is observed when the properties of the object are not stationary in time and its average properties are assessed. The nonlinear measurement response to changes in transmission within the time scale of the measurement implies a bias, which can be difficult to correct for. A typical example is the tomographic or radiographic mapping of void content in dynamic two-phase flow systems. In this work, the dynamic bias error is described and a method to make a first-order correction is derived. A prerequisite for this method is variance estimates of the system dynamics, which can be obtained using high-speed, time-resolved data acquisition. However, in the absence of such acquisition, a priori knowledge might be used to substitute the time-resolved data. Using synthetic data, a void fraction measurement case study has been simulated to demonstrate the performance of the suggested method. The transmission length of the radiation in the object under study and the type of fluctuation of the void fraction have been varied. Significant decreases in the dynamic bias error were achieved at the expense of marginal decreases in precision.
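The origin of the bias and a first-order variance-based correction can be illustrated with a toy attenuation model; the attenuation coefficient and the Gaussian fluctuation of the transmission length are assumed for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.5                                  # attenuation coefficient (1/cm), assumed
x = rng.normal(10.0, 2.0, 100000)         # fluctuating transmission length (cm)

# Averaging the transmitted intensity over the fluctuations biases the
# estimate, because the response exp(-mu*x) is non-linear in x:
I_rel = np.exp(-mu * x).mean()            # time-averaged I/I0
naive = -np.log(I_rel) / mu               # biased estimate of the mean length

# First-order correction using a variance estimate of the dynamics
# (obtainable from time-resolved acquisition, or a priori knowledge):
corrected = naive + mu * x.var() / 2
```

Here the naive estimate understates the mean transmission length; adding μσ²/2 recovers it to first order, mirroring the trade described in the abstract.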
Singh, J; Singh, N N
1985-07-01
An alternating treatments design was used to measure the differential effects of two error-correction procedures (word supply and word analysis) and a no-training control condition on the number of oral-reading errors made by four moderately mentally retarded children. Results showed that when compared to the no-training control condition, both error-correction procedures greatly reduced the number of oral-reading errors of all subjects. The word-analysis method, however, was significantly more effective than word supply. In terms of collateral behavior, the number of self-corrections of errors increased under both intervention conditions when compared to the baseline and no-training control conditions. For two subjects there was no difference in the rate of self-corrections under word analysis and word supply, but for the other two, a greater rate was achieved under word analysis.
Testing corrections for paleomagnetic inclination error in sedimentary rocks: A comparative approach
Tauxe, Lisa; Kodama, Kenneth P.; Kent, Dennis V.
2008-08-01
Paleomagnetic inclinations in sedimentary formations are frequently suspected of being too shallow. Recognition and correction of shallow bias is therefore critical for paleogeographic reconstructions. This paper tests the reliability of the elongation/inclination (E/I) correction method in several ways. First, we consider the E/I trends predicted by various PSV models. We explored the role of sample size on the reliability of the E/I estimates and found that for data sets smaller than ~100-150, the results were less reliable. The Giant Gaussian Process-type paleosecular variation models were all constrained by paleomagnetic data from lava flows of the last five million years. Therefore, to test whether the method can be used in more ancient times, we compare model predictions of E/I trends with observations from five Large Igneous Provinces since the early Cretaceous (Yemen, Kerguelen, Faroe Islands, Deccan and Paraná basalts). All data are consistent at the 95% level of confidence with the E/I trends predicted by the paleosecular variation models. The Paraná data set also illustrated the effect of unrecognized tilting and of combining data over a large latitudinal spread on the E/I estimates, underscoring the necessity of adhering to the two principal assumptions of the method. We then discuss the geological implications of various applications of the E/I method. In general, the E/I corrected data are more consistent with data from contemporaneous lavas, with predictions from well-constrained synthetic apparent polar wander paths, and with other geological constraints. Finally, we compare the E/I corrections with corrections from an entirely different method of inclination correction: the anisotropy of remanence method of Jackson et al. [Jackson, M.J., Banerjee, S.K., Marvin, J.A., Lu, R., Gruber, W., 1991. Detrital remanence, inclination errors and anhysteretic remanence anisotropy: quantitative model and experimental results. Geophys. J. Int. 104, 95
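Inclination-shallowing corrections of the kind compared here rest on the classic flattening relation tan(I_obs) = f·tan(I_true). A minimal sketch of inverting that relation follows; the flattening factor value is hypothetical, and this is the generic relation rather than either of the specific E/I or anisotropy procedures in the paper.

```python
import numpy as np

def unflatten(inc_obs_deg, f):
    """Invert the flattening relation tan(I_obs) = f * tan(I_true) to
    recover the true inclination from a shallowed sedimentary record."""
    return np.degrees(np.arctan(np.tan(np.radians(inc_obs_deg)) / f))

# Example: a measured inclination of 30 deg with an assumed flattening
# factor f = 0.5 unflattens to about 49.1 deg.
print(round(unflatten(30.0, 0.5), 1))
```

The E/I method estimates the equivalent of f statistically from directional distributions, while the anisotropy method measures it from the rock fabric.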
The algebra of non-local charges in non-linear sigma models
Abdalla, Elcio; Brunelli, J C; Zadra, Ayrton
1994-01-01
We obtain the exact Dirac algebra obeyed by the conserved non-local charges in bosonic non-linear sigma models. Part of the computation is specialized for the symmetry group $O(N)$. As it turns out, the algebra corresponds to a cubic deformation of the Kac-Moody algebra. The non-linear terms are computed in closed form. In each Dirac bracket we only find highest-order terms (as explained in the paper), defining a saturated algebra. We generalize the results to the presence of a Wess-Zumino term. The algebra is very similar to the previous one, now containing a calculable correction one order lower.
Fast simulation of non-linear pulsed ultrasound fields using an angular spectrum approach
DEFF Research Database (Denmark)
Du, Yigang; Jensen, Jørgen Arendt
2013-01-01
A fast non-linear pulsed ultrasound field simulation is presented. It is implemented based on an angular spectrum approach (ASA), which analytically solves the non-linear wave equation. The ASA solution to the Westervelt equation is derived in detail. The calculation speed is significantly increased compared to a numerical solution using an operator splitting method (OSM). The ASA has been modified and extended to pulsed non-linear ultrasound fields in combination with Field II, where any array transducer with arbitrary geometry, excitation, focusing and apodization can be simulated … with a center frequency of 5 MHz. The speed is increased by approximately a factor of 140, and the calculation time is 12 min on a standard PC when simulating the second harmonic pulse at the focal point. For the second harmonic point spread function, the full width error is 1.5% at 6 dB and 6.4% at 12 dB.
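The linear, monochromatic core of an angular spectrum step can be sketched as an FFT-based plane-wave decomposition; the non-linear Westervelt extension in the paper adds harmonic source terms that are not shown here, and the grid spacing, distance and sound speed below are assumptions.

```python
import numpy as np

def asa_propagate(p0, dx, dz, f0, c=1540.0):
    """Propagate a monochromatic pressure field p0(x) over a distance dz
    with a linear angular spectrum step: FFT to spatial frequencies,
    multiply by the plane-wave propagator exp(i*kz*dz), and invert."""
    kx = 2 * np.pi * np.fft.fftfreq(p0.size, dx)
    k = 2 * np.pi * f0 / c
    kz = np.sqrt((k ** 2 - kx ** 2).astype(complex))  # evanescent parts decay
    return np.fft.ifft(np.fft.fft(p0) * np.exp(1j * kz * dz))

# Sanity check: a uniform (plane-wave) field only acquires phase,
# so its amplitude is preserved by the propagation step.
p0 = np.ones(128, dtype=complex)
out = asa_propagate(p0, dx=1e-4, dz=0.01, f0=5e6)
```

Because each step is a single FFT/multiply/IFFT, propagation over many planes is far cheaper than time-domain operator splitting, which is the speed advantage the abstract reports.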
Single Image Super-Resolution by Non-Linear Sparse Representation and Support Vector Regression
Directory of Open Access Journals (Sweden)
Yungang Zhang
2017-02-01
Full Text Available Sparse representations are widely used tools in image super-resolution (SR) tasks. In sparsity-based SR methods, linear sparse representations are often used for image description. However, the non-linear data distributions in images might not be well represented by linear sparse models. Moreover, many sparsity-based SR methods require the image patch self-similarity assumption; however, the assumption may not always hold. In this paper, we propose a novel method for single image super-resolution (SISR). Unlike most prior sparsity-based SR methods, the proposed method uses non-linear sparse representation to enhance the description of the non-linear information in images, and the proposed framework does not need to assume the self-similarity of image patches. Based on the minimum reconstruction errors, support vector regression (SVR) is applied to predict the SR image. The proposed method was evaluated on various benchmark images, and promising results were obtained.
Applications of Kalman filters based on non-linear functions to numerical weather predictions
Directory of Open Access Journals (Sweden)
G. Galanis
2006-10-01
Full Text Available This paper investigates the use of non-linear functions in classical Kalman filter algorithms for the improvement of regional weather forecasts. The main aim is the implementation of non-linear polynomial mappings in a usual linear Kalman filter in order to better simulate non-linear problems in numerical weather prediction. In addition, the optimal order of the polynomials applied in such a filter is identified. This work is based on observations and corresponding numerical weather predictions of two meteorological parameters characterized by essential differences in their evolution in time, namely air temperature and wind speed. It is shown that in both cases a polynomial of low order is adequate for eliminating any systematic error, while higher-order functions lead to instabilities in the filtered results while contributing trivially to the sensitivity of the filter. It is further demonstrated that the filter is independent of the time period and the geographic location of application.
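One way to realize "polynomial mappings in a usual linear Kalman filter" is to take the polynomial coefficients as the state and the powers of the model forecast as the observation vector, so the filter tracks a slowly varying bias-correction polynomial. The sketch below makes that assumption explicit; the synthetic temperature series, noise levels and order are all hypothetical.

```python
import numpy as np

def polynomial_kalman(forecasts, observations, order=2, q=1e-4, r=1.0):
    """Kalman filter whose state is the coefficient vector of a low-order
    polynomial y = sum_i a_i * m**i mapping model forecasts m to
    observations y; the filtered polynomial removes systematic error."""
    n = order + 1
    a = np.zeros(n)                   # polynomial coefficients (state)
    P = np.eye(n)                     # state covariance
    Q = q * np.eye(n)                 # process noise (lets the bias drift)
    corrected = []
    for m, y in zip(forecasts, observations):
        H = np.array([m ** i for i in range(n)])   # observation vector
        P = P + Q
        S = H @ P @ H + r
        K = P @ H / S                 # Kalman gain
        a = a + K * (y - H @ a)
        P = P - np.outer(K, H) @ P
        corrected.append(H @ a)       # bias-corrected forecast
    return np.array(corrected), a

# Synthetic air temperatures: a forecast with a mild non-linear bias.
rng = np.random.default_rng(2)
truth = 15 + 8 * np.sin(np.linspace(0, 20, 400))
fc = 1.5 + 0.9 * truth + 0.004 * truth ** 2 + rng.normal(0, 0.3, 400)
corr, coeffs = polynomial_kalman(fc, truth)

rmse_raw = np.sqrt(np.mean((fc[200:] - truth[200:]) ** 2))
rmse_corr = np.sqrt(np.mean((corr[200:] - truth[200:]) ** 2))
```

After the filter has converged, the corrected series tracks the observations much more closely than the raw forecast, which is the systematic-error elimination the abstract reports.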
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation, and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramér-Rao lower bound on range accuracy is derived to justify that a longer code length brings better range accuracy. Combining the SNR model and the CRLB model shows that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramér-Rao lower bound on range accuracy is shown to converge to the previously published theories, and the Gaussian range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been proven that the depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.
Identifying and Correcting Timing Errors at Seismic Stations in and around Iran
International Nuclear Information System (INIS)
Syracuse, Ellen Marie; Phillips, William Scott; Maceira, Monica; Begnaud, Michael Lee
2017-01-01
A fundamental component of seismic research is the use of phase arrival times, which are central to event location, Earth model development, and phase identification, as well as derived products. Hence, the accuracy of arrival times is crucial. However, errors in the timing of seismic waveforms and the arrival times based on them may go unidentified by the end user, particularly when seismic data are shared between different organizations. Here, we present a method used to analyze travel-time residuals for stations in and around Iran to identify time periods that are likely to contain station timing problems. For the 14 stations with the strongest evidence of timing errors lasting one month or longer, timing corrections are proposed to address the problematic time periods. Finally, two additional stations are identified with incorrect locations in the International Registry of Seismograph Stations, and one is found to have erroneously reported arrival times in 2011.
Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction
DEFF Research Database (Denmark)
Rasmussen, Anders; Yankov, Metodi Plamenov; Berger, Michael Stübert
2014-01-01
In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on-the-fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity is available. It is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. This facilitates adjusting the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often, based on the current traffic demand and the bit error rate performance of the links through the network.
Tackling non-linearities with the effective field theory of dark energy and modified gravity
Frusciante, Noemi; Papadomanolakis, Georgios
2017-12-01
We present the extension of the effective field theory framework to mildly non-linear scales. The effective field theory approach has been successfully applied to the late-time cosmic acceleration phenomenon and has been shown to be a powerful method for obtaining predictions about cosmological observables on linear scales. However, mildly non-linear scales need to be consistently considered when testing gravity theories, because a large part of the data comes from those scales. Thus, non-linear corrections to predictions on observables coming from the linear analysis can help in discriminating among different gravity theories. We proceed firstly by identifying the necessary operators which need to be included in the effective field theory Lagrangian in order to go beyond linear order in perturbations, and then we construct the corresponding non-linear action. Moreover, we present the complete recipe to map any single-field dark energy and modified gravity model into the non-linear effective field theory framework by considering a general action in the Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we proceed to map the beyond-Horndeski theory and low-energy Hořava gravity into the effective field theory formalism. As a final step we derive the 4th-order action in terms of the curvature perturbation. This allows us to identify the non-linear contributions coming from the linear-order perturbations, which at the next order act like source terms. Moreover, we confirm that the stability requirements, ensuring the positivity of the kinetic term and the speed of propagation for the scalar mode, are automatically satisfied once the viability of the theory is demanded at the linear level. The approach we present here will allow the construction, in a model-independent way, of all the relevant predictions for observables at mildly non-linear scales.
Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat
2014-04-01
Implantable and ambulatory measurement of physiological signals such as bio-impedance using miniature biomedical devices needs a careful tradeoff between the limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using a square/sinusoidal clock. For each case, the error in determining the pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show good average accuracy for single-pole and two-pole RC networks, and this is confirmed by measurements using ideal components for a single-pole model and by readings from a saline phantom solution (primarily resistive). A Figure of Merit is derived based on the ability to accurately resolve multiple poles in an unknown impedance with minimal measurement points per decade, for a given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving the poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal-monitoring ICs.
Isotopic effects on non-linearity, molecular radius and intermolecular ...
Indian Academy of Sciences (India)
study the isotopic effects on the non-linearity parameter and the physicochemical properties of the liquids, which in turn has been used to study their effect on the intermolecular interactions produced thereof. Keywords. Non-linearity parameter; molecular radius; free length; intermolecular interactions. PACS Nos 43.25.
Non-linear wave packet dynamics of coherent states
Indian Academy of Sciences (India)
We have compared the non-linear wave packet dynamics of coherent states of various symmetry groups and found that certain generic features of non-linear evolution are present in each case. Thus the initial coherent structures are quickly destroyed but are followed by Schrödinger cat formation and revival. We also report ...
Identification of Non-Linear Structures using Recurrent Neural Networks
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.
1995-01-01
Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.
Non-Linear Asset Valuation on Markets with Frictions
De Waegenaere, A.M.B.; Kast, R.; Lapied, A.
1996-01-01
This paper provides a non-linear pricing rule for the valuation of assets on financial markets with intermediaries. The non-linearity arises from the fact that dealers charge a price for their intermediation between buyer and seller. The pricing rule we propose is an alternative for the well-known
Non-linearity aspects in the design of submarine pipelines
Fernández, M.L.
1981-01-01
An arbitrary attempt has been made to classify and discuss some non-linearity aspects related to the design, construction and operation of submarine pipelines. Non-linearities usually interrelate and form part of a comprehensive design, making it difficult to quantify their individual influence or
Linearity and Non-linearity of Photorefractive effect in Materials ...
African Journals Online (AJOL)
In this paper we have studied the linearity and non-linearity of the photorefractive effect in materials using the band transport model. For low light-beam intensities the change in the refractive index is proportional to the electric field for linear optics, while for non-linear optics the change in refractive index is directly proportional ...
On the design of approximate non-linear parametric controllers
Savaresi, Sergio M.; Nijmeijer, Henk; Guardabassi, Guido O.
2000-01-01
This paper focuses on the design of non-linear parametric controllers, around a nominal input/output trajectory of a discrete-time non-linear system. The main result provided herein is a relationship between the tracking performance of the closed-loop control system in the neighbourhood of a nominal
Algorithms for non-linear M-estimation
DEFF Research Database (Denmark)
Madsen, Kaj; Edlund, O; Ekblom, H
1997-01-01
In non-linear regression, the least squares method is most often used. Since this estimator is highly sensitive to outliers in the data, alternatives have become increasingly popular during the last decades. We present algorithms for non-linear M-estimation. A trust region approach is used, where...
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
International Nuclear Information System (INIS)
Gauvain, J.; Hoffmann, A.; Jeandidier, C.; Livolant, M.
1978-01-01
This study presents the tests of a reinforced concrete beam conducted by the Department of Mechanical and Thermal Studies at the Centre d'Etudes Nucleaires de Saclay, France. The actual behavior of nuclear power plant buildings subjected to seismic loads is generally non-linear, even for moderate seismic levels. The non-linearity is especially important for reinforced concrete buildings. To estimate the safety factors when the building is designed by standard methods, accurate non-linear calculations are necessary. For such calculations, one of the most difficult points is to define a correct model for the behavior of a reinforced concrete beam subjected to reversed loads. For that purpose, static and dynamic experimental tests on a shaking table have been carried out, and a reasonably accurate model has been established and checked against the test results [fr
Non-linear dielectric monitoring of biological suspensions
International Nuclear Information System (INIS)
Treo, E F; Felice, C J
2007-01-01
Non-linear dielectric spectroscopy as a tool for in situ monitoring of enzymes assumes a non-linear behavior of the sample when a sinusoidal voltage is applied to it. Although many attempts have been made to improve the original experiments, all of them have had limited success. In this paper we present upgrades made to a previously developed non-linear dielectric spectrometer and the results obtained when using different cells. We focused on the electrode surface, characterizing the grinding and polishing procedure. We found that the biological medium does not behave as expected, and that the non-linear response is generated in the electrode-electrolyte interface. The electrochemistry of this interface can unpredictably bias the measured non-linear response
Non-linear seismic analysis of structures coupled with fluid
International Nuclear Information System (INIS)
Descleve, P.; Derom, P.; Dubois, J.
1983-01-01
This paper presents a method to calculate non-linear structural behaviour under horizontal and vertical seismic excitation, making possible the full non-linear seismic analysis of a reactor vessel. A pseudo-force method is used to introduce non-linear effects, and the problem is solved by superposition. Two steps are used in the method: - Linear calculation of the complete model. - Non-linear analysis of thin shell elements and calculation of seismically induced pressure originating from linear and non-linear effects, including permanent loads and thermal stresses. Basic aspects of the mathematical formulation are developed. It has been applied to an axi-symmetric shell element using a Fourier series solution. For the fluid-interaction effect, a comparison is made with a dynamic test. In an example of application, the displacement and pressure time histories are given. (orig./GL)
Dyson-Schwinger equations for the non-linear σ-model
International Nuclear Information System (INIS)
Drouffe, J.M.; Flyvbjerg, H.
1989-08-01
Dyson-Schwinger equations for the O(N)-symmetric non-linear σ-model are derived. They are polynomials in N, hence 1/N-expanded ab initio. A finite, closed set of equations is obtained by keeping only the leading term and the first correction term in this 1/N-series. These equations are solved numerically in two dimensions on square lattices measuring 50×50, 100×100, 200×200, and 400×400. They are also solved analytically at strong coupling and at weak coupling in a finite volume. In these two limits the solution is asymptotically identical to the exact strong- and weak-coupling series through the first three terms. Between these two limits, results for the magnetic susceptibility and the mass gap are identical to the Monte Carlo results available for N=3 and N=4 within a uniform systematic error of O(1/N³), i.e. the results seem good to O(1/N²), though obtained from equations that are exact only to O(1/N). This is understood by seeing the results as summed infinite subseries of the 1/N-series for the exact susceptibility and mass gap. We conclude that the kind of 1/N-expansion presented here converges as well as one might ever hope for, even for N as small as 3. (orig.)
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
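A rate-based caricature of the error-feedback idea described above (weight changes driven by presynaptic activity times a fed-back error) can be sketched as follows. The linear target mapping and all constants are illustrative assumptions, not the spiking FOLLOW network itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target mapping the network should follow (linear for this sketch)
n_in, n_steps, eta = 5, 2000, 0.05
w_true = rng.normal(size=n_in)
w = np.zeros(n_in)                 # learned readout weights

errors = []
for _ in range(n_steps):
    x = rng.normal(size=n_in)      # presynaptic activity
    y_target = w_true @ x          # desired output
    y = w @ x                      # network output
    e = y_target - y               # error fed back to the learning synapses
    w += eta * e * x               # local rule: presynaptic activity x error
    errors.append(abs(e))

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
```

The point of the sketch is only that a rule local in presynaptic activity and projected error drives the output error toward zero; the paper's contribution is doing this stably with heterogeneous spiking neurons and recurrent dynamics.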
Testing correction for paleomagnetic inclination error in sedimentary rocks: a comparative approach
Tauxe, L.; Kodama, K. P.; Kent, D. V.
2008-05-01
Paleomagnetic inclinations in sedimentary formations are frequently suspected of being too shallow. Recognition and correction of shallow bias is therefore critical for paleogeographical reconstructions. The elongation/inclination (E/I) correction method of Tauxe and Kent (2004) relies on the twin assumptions that inclination flattening follows the empirical sedimentary flattening formula and that the distribution of paleomagnetic directions can be predicted from a paleosecular variation (PSV) model. We test the reliability of the E/I correction method in several ways. First we consider the E/I trends predicted by various PSV models. The Giant Gaussian Process-type paleosecular variation models were all constrained by paleomagnetic data from lava flows of the last five million years. Therefore, to test whether the method can be used in more ancient times, we compare model predictions of E/I trends with observations from four Large Igneous Provinces since the Jurassic (Yemen, Kerguelen, Faroe Islands, and Deccan basalts). All data are consistent at the 95% level of confidence with the elongation/inclination trends predicted by the paleosecular variation models. We then discuss the geological implications of various applications of the E/I method. In general the E/I-corrected data are more consistent with data from contemporaneous lavas, with predictions from well-constrained synthetic apparent polar wander paths, and with other geological constraints. Finally, we compare the E/I corrections with corrections from an entirely different method of inclination correction: the anisotropy of remanence method of Jackson et al. (1991), which relies on measurement of remanence and particle anisotropies of the sediments. In the two cases where a direct comparison can be made, the two methods give corrections that are consistent within error. In summary, it appears that the elongation/inclination method for recognizing and correcting the effects of
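The empirical flattening relation underlying such corrections is commonly written tan(I_obs) = f·tan(I_true). A minimal sketch of applying and inverting it, with an illustrative flattening factor f = 0.6, is:

```python
import math

def flatten(inc_deg, f):
    """Sedimentary inclination flattening: tan(I_obs) = f * tan(I_true)."""
    return math.degrees(math.atan(f * math.tan(math.radians(inc_deg))))

def unflatten(inc_obs_deg, f):
    """Invert the flattening relation for a known flattening factor f."""
    return math.degrees(math.atan(math.tan(math.radians(inc_obs_deg)) / f))

# Illustrative values: a 55-degree field inclination, flattening factor 0.6
i_true = 55.0
i_obs = flatten(i_true, 0.6)     # shallowed inclination recorded by the sediment
i_corr = unflatten(i_obs, 0.6)   # corrected back with the known f
```

In the E/I method f is of course not known a priori; it is the value at which the elongation and inclination of the unflattened directions jointly match the PSV-model prediction.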
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
Liebovitch, Larry
1998-03-01
evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
Non-linear and signal energy optimal asymptotic filter design
Directory of Open Access Journals (Sweden)
Josef Hrusak
2003-10-01
The paper studies some connections between the main results of the well-known Wiener-Kalman-Bucy stochastic approach to filtering problems, based mainly on linear stochastic estimation theory and emphasizing the optimality aspects of the achieved results, and classical deterministic frequency-domain linear filters such as Chebyshev, Butterworth, Bessel, etc. A new non-stochastic but not necessarily deterministic (possibly non-linear) alternative approach called asymptotic filtering, based mainly on the concepts of signal power, signal energy and a system equivalence relation, plays an important role in the presentation. Filtering error invariance and convergence aspects are emphasized in the approach. It is shown that introducing the signal power as the quantitative measure of energy dissipation makes it possible to achieve reasonable results from the optimality point of view as well. The property of structural energy dissipativeness is one of the most important and fundamental features of the resulting filters; therefore, it is natural to call them asymptotic filters. The notion of the asymptotic filter is used throughout the paper as a proper tool to unify stochastic and non-stochastic, linear and non-linear approaches to signal filtering.
School-based approaches to the correction of refractive error in children.
Sharma, Abhishek; Congdon, Nathan; Patel, Mehul; Gilbert, Clare
2012-01-01
The World Health Organization estimates that 13 million children aged 5-15 years worldwide are visually impaired from uncorrected refractive error. School vision screening programs can identify and treat or refer children with refractive error. We concentrate on the findings of various screening studies and attempt to identify key factors in the success and sustainability of such programs in the developing world. We reviewed original and review articles describing children's vision and refractive error screening programs published in English and listed in PubMed, Medline OVID, Google Scholar, and Oxford University Electronic Resources databases. Data were abstracted on study objective, design, setting, participants, and outcomes, including accuracy of screening, quality of refractive services, barriers to uptake, impact on quality of life, and cost-effectiveness of programs. Inadequately corrected refractive error is an important global cause of visual impairment in childhood. School-based vision screening carried out by teachers and other ancillary personnel may be an effective means of detecting affected children and improving their visual function with spectacles. The need for services and potential impact of school-based programs varies widely between areas, depending on prevalence of refractive error and competing conditions and rates of school attendance. Barriers to acceptance of services include the cost and quality of available refractive care and mistaken beliefs that glasses will harm children's eyes. Further research is needed in areas such as the cost-effectiveness of different screening approaches and impact of education to promote acceptance of spectacle-wear. School vision programs should be integrated into comprehensive efforts to promote healthy children and their families. Copyright © 2012 Elsevier Inc. All rights reserved.
Correction Model of BeiDou Code Systematic Multipath Errors and Its Impacts on Single-frequency PPP
Directory of Open Access Journals (Sweden)
WANG Jie
2017-07-01
There are systematic multipath errors in BeiDou code measurements, which range from several decimeters to more than 1 meter. They can be divided into two categories: systematic variations in IGSO/MEO code measurements and in GEO code measurements. In this contribution, a methodology for correcting BeiDou GEO code multipath is proposed based on a Kalman filter algorithm. The standard deviation of the GEO multipath (MP) series decreases by about 10%~16% after correction. Since code measurements carry a large weight in single-frequency PPP, code systematic multipath errors have an impact on single-frequency PPP; our analysis indicates that these systematic errors cause a bias of about 1 m. We then evaluated the improvement in single-frequency PPP accuracy after code multipath correction. The systematic errors of GEO code measurements are corrected by applying our proposed Kalman filter method, and those of IGSO and MEO code measurements by applying the elevation-dependent model proposed by Wanninger and Beer. Ten days of observations from four MGEX (Multi-GNSS Experiment) stations are processed. The results indicate that single-frequency PPP accuracy can be improved remarkably by applying code multipath correction: the accuracy in the up direction improves by 65% after IGSO and MEO code multipath correction, and by a further 15% after GEO code multipath correction.
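A scalar Kalman filter of the kind alluded to above can be sketched as follows. The random-walk state model, the noise settings and the synthetic GEO code bias are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.25):
    """Scalar Kalman filter with a random-walk state model.
    z: noisy series (m); q: process noise var; r: measurement noise var."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p = p + q                  # predict
        k = p / (p + r)            # Kalman gain
        x = x + k * (zk - x)       # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
n = 500
truth = 0.8 * np.ones(n)           # hypothetical slowly varying code bias (m)
meas = truth + rng.normal(0.0, 0.5, n)
est = kalman_smooth(meas)
rms_raw = np.sqrt(np.mean((meas - truth) ** 2))
rms_est = np.sqrt(np.mean((est - truth) ** 2))
```

The filtered series tracks the slowly varying bias with far less scatter than the raw code multipath combination, which is the effect the standard-deviation reduction quoted above measures.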
Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.
2009-04-01
In the current study, half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is analyzed, sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half-year most of the rainfall has a stratiform character. Though radar and rain gauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations the differences between the two measuring devices become even larger due to the occurrence of a bright band (the layer where ice particles start to melt, intensifying the radar reflectivity measurement). Under these circumstances the radar overestimates the amount of precipitation, and because bright bands in the Ardennes occur within 1000 m of the surface, their detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and rain gauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km² Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar
Two-Step Single Slope/SAR ADC with Error Correction for CMOS Image Sensor
Directory of Open Access Journals (Sweden)
Fang Tang
2014-01-01
Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single-slope ADC, leading to high power consumption and a large chip area. This paper presents an 11-bit two-step single-slope/successive-approximation-register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single-slope ADC generates a 3-bit code and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single-slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single-slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy-efficiency figure of merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip-area efficiency is 84 kμm²·cycles/sample.
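The role of the redundant bit can be illustrated with an idealized digital recombination. The stage widths follow the abstract (3-bit coarse plus 1 redundant bit, 8-bit fine), but the coarse-error model and the plain-addition recombination are assumptions for the sketch, not the chip's exact algorithm:

```python
def two_step_adc(v, coarse_error=0):
    """Idealized 11-bit two-step conversion with one redundant bit.
    v: input in LSB units, 0 <= v < 2048. The coarse stage resolves steps
    of 256 LSB and may be off by -1 step (modeling comparator noise); the
    fine stage spans 512 LSB (8 bits + 1 redundant bit), so that error is
    absorbed when the two codes are recombined by simple addition."""
    coarse = min(max(v // 256 + coarse_error, 0), 7)
    residual = v - coarse * 256            # seen by the fine SAR stage
    assert 0 <= residual < 512             # within the redundant fine range
    fine = residual                        # ideal 9-bit SAR result
    return coarse * 256 + fine

codes_ok = [two_step_adc(v) for v in range(0, 2048, 17)]
codes_err = [two_step_adc(v, coarse_error=-1) for v in range(256, 2048, 17)]
```

Because the fine range overlaps the coarse steps, a coarse decision that is one step low still yields the correct combined code, which is why the first stage can tolerate the quoted quantization noise.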
Developing optimal non-linear scoring function for protein design.
Hu, Changyu; Li, Xiang; Liang, Jie
2004-11-22
Motivation: Protein design aims to identify sequences compatible with a given protein fold but incompatible with any alternative folds. To select the correct sequences and to guide the search process, a design scoring function is critically important. Such a scoring function should be able to characterize the global fitness landscape of many proteins simultaneously. To find optimal design scoring functions, we introduce two geometric views and propose a formulation using a mixture of non-linear Gaussian kernel functions. We aim to solve a simplified protein sequence design problem. Our goal is to distinguish each native sequence for a major portion of representative protein structures from a large number of alternative decoy sequences, each a fragment from proteins of different folds. Our scoring function discriminates perfectly a set of 440 native proteins from 14 million sequence decoys. We show that no linear scoring function can succeed in this task. In a blind test of unrelated proteins, our scoring function misclassifies only 13 native proteins out of 194. This compares favorably with the roughly three to four times more misclassifications obtained when optimal linear functions reported in the literature are used. We also discuss how to develop a protein folding scoring function.
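The claim that no linear scoring function suffices while a mixture of non-linear Gaussian kernels does is mirrored in miniature by the XOR problem, which no linear discriminant can separate. The kernel-perceptron sketch below is illustrative only and is not the authors' optimization procedure:

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two points."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

# XOR: the classic data set no linear scoring function can separate
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, 1, 1, -1]

alpha = [0.0] * 4                    # dual weights of a kernel perceptron
for _ in range(50):                  # a few passes suffice on 4 points
    for i, (p, y) in enumerate(zip(pts, labels)):
        score = sum(a * yl * rbf(q, p) for a, yl, q in zip(alpha, labels, pts))
        if y * score <= 0:           # misclassified: strengthen this point
            alpha[i] += 1.0

def predict(p):
    s = sum(a * yl * rbf(q, p) for a, yl, q in zip(alpha, labels, pts))
    return 1 if s > 0 else -1

preds = [predict(p) for p in pts]
```

The decision function is a weighted sum of Gaussian kernels centered on the training points, i.e. exactly the kind of non-linear mixture the abstract advocates, and it classifies all four XOR points correctly.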
LESAFFRE, Emmanuel; Mwalili, Samuel M.; Declerck, Dominique
2005-01-01
We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model, also taking into account the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in Flemish children (Belgium) who were 7 years old. Since the measurement error is on the response, the factor 'examiner' could be included in the regression mode...
International Nuclear Information System (INIS)
Glasure, Yong U.; Lee, Aie-Rie
1998-01-01
This paper examines the causality issue between energy consumption and GDP for South Korea and Singapore, with the aid of cointegration and error-correction modeling. Results of the cointegration and error-correction models indicate bidirectional causality between GDP and energy consumption for both South Korea and Singapore. However, results of the standard Granger causality tests show no causal relationship between GDP and energy consumption for South Korea and a unidirectional causal relationship from energy consumption to GDP for Singapore
Frequency-domain full-waveform inversion with non-linear descent directions
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher-order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s₀ is, in our scheme, proportional to at most (Δs/s₀)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s₀)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Directory of Open Access Journals (Sweden)
Tianzhou Chen
2013-09-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes as quickly as possible. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.
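For independent Gaussian readings, the maximum-likelihood fusion invoked above reduces to an inverse-variance weighted mean. The sensor values below are hypothetical, and this sketch ignores the inter-sensor correlations the NEC algorithm additionally exploits:

```python
def fuse(readings, variances):
    """Maximum-likelihood fusion of independent Gaussian readings:
    inverse-variance weighted mean, with fused variance 1 / sum(1/var_i)."""
    w = [1.0 / v for v in variances]
    est = sum(wi * r for wi, r in zip(w, readings)) / sum(w)
    var = 1.0 / sum(w)
    return est, var

# Hypothetical node plus three neighbors ranging the same ~2 m target
readings = [2.06, 1.97, 2.02, 1.99]          # meters
variances = [0.04, 0.01, 0.02, 0.01]         # per-sensor noise variances
est, var = fuse(readings, variances)
```

The fused variance is strictly smaller than that of the best single sensor, which is the basic reason fusing neighbor information improves ranging accuracy.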
A correction for emittance-measurement errors caused by finite slit and collector widths
International Nuclear Information System (INIS)
Connolly, R.C.
1992-01-01
One method of measuring the transverse phase-space distribution of a particle beam is to intercept the beam with a slit and measure the angular distribution of the beam passing through the slit using a parallel-strip collector. Together the finite widths of the slit and each collector strip form an acceptance window in phase space whose size and orientation are determined by the slit width, the strip width, and the slit-collector distance. If a beam is measured using a detector with a finite-size phase-space window, the measured distribution is different from the true distribution. The calculated emittance is larger than the true emittance, and the error depends both on the dimensions of the detector and on the Courant-Snyder parameters of the beam. Specifically, the error gets larger as the beam drifts farther from a waist. This can be important for measurements made on high-brightness beams, since power density considerations require that the beam be intercepted far from a waist. In this paper we calculate the measurement error and we show how the calculated emittance and Courant-Snyder parameters can be corrected for the effects of finite sizes of slit and collector. (Author) 5 figs., 3 refs
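A simplified version of the correction can be sketched by subtracting the second moments of the uniform acceptance windows (variance w²/12 for a window of full width w, with the collector-strip width mapping to an angular width w/L) from the measured beam moments. This is an illustrative sketch only: it ignores the orientation of the acceptance window in phase space and its dependence on the Courant-Snyder parameters, which the paper treats:

```python
import math

def rms_emittance(x2, xp2, xxp):
    """RMS emittance from second moments: sqrt(<x^2><x'^2> - <xx'>^2)."""
    return math.sqrt(x2 * xp2 - xxp ** 2)

def window_corrected(x2, xp2, xxp, slit_w, strip_w, drift_l):
    """Subtract the uniform-window variances from the measured moments."""
    return rms_emittance(x2 - slit_w ** 2 / 12.0,
                         xp2 - (strip_w / drift_l) ** 2 / 12.0,
                         xxp)

# Synthetic beam at a waist: sigma_x = 1 mm, sigma_x' = 1 mrad, uncorrelated;
# assumed geometry: 0.2 mm slit, 2 mm strips, 1 m slit-collector drift.
slit_w, strip_w, drift_l = 2e-4, 2e-3, 1.0
x2_meas = 1.0e-6 + slit_w ** 2 / 12.0             # window inflates <x^2>
xp2_meas = 1.0e-6 + (strip_w / drift_l) ** 2 / 12.0
eps_meas = rms_emittance(x2_meas, xp2_meas, 0.0)
eps_corr = window_corrected(x2_meas, xp2_meas, 0.0, slit_w, strip_w, drift_l)
```

As the abstract states, the uncorrected emittance comes out larger than the true value; here the quadrature subtraction recovers the true 1 mm·mrad.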
Directory of Open Access Journals (Sweden)
Xingming Sun
2015-07-01
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military applications, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network, relying on a sensing node, has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui
2015-07-24
Air temperature (AT) is an extremely vital factor in meteorology, agriculture, military, etc., being used for the prediction of weather disasters, such as drought, flood, frost, etc. Many efforts have been made to monitor the temperature of the atmosphere, like automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed within a large spatial density. A novel method named the meteorology wireless sensor network relying on a sensing node has been proposed for the purpose of reducing the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT, taking SR into consideration. In this work, we analyzed all of the collected data of AT and SR in May 2014 and found the numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE according to real-time SR and to correct the error of AT in other months.
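The described SR-based correction amounts to fitting ATE against SR on a calibration period and subtracting the fitted bias elsewhere. The linear bias model and all numbers below are illustrative assumptions; the paper's fitted correspondence is not necessarily linear:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration month: ATE grows roughly linearly with SR
sr_may = rng.uniform(0.0, 1000.0, 200)                       # SR, W/m^2
ate_may = 0.004 * sr_may + 0.3 + rng.normal(0.0, 0.1, 200)   # ATE, deg C

slope, intercept = np.polyfit(sr_may, ate_may, 1)            # fit ATE ~ a*SR + b

# Apply the fitted relation to correct another month's readings
sr_june = rng.uniform(0.0, 1000.0, 100)
at_true = 25.0 + rng.normal(0.0, 1.0, 100)
at_sensed = at_true + 0.004 * sr_june + 0.3                  # same assumed bias
at_corr = at_sensed - (slope * sr_june + intercept)

rmse_raw = np.sqrt(np.mean((at_sensed - at_true) ** 2))
rmse_corr = np.sqrt(np.mean((at_corr - at_true) ** 2))
```

Subtracting the SR-predicted error removes the radiation-induced bias while leaving the sensor's intrinsic noise, so the residual RMSE drops sharply.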
Filtering Non-Linear Transfer Functions on Surfaces.
Heitz, Eric; Nowrouzezahrai, Derek; Poulin, Pierre; Neyret, Fabrice
2014-07-01
Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry is a common strategy used to enhance visual detail. The simplicity of these techniques and their ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer-function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color-mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physically based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. Our results match ground truth and our solution is well suited to real-time applications, requires only a few
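The core observation, that a non-linear transfer function does not commute with averaging over a pixel footprint, can be sketched with Gaussian footprint statistics. The step color map (a "snow line") and all parameters below are illustrative:

```python
import math

def color_map(h):
    """A deliberately non-linear transfer function: a sharp snow line."""
    return 1.0 if h > 0.5 else 0.0

def filtered_value(mu, sigma, n=2001):
    """Expected color over a footprint whose underlying attribute is modeled
    as Gaussian N(mu, sigma^2): numerically integrate cmap(x) * pdf(x)."""
    lo, hi = mu - 6 * sigma, mu + 6 * sigma
    dx = (hi - lo) / (n - 1)
    acc = 0.0
    for i in range(n):
        x = lo + i * dx
        pdf = (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
               / (sigma * math.sqrt(2 * math.pi)))
        acc += color_map(x) * pdf * dx
    return acc

naive = color_map(0.45)             # transfer function of the mean attribute
correct = filtered_value(0.45, 0.2) # mean of the transfer function
```

Naively applying the map to the footprint's mean height gives pure "rock" (0.0), while the properly filtered value is a rock/snow blend of about 0.4; the paper's contribution is making this kind of expectation cheap to evaluate, including view- and light-dependent effects.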
Beam-Based Nonlinear Optics Corrections in Colliders
Pilat, Fulvia Caterina; Malitsky, Nikolay; Ptitsyn, Vadim
2005-01-01
A method has been developed to measure and correct operationally the non-linear effects of the final focusing magnets in colliders, which gives access to the effects of multi-pole errors by applying closed orbit bumps and analyzing the resulting tune and orbit shifts. This technique has been tested and used during 4 years of RHIC (the Relativistic Heavy Ion Collider at BNL) operations. I will discuss here the theoretical basis of the method, the experimental set-up, the correction results, the present understanding of the machine model, and the potential and limitations of the method itself as compared with other non-linear correction techniques.
Correcting a fundamental error in greenhouse gas accounting related to bioenergy
International Nuclear Information System (INIS)
Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den
2012-01-01
Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.
Directory of Open Access Journals (Sweden)
Amir H Pakpour
2013-01-01
Conclusions: The Iranian version of the NEI-RQL-42 is a valid and reliable instrument to assess refractive error correction quality-of-life in Iranian patients. Moreover this questionnaire can be used to evaluate the effectiveness of interventions in patients with refractive errors.
Computer modeling of batteries from non-linear circuit elements
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
Directory of Open Access Journals (Sweden)
Khalifa MA
2012-12-01
Full Text Available Mounir A Khalifa,1,2 Waleed A Allam,1,2 Mohamed S Shaheen2,3. 1Ophthalmology Department, Tanta University Eye Hospital, Tanta, Egypt; 2Horus Vision Correction Center, Alexandria, Egypt; 3Ophthalmology Department, Alexandria University, Alexandria, Egypt. Purpose: To investigate the efficacy and predictability of wavefront-guided laser in situ keratomileusis (LASIK) treatments using the iris registration (IR) technology for the correction of refractive errors in patients with large pupils. Setting: Horus Vision Correction Center, Alexandria, Egypt. Methods: Prospective noncomparative study including a total of 52 eyes of 30 consecutive laser refractive correction candidates with large mesopic pupil diameters and myopia or myopic astigmatism. Wavefront-guided LASIK was performed in all cases using the VISX STAR S4 IR excimer laser platform. Visual, refractive, aberrometric and mesopic contrast sensitivity (CS) outcomes were evaluated during a 6-month follow-up. Results: Mean mesopic pupil diameter ranged from 8.0 mm to 9.4 mm. A significant improvement in uncorrected distance visual acuity (UCDVA) was found postoperatively (P < 0.01), which was consistent with a significant refractive correction (P < 0.01). No significant change was detected in corrected distance visual acuity (CDVA) (P = 0.11). Efficacy index (the ratio of postoperative UCDVA to preoperative CDVA) and safety index (the ratio of postoperative CDVA to preoperative CDVA) were calculated. Mean efficacy and safety indices were 1.06 ± 0.33 and 1.05 ± 0.18, respectively, and 92.31% of eyes had a postoperative spherical equivalent within ±0.50 diopters (D). Manifest refractive spherical equivalent improved significantly (P < 0.05) from a preoperative level of −3.1 ± 1.6 D (range −6.6 to 0 D) to −0.1 ± 0.2 D (range −1.3 to 0.1 D) at 6 months postoperatively. No significant changes were found in mesopic CS (P ≥ 0.08), except CS for three cycles/degree, which improved significantly (P = 0
Refractive error and vision correction in a general sports-playing population.
Zeri, Fabrizio; Pitzalis, Sabrina; Di Vizio, Assunta; Ruffinatto, Tiziana; Egizi, Fabrizio; Di Russo, Francesco; Armstrong, Richard; Naroo, Shehzad A
2018-03-01
To evaluate, in an amateur sports-playing population, the prevalence of refractive error, the type of vision correction used during sport and attitudes toward different kinds of vision correction used in various types of sports. A questionnaire was used for people engaging in sport and data was collected from sport centres, gyms and universities that focused on the motor sciences. One thousand, five hundred and seventy-three questionnaires were collected (mean age 26.5 ± 12.9 years; 63.5 per cent male). Nearly all (93.8 per cent) subjects stated that their vision had been checked at least once. Fifty-three subjects (3.4 per cent) had undergone refractive surgery. Of the remainder who did not have refractive surgery (n = 1,519), 580 (38.2 per cent) reported a defect of vision, 474 (31.2 per cent) were myopic, 63 (4.1 per cent) hyperopic and 241 (15.9 per cent) astigmatic. Logistic regression analysis showed that the best predictors for myopia prevalence were gender (p prevalence of outdoor activity have lower prevalence of myopia. Contact lens penetration over the study sample was 18.7 per cent. Contact lenses were the favourite system of correction among people interviewed compared to spectacles and refractive surgery (p prevalence in the adult population. However, subjects engaging in outdoor sports had lower rates of myopia prevalence. Penetration of contact lens use in sport was four times higher than the overall adult population. Contact lenses were the preferred system of correction in sports compared to spectacles or refractive surgery, but this preference was affected by the type of sport practised and by the age and level of sports activity for which the preference was required. © 2017 Optometry Australia.
Linear and non-linear optics of condensed matter
International Nuclear Information System (INIS)
McLean, T.P.
1977-01-01
Part I - Linear optics: 1. General introduction. 2. Frequency dependence of epsilon(ω, k vector). 3. Wave-vector dependence of epsilon(ω, k vector). 4. Tensor character of epsilon(ω, k vector). Part II - Non-linear optics: 5. Introduction. 6. A classical theory of non-linear response in one dimension. 7. The generalization to three dimensions. 8. General properties of the polarizability tensors. 9. The phase-matching condition. 10. Propagation in a non-linear dielectric. 11. Second harmonic generation. 12. Coupling of three waves. 13. Materials and their non-linearities. 14. Processes involving energy exchange with the medium. 15. Two-photon absorption. 16. Stimulated Raman effect. 17. Electro-optic effects. 18. Limitations of the approach presented here. (author)
Non-linear realization of supersymmetry in de Sitter space
Zumino, B
1977-01-01
The author derives the non-linear transformation law and the non-linear Lagrangian for a Goldstone spinor corresponding to spontaneous breaking of global supersymmetry in a de Sitter space with O(3,2) invariance (anti-de Sitter space). With a suitable choice of the Goldstone spinor field, the Lagrangian agrees with the form suggested by the coupling to supergravity. The construction is also valid for the case of extended supersymmetry. (21 refs).
Luo, Hongyuan; Wang, Deyun; Yue, Chenqiang; Liu, Yanling; Guo, Haixiang
2018-03-01
In this paper, a hybrid decomposition-ensemble learning paradigm combining error correction is proposed for improving the forecast accuracy of daily PM10 concentration. The proposed learning paradigm consists of the following two sub-models: (1) a PM10 concentration forecasting model; (2) an error correction model. In the proposed model, fast ensemble empirical mode decomposition (FEEMD) and variational mode decomposition (VMD) are applied to decompose the original PM10 concentration series and the error sequence, respectively. The extreme learning machine (ELM) model optimized by the cuckoo search (CS) algorithm is utilized to forecast the components generated by FEEMD and VMD. In order to prove the effectiveness and accuracy of the proposed model, two real-world PM10 concentration series, collected from Beijing and Harbin in China, are adopted to conduct the empirical study. The results show that the proposed model performs remarkably better than all other considered models without error correction, which indicates the superior performance of the proposed model.
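The forecast-plus-error-correction idea can be sketched in a toy form (this is an illustration only, not the paper's pipeline: the persistence base model, the bias-style error model and the data below are invented for the sketch, whereas the paper uses FEEMD/VMD decomposition with CS-optimized ELM forecasters):

```python
# Two-stage "forecast + error correction" sketch (illustration only).

def persistence_forecast(series):
    """Base model: predict each value as the previous observation."""
    return [series[i - 1] for i in range(1, len(series))]

def error_corrected_forecast(series):
    """Correct the base forecast with the mean of past forecast errors."""
    base = persistence_forecast(series)
    corrected = []
    errors = []  # past errors: actual minus base forecast
    for i, pred in enumerate(base):
        bias = sum(errors) / len(errors) if errors else 0.0
        corrected.append(pred + bias)
        errors.append(series[i + 1] - pred)
    return base, corrected

# A series with a steady upward trend: persistence always lags by 2,
# so the error model learns the bias and removes it.
series = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
base, corrected = error_corrected_forecast(series)
mae = lambda preds: sum(abs(p - a) for p, a in zip(preds, series[1:])) / len(preds)
print(mae(base), mae(corrected))
```

Even this crude error model cuts the mean absolute error, which is the mechanism the abstract's second sub-model exploits at a much more sophisticated level.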
The design and use of an error correction information system for NASTRAN
Rosser, D. C., Jr.
1974-01-01
The Error Correction Information System (ECIS) is a system for two-way transmittal of NASTRAN maintenance information via a data base stored on a nationwide accessible computer. ECIS consists of two data bases. The first is used for comments, reporting NASTRAN Software Problem Reports (SPRs) and bookkeeping information, and can be updated by the user or the NASTRAN Office. The second is used by the NASTRAN Systems Management Office (NSMO) to store all SPR information and updates. The hardware needed by an accessing user is any desktop computer terminal and a telephone to communicate with the central computer. The instruction format is an engineering-oriented language and requires less than an hour to obtain a working knowledge of its functions.
Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression
Slinko, Arkadii
2015-01-01
This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data which must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with knowledge of the basics of algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves that is sufficient to understand such real-life applications as cryptography, secret sharing, error-correcting, fingerprinting and compression of information. It is the first to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.
'Ancient episteme' and the nature of fossils: a correction of a modern scholarly error.
Jordan, J M
2016-04-01
Beginning in the nineteenth century and continuing down to the present, many authors writing on the history of geology and paleontology have attributed the theory that fossils were inorganic formations produced within the earth, rather than by the deposition of living organisms, to the ancient Greeks and Romans. Some have even gone so far as to claim this was the consensus view from the classical period up through the Middle Ages. In fact, such a notion was entirely foreign to ancient and medieval thought and only appeared within the manifold of 'Renaissance episteme,' the characteristics of which have often been projected backwards by some historians onto earlier periods. This paper endeavors to correct this error, explain the development of the Renaissance view, describe certain ancient precedents thereof, and trace the history of the misinterpretation in the literature.
Bound on quantum computation time: Quantum error correction in a critical environment
International Nuclear Information System (INIS)
Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.
2010-01-01
We obtain an upper bound on the time available for quantum computation for a given quantum computer and decohering environment with quantum error correction implemented. First, we derive an explicit quantum evolution operator for the logical qubits and show that it has the same form as that for the physical qubits but with a reduced coupling strength to the environment. Using this evolution operator, we find the trace distance between the real and ideal states of the logical qubits in two cases. For a super-Ohmic bath, the trace distance saturates, while for Ohmic or sub-Ohmic baths, there is a finite time before the trace distance exceeds a value set by the user.
Directory of Open Access Journals (Sweden)
Akhsyim Afandi
2017-03-01
Full Text Available Whether monetary policy works through the bank lending channel depends on whether a monetary-induced change in bank loans originates from the supply side. Most empirical studies that employed vector autoregressive (VAR) models failed to fulfill this requirement. Aiming to offer a solution to this identification problem, this paper developed a five-variable vector error correction (VEC) model of two separate bank credit markets in Indonesia. Departing from previous studies, the model of each market took account of one structural break endogenously determined by implementing a unit root test. A cointegration test that took account of one structural break suggested two cointegrating vectors, identified as the bank lending supply and demand relations. The estimated VEC system for both markets suggested that bank loans adjusted more strongly in the direction of the supply equation.
Saeki, Hiroshi; Magome, Tamotsu
2014-10-01
To compensate for pressure-measurement errors caused by a synchrotron radiation environment, a precise method using a hot-cathode-ionization-gauge head with a correcting electrode was developed and tested in a simulation experiment with excess electrons in the SPring-8 storage ring. This method improves measurement accuracy by correctly reducing the pressure-measurement errors caused by electrons originating from the external environment and those originating from the primary gauge filament as influenced by the spatial conditions of the installed vacuum-gauge head. In the simulation experiment confirming the reduction of errors caused by the external environment, the pressure-measurement error using this method was less than several percent in the pressure range from 10^-5 Pa to 10^-8 Pa. After that experiment, to confirm the reduction of the error caused by spatial conditions, an additional experiment was carried out using a sleeve and showed that the improved function was available.
The use of concept maps to detect and correct concept errors (mistakes
Directory of Open Access Journals (Sweden)
Ladislada del Puy Molina Azcárate
2013-02-01
Full Text Available This work proposes to detect and correct concept errors (EECC) in order to achieve Meaningful Learning (AS). The behaviourist model does not meet the demands of meaningful learning, which implies bringing together thought, feeling and action to lead students to both commitment and responsibility. In order to respond to society's demands regarding knowledge and information, it is necessary to change the way of teaching and learning (from a behaviourist model to a constructivist model). In this context it is important not only to learn meaningfully but also to create knowledge, so as to develop discursive, creative and critical thought; concept errors are an obstacle to this. This study attempts to eliminate concept errors in order to achieve meaningful learning. For this, it is essential to elaborate a Teaching Module (MI), which implies the treatment of concept errors by a teacher able to change the dynamics of the group in the classroom. This Teaching Module was used with sixth-grade primary school and first-grade secondary school pupils in state-assisted schools in the north of Argentina (Tucumán and Jujuy). After evaluation, the results showed large, positive changes in the experimental groups in both attitude and academic results. Meaningful learning was shown through the pupils' creativity, their expression, and their ability to put what they learned into practice in everyday life.
A forward error correction technique using a high-speed, high-rate single chip codec
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
1989-01-01
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive Gaussian white noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
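The quoted code-word structure determines the code rate directly: 32n data bits plus 32 overhead bits give a rate of 32n / (32n + 32) = n / (n + 1), which reproduces the stated 7/8 minimum at n = 7. A quick check:

```python
# Code rate implied by the abstract's code-word structure:
# 32n data bits followed by 32 overhead bits.

def code_rate(n):
    data_bits = 32 * n
    overhead_bits = 32
    return data_bits / (data_bits + overhead_bits)

# n = 7 reproduces the quoted minimum rate of 7/8; larger n raises the rate.
for n in (7, 15, 31):
    print(n, code_rate(n))
```

Larger block multiples thus trade longer code words for lower bandspreading, consistent with the "7/8 or greater" phrasing.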
Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.
1996-01-01
Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.
Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Energy Technology Data Exchange (ETDEWEB)
Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven
2009-07-01
The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.
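One core ingredient of removing measurement error from a scatter estimate can be sketched very simply: subtract the mean measurement-error variance from the observed variance before taking the square root. This toy version uses invented numbers and is only the variance-subtraction idea, not the paper's full error-corrected Gaussian Mixture Model:

```python
import math
from statistics import pvariance

def intrinsic_scatter(colors, color_errors):
    """Estimate intrinsic scatter by subtracting the mean measurement-error
    variance from the observed (population) variance of the colors."""
    observed_var = pvariance(colors)
    error_var = sum(e * e for e in color_errors) / len(color_errors)
    return math.sqrt(max(observed_var - error_var, 0.0))

colors = [1.0, 1.2, 0.8, 1.1, 0.9]   # observed galaxy colors (toy values)
errors = [0.1] * 5                   # per-galaxy measurement errors
print(intrinsic_scatter(colors, errors))
```

With observed variance 0.02 and mean error variance 0.01, the recovered intrinsic scatter (0.1) is smaller than the raw scatter, which is exactly the bias the paper's method is designed to remove.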
Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data
Directory of Open Access Journals (Sweden)
Jinhua Han
2017-01-01
Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, in order to collect the road profiles experienced by a vehicle riding on unstructured roads. A method of correcting the road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of the gyro attitudes concentrates in the low frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high frequency part and a low frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between the gyro attitudes and the road profiles, the gyro attitude signals are separated from the low frequency road profile by the method of sliding block overlap based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for measuring road profiles at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
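Two of the ingredients described above can be sketched in simplified form: recovering displacement by double integration of acceleration, and splitting a profile into low- and high-frequency parts. A moving average stands in here for the paper's PSD-based frequency division, and all numbers are illustrative:

```python
def double_integrate(accel, dt):
    """Displacement from acceleration via two trapezoidal integrations."""
    vel, v = [0.0], 0.0
    for i in range(1, len(accel)):
        v += 0.5 * (accel[i - 1] + accel[i]) * dt
        vel.append(v)
    disp, x = [0.0], 0.0
    for i in range(1, len(vel)):
        x += 0.5 * (vel[i - 1] + vel[i]) * dt
        disp.append(x)
    return disp

def frequency_split(profile, window=3):
    """Split a profile into a smooth low-frequency part (moving average)
    and a high-frequency residual; the two parts sum back to the input."""
    half = window // 2
    low = []
    for i in range(len(profile)):
        seg = profile[max(0, i - half): i + half + 1]
        low.append(sum(seg) / len(seg))
    high = [p - l for p, l in zip(profile, low)]
    return low, high

dt = 0.1
accel = [2.0] * 11                   # constant 2 m/s^2 for 1 s
disp = double_integrate(accel, dt)
print(disp[-1])                      # ~1.0 m, since x = a*t^2/2
```

In the real system the integration drift and the gyro attitude signal both live in the low-frequency part, which is why the paper treats the two frequency bands separately.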
Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L
2011-01-07
The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
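For context, the univariate slope and bias correction (SBC) baseline that SPEC is compared against can be sketched as an ordinary least-squares fit on the standardization samples; the numbers below are invented for illustration:

```python
def fit_slope_bias(reference, predicted):
    """Least-squares slope and bias mapping predictions from the old
    calibration model onto reference values of standardization samples."""
    n = len(reference)
    mx = sum(predicted) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in predicted)
    sxy = sum((x - mx) * (y - my) for x, y in zip(predicted, reference))
    slope = sxy / sxx
    bias = my - slope * mx
    return slope, bias

# Standardization samples: after the instrument change, the model's
# predictions are shifted and scaled relative to the reference values.
reference = [1.0, 2.0, 3.0, 4.0]
predicted = [3.0, 5.0, 7.0, 9.0]     # predicted = 2*reference + 1
slope, bias = fit_slope_bias(reference, predicted)
correct = lambda p: slope * p + bias
print(correct(9.0))                  # maps back onto the reference scale
```

SBC only needs a handful of standardization samples, but because it applies one global slope and bias it cannot capture wavelength-dependent changes, which is part of the motivation for SPEC in the abstract.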
Linear and non-linear control of wind farms. Contribution to the grid stability
Energy Technology Data Exchange (ETDEWEB)
Fernandez, R.D. [Laboratorio de Electronica, Facultad de Ingenieria, Universidad Nacional de la Patagonia San Juan Bosco, Ciudad Universitaria, Km. 4, 9000, Comodoro Rivadavia (Argentina); Mantz, R.J. [Laboratorio de Electronica Industrial, Control e Instrumentacion (LEICI), Facultad de Ingenieria, Universidad Nacional de La Plata, CC 91, 1900, La Plata (Argentina); Comision de Investigaciones Cientificas de la Provincia de Buenos Aires, CICpBA, La Plata (Argentina); Battaiotto, P.E. [Laboratorio de Electronica Industrial, Control e Instrumentacion (LEICI), Facultad de Ingenieria, Universidad Nacional de La Plata, CC 91, 1900, La Plata (Argentina)
2010-06-15
This paper deals with linear and non-linear control of wind farms equipped with doubly-fed induction generators (DFIG). Both active and reactive wind farm powers are employed in two independent control laws in order to increase the damping of the oscillation modes of a power system. In this way, a general strategy is presented in which two correction terms, one from each independent control, are added to the normal operating condition of a wind farm. The proposed control laws are derived from the Lyapunov approach. While a non-linear correction is presented for the reactive power, for the wind farm active power it is demonstrated that the classical proportional and inertial laws can be considered via the Lyapunov approach if wind farms are regarded as real power plants, i.e. equivalent to conventional synchronous generation. Finally, some simulations are presented in order to support the theoretical considerations, demonstrating the potential contributions of both control laws. (author)
Ramo, Nicole L.; Puttlitz, Christian M.
2018-01-01
Compelling evidence that many biological soft tissues display both strain- and time-dependent behavior has led to the development of fully non-linear viscoelastic modeling techniques to represent the tissue’s mechanical response under dynamic conditions. Since the current stress state of a viscoelastic material is dependent on all previous loading events, numerical analyses are complicated by the requirement of computing and storing the stress at each step throughout the load history. This requirement quickly becomes computationally expensive, and in some cases intractable, for finite element models. Therefore, we have developed a strain-dependent numerical integration approach for capturing non-linear viscoelasticity that enables calculation of the current stress from a strain-dependent history state variable stored from the preceding time step only, which improves both fitting efficiency and computational tractability. This methodology was validated based on its ability to recover non-linear viscoelastic coefficients from simulated stress-relaxation (six strain levels) and dynamic cyclic (three frequencies) experimental stress-strain data. The model successfully fit each data set with average errors in recovered coefficients of 0.3% for stress-relaxation fits and 0.1% for cyclic. The results support the use of the presented methodology to develop linear or non-linear viscoelastic models from stress-relaxation or cyclic experimental data of biological soft tissues. PMID:29293558
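The computational point about history storage can be illustrated with the linear, one-term Prony-series case: an exponential relaxation kernel lets the hereditary integral be updated recursively from a single stored state variable, instead of re-summing the whole load history each step (the paper generalizes this to a strain-dependent state variable). A minimal sketch with made-up moduli and relaxation time:

```python
import math

def stress_history(strains, dt, e_inf=1.0, e1=1.0, tau=0.5):
    """Stress for a one-term Prony series, updating a single internal
    state variable h recursively instead of storing the full history."""
    decay = math.exp(-dt / tau)
    half = math.exp(-dt / (2.0 * tau))   # midpoint rule for the increment
    h = 0.0
    prev = 0.0
    stresses = []
    for eps in strains:
        h = decay * h + half * e1 * (eps - prev)   # O(1) history update
        stresses.append(e_inf * eps + h)
        prev = eps
    return stresses

# Step-strain relaxation: stress jumps, then decays toward the
# long-term equilibrium value e_inf * strain.
strains = [1.0] * 200
s = stress_history(strains, dt=0.01)
print(s[0], s[-1])
```

Each step costs O(1) memory and time, which is exactly the tractability gain the abstract attributes to the strain-dependent history state variable.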
DEFF Research Database (Denmark)
Jimenez, M.J.; Madsen, Henrik; Bloem, J.J.
2008-01-01
This paper focuses on a method for linear or non-linear continuous time modelling of physical systems using discrete time data. This approach facilitates a more appropriate modelling of more realistic non-linear systems. Particularly concerning advanced building components, convective and radiative heat interchanges are non-linear effects and represent significant contributions in a variety of components such as photovoltaic integrated facades or roofs and those using these effects as passive cooling strategies, etc. Since models are approximations of the physical system and data is encumbered with measurement errors, it is also argued that it is important to consider stochastic models. More specifically, this paper advocates using continuous-discrete stochastic state space models in the form of non-linear partially observed stochastic differential equations (SDEs) with measurement noise ...
International Nuclear Information System (INIS)
Saleh, Ahmed A.; Vu, Viet Q.; Gazder, Azdiar A.
2016-01-01
Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration. It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.
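The core of such a correction is dividing each measured intensity by a tilt-dependent intensity-loss curve. A minimal sketch using linear interpolation between tabulated tilt angles; the curve values here are hypothetical, not taken from the paper:

```python
def make_correction(tilt_angles, relative_intensity):
    """Build a tilt-correction function from a tabulated intensity-loss
    curve (relative to the intensity at zero tilt) by linear interpolation."""
    def correct(measured, tilt):
        # locate the bracketing tabulated angles
        for (a0, r0), (a1, r1) in zip(
                zip(tilt_angles, relative_intensity),
                zip(tilt_angles[1:], relative_intensity[1:])):
            if a0 <= tilt <= a1:
                frac = (tilt - a0) / (a1 - a0)
                r = r0 + frac * (r1 - r0)
                return measured / r
        raise ValueError("tilt outside tabulated range")
    return correct

# Hypothetical defocusing curve: intensity falls off as the sample tilts.
angles = [0.0, 30.0, 60.0, 75.0]
relative = [1.00, 0.95, 0.70, 0.40]
correct = make_correction(angles, relative)
print(correct(0.35, 60.0))
```

The paper's contribution is generating such a curve empirically for any Bragg angle when no texture-free reference of the same alloy exists; once the curve is available, applying it reduces to this division.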
The algebra of non-local charges in non-linear sigma models
International Nuclear Information System (INIS)
Abdalla, E.; Abdalla, M.C.B.; Brunelli, J.C.; Zadra, A.
1994-01-01
The complete Dirac algebra satisfied by the non-local charges conserved in non-linear sigma models is derived. Some examples of the calculation are given for the O(N) symmetry group. The resulting algebra corresponds to a saturated cubic deformation (with only maximum-order terms) of the Kac-Moody algebra. The results are generalized to the case when a Wess-Zumino term is present. In that case the algebra contains a lower-order correction (sub-saturation). (author). 1 ref
Buonaccorsi, John; Prochenka, Agnieszka; Thoresen, Magne; Ploski, Rafal
2016-09-30
Motivated by a genetic application, this paper addresses the problem of fitting regression models when the predictor is a proportion measured with error. While the problem of dealing with additive measurement error in fitting regression models has been extensively studied, the problem where the additive error is of a binomial nature has not been addressed. The measurement errors here are heteroscedastic for two reasons: dependence on the underlying true value and changing sampling effort over observations. While some of the previously developed methods for treating additive measurement error with heteroscedasticity can be used in this setting, other methods need modification. A new version of simulation extrapolation is developed, and we also explore a variation on the standard regression calibration method that uses a beta-binomial model based on the fact that the true value is a proportion. Although most of the methods introduced here can be used for fitting non-linear models, this paper will focus primarily on their use in fitting a linear model. While previous work has focused mainly on estimation of the coefficients, we will, with motivation from our example, also examine estimation of the variance around the regression line. In addressing these problems, we also discuss the appropriate manner in which to bootstrap for both inferences and bias assessment. The various methods are compared via simulation, and the results are illustrated using our motivating data, for which the goal is to relate the methylation rate of a blood sample to the age of the individual providing the sample. Copyright © 2016 John Wiley & Sons, Ltd.
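The paper's new simulation-extrapolation (SIMEX) variant is not specified in this abstract; the following is a generic SIMEX sketch for a binomially measured proportion, in which extra error is induced by redrawing each count from fewer trials at the observed rate, and the naive slope is extrapolated back to the error-free limit. The resampling scheme, the error levels, and the quadratic extrapolant are all illustrative assumptions:

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def simex_slope(w, m, y, reps=200, seed=1):
    """SIMEX sketch for a proportion predictor with binomial error.
    Extra error levels lam = 1, 2 are simulated by redrawing each count
    from m/(1+lam) trials at the observed rate w/m; the resulting naive
    slopes are extrapolated back to lam = -1 (the error-free limit) with
    the exact quadratic through the three (lam, slope) points."""
    rng = random.Random(seed)
    slopes = [ols_slope([wi / mi for wi, mi in zip(w, m)], y)]  # lam = 0
    for lam in (1.0, 2.0):
        acc = 0.0
        for _ in range(reps):
            ps = []
            for wi, mi in zip(w, m):
                k = max(1, round(mi / (1 + lam)))   # fewer trials -> more error
                ps.append(sum(rng.random() < wi / mi for _ in range(k)) / k)
            acc += ols_slope(ps, y)
        slopes.append(acc / reps)
    s0, s1, s2 = slopes
    # quadratic through (0, s0), (1, s1), (2, s2) evaluated at lam = -1
    return 3 * s0 - 3 * s1 + s2
```

A real implementation would also handle the heteroscedasticity across observations and the bootstrap inference discussed in the paper.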
Optical correction of refractive error for preventing and treating eye symptoms in computer users.
Heus, Pauline; Verbeek, Jos H; Tikka, Christina
2018-04-10
Computer users frequently complain about problems with seeing and functioning of the eyes. Asthenopia is a term generally used to describe symptoms related to (prolonged) use of the eyes like ocular fatigue, headache, pain or aching around the eyes, and burning and itchiness of the eyelids. The prevalence of asthenopia during or after work on a computer ranges from 46.3% to 68.5%. Uncorrected or under-corrected refractive error can contribute to the development of asthenopia. A refractive error is an error in the focusing of light by the eye and can lead to reduced visual acuity. There are various possibilities for optical correction of refractive errors including eyeglasses, contact lenses and refractive surgery. To examine the evidence on the effectiveness, safety and applicability of optical correction of refractive error for reducing and preventing eye symptoms in computer users. We searched the Cochrane Central Register of Controlled Trials (CENTRAL); PubMed; Embase; Web of Science; and OSH update, all to 20 December 2017. Additionally, we searched trial registries and checked references of included studies. We included randomised controlled trials (RCTs) and quasi-randomised trials of interventions evaluating optical correction for computer workers with refractive error for preventing or treating asthenopia and their effect on health related quality of life. Two authors independently assessed study eligibility and risk of bias, and extracted data. Where appropriate, we combined studies in a meta-analysis. We included eight studies with 381 participants. Three were parallel group RCTs, three were cross-over RCTs and two were quasi-randomised cross-over trials. All studies evaluated eyeglasses, there were no studies that evaluated contact lenses or surgery. Seven studies evaluated computer glasses with at least one focal area for the distance of the computer screen with or without additional focal areas in presbyopic persons. Six studies compared computer
Analytical exact solution of the non-linear Schroedinger equation
International Nuclear Information System (INIS)
Martins, Alisson Xavier; Rocha Filho, Tarcisio Marciano da
2011-01-01
Full text: In this work we present how to classify and obtain analytical solutions of the Schroedinger equation with a generic non-linearity in 1+1 dimensions. Our approach is based on the determination of Lie symmetry transformations mapping solutions into solutions, and non-classical symmetry transformations, mapping a given solution into itself. From these symmetries it is then possible to reduce the equation to a system of ordinary differential equations which can then be solved using standard methods. The generic non-linearity is handled by considering it as an additional unknown in the determining equations for the symmetry transformations. This results in an over-determined system of non-linear partial differential equations. Its solution can then be determined in some cases by reducing it to the so-called involutive (triangular) form, and then solved. This reduction is very tedious and can only be performed using a computer algebra system. Once the determining system is solved, we obtain the explicit form for the non-linearity admitting a Lie or non-classical symmetry. The analytical solutions are then derived by solving the reduced ordinary differential equations. The non-linear determining system for the non-classical symmetry transformations and Lie symmetry generators is obtained using the computer algebra package SADE (symmetry analysis of differential equations), developed at our group. (author)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
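The estimation step described above is simple enough to sketch: the bias tendency is the time-mean analysis increment divided by the 6-hr window, and the online correction adds it as a forcing term to the model tendency. A toy illustration (all numbers invented, not GFS output):

```python
def mean_increment_tendency(increments, window_hours=6.0):
    """Bias tendency: time-mean analysis increment over the 6-hr window,
    assuming initial model errors grow linearly."""
    return sum(increments) / len(increments) / window_hours

def corrected_tendency(model_tendency, bias_tendency):
    """Online correction: the estimated bias enters as a forcing term
    added to the model tendency equation."""
    return model_tendency + bias_tendency

# Toy example: analyses repeatedly warm a model that runs 0.6 K too cold.
increments = [0.5, 0.7, 0.6, 0.6]           # K added per 6-hr analysis cycle
bias = mean_increment_tendency(increments)   # about 0.1 K per hour
print(corrected_tendency(-0.2, bias))        # tendency plus bias forcing
```

The low-dimensional diurnal/semidiurnal part of the correction would replace the constant mean with a few harmonic components, but the forcing-term structure is the same.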
Lebrun, Claire
1976-01-01
This article analyzes three studies undertaken to scientifically define error patterns, and outlines a methodology for investigating them. The studies concern native English speakers learning French. (Text is in French.) (CLK)
COLLINARUS: collection of image-derived non-linear attributes for registration using splines
Chappelow, Jonathan; Bloch, B. Nicolas; Rofsky, Neil; Genega, Elizabeth; Lenkinski, Robert; DeWolf, William; Viswanath, Satish; Madabhushi, Anant
2009-02-01
We present a new method for fully automatic non-rigid registration of multimodal imagery, including structural and functional data, that utilizes multiple textural feature images to drive an automated spline based non-linear image registration procedure. Multimodal image registration is significantly more complicated than registration of images from the same modality or protocol on account of difficulty in quantifying similarity between different structural and functional information, and also due to possible physical deformations resulting from the data acquisition process. The COFEMI technique for feature ensemble selection and combination has been previously demonstrated to improve rigid registration performance over intensity-based MI for images of dissimilar modalities with visible intensity artifacts. Hence, we present here the natural extension of feature ensembles for driving automated non-rigid image registration in our new technique termed Collection of Image-derived Non-linear Attributes for Registration Using Splines (COLLINARUS). Qualitative and quantitative evaluation of the COLLINARUS scheme is performed on several sets of real multimodal prostate images and synthetic multiprotocol brain images. Multimodal (histology and MRI) prostate image registration is performed for 6 clinical data sets comprising a total of 21 groups of in vivo structural (T2-w) MRI, functional dynamic contrast enhanced (DCE) MRI, and ex vivo WMH images with cancer present. Our method determines a non-linear transformation to align WMH with the high resolution in vivo T2-w MRI, followed by mapping of the histopathologic cancer extent onto the T2-w MRI. The cancer extent is then mapped from T2-w MRI onto DCE-MRI using the combined non-rigid and affine transformations determined by the registration. Evaluation of prostate registration is performed by comparison with the 3 time point (3TP) representation of functional DCE data, which provides an independent estimate of cancer
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong and long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
Stability of non-linear constitutive formulations for viscoelastic fluids
Siginer, Dennis A
2014-01-01
Stability of Non-linear Constitutive Formulations for Viscoelastic Fluids provides a complete and up-to-date view of the field of constitutive equations for flowing viscoelastic fluids, in particular on their non-linear behavior, the stability of these constitutive equations that is their predictive power, and the impact of these constitutive equations on the dynamics of viscoelastic fluid flow in tubes. This book gives an overall view of the theories and attendant methodologies developed independently of thermodynamic considerations as well as those set within a thermodynamic framework to derive non-linear rheological constitutive equations for viscoelastic fluids. Developments in formulating Maxwell-like constitutive differential equations as well as single integral constitutive formulations are discussed in the light of Hadamard and dissipative type of instabilities.
Non-linear behaviour of large-area avalanche photodiodes
Fernandes, L M P; Monteiro, C M B; Santos, J M; Morgado, R E
2002-01-01
The characterisation of photodiodes used as photosensors requires a determination of the number of electron-hole pairs produced by scintillation light. One method involves comparing signals produced by X-ray absorptions occurring directly in the avalanche photodiode with the light signals. When the light is derived from light-emitting diodes in the 400-600 nm range, significant non-linear behaviour is reported. In the present work, we extend the study of the linear behaviour to large-area avalanche photodiodes, of Advanced Photonix, used as photosensors of the vacuum ultraviolet (VUV) scintillation light produced by argon (128 nm) and xenon (173 nm). We observed greater non-linearities in the avalanche photodiodes for the VUV scintillation light than reported previously for visible light, but considerably less than the non-linearities observed in other commercially available avalanche photodiodes.
Pattern formation due to non-linear vortex diffusion
Wijngaarden, Rinke J.; Surdeanu, R.; Huijbregtse, J. M.; Rector, J. H.; Dam, B.; Einfeld, J.; Wördenweber, R.; Griessen, R.
Penetration of magnetic flux in YBa2Cu3O7 superconducting thin films in an external magnetic field is visualized using a magneto-optic technique. A variety of flux patterns due to non-linear vortex diffusion is observed: (1) Roughening of the flux front with scaling exponents identical to those observed in burning paper, including two distinct regimes where spatial disorder and temporal disorder, respectively, dominate. In the latter regime Kardar-Parisi-Zhang behavior is found. (2) Fractal penetration of flux with Hausdorff dimension depending on the critical current anisotropy. (3) Penetration as ‘flux-rivers’. (4) The occurrence of commensurate and incommensurate channels in films with anti-dots, as predicted in numerical simulations by Reichhardt, Olson and Nori. It is shown that most of the observed behavior is related to the non-linear diffusion of vortices by comparison with simulations of the non-linear diffusion equation appropriate for vortices.
Non linear identification applied to PWR steam generators
International Nuclear Information System (INIS)
Poncet, B.
1982-11-01
For the precise industrial purpose of PWR nuclear power plant steam generator water level control, a natural method is developed where classical techniques seem not to be efficient enough. From this essentially non-linear practical problem, an input-output identification of dynamic systems is proposed. Through Homodynamic Systems, characterized by a regularity property which can be found in most industrial processes with balance set, state form realizations are built, which resolve the exact joining of local dynamic behaviors, in both discrete and continuous time cases, avoiding any load parameter. Specifically non-linear modelling analytical means, which have no influence on local joined behaviors, are also pointed out. Non-linear autoregressive realizations allow us to perform indirect adaptive control under constraint of an admissible given dynamic family [fr
DEFF Research Database (Denmark)
Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy
2004-01-01
An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (mu=+-1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.
Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.
2018-01-01
In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.
The role of dendritic non-linearities in single neuron computation
Directory of Open Access Journals (Sweden)
Boris Gutkin
2014-05-01
Full Text Available Experiments have demonstrated that the summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels and similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
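As a toy illustration of why a saturating dendritic non-linearity enlarges the computable function class, the following binary unit combines one saturating dendritic subunit with a direct somatic pathway and computes the exclusive OR, a linearly non-separable Boolean function. The particular weights, and the use of a negative weight on the direct pathway, are assumptions of this sketch rather than the paper's model:

```python
def heaviside(x):
    return 1 if x > 0 else 0

def neuron(x1, x2):
    """Binary unit: one saturating dendritic subunit plus a direct somatic
    pathway (illustrative weights only)."""
    dendrite = min(x1 + x2, 1)           # saturating dendritic summation
    soma = dendrite - 0.5 * (x1 + x2)    # somatic combination of both paths
    return heaviside(soma - 0.25)        # somatic threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron(a, b))        # prints the XOR truth table
```

With purely linear summation no choice of weights and threshold can produce this truth table, which is the sense in which the dendritic non-linearity adds computational power.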
Non Linear Analysis on Multi Lobe Journal Bearings
Udaya Bhaskar, S.; Manzoor Hussian, M.; Yousuf Ali, Md.
2017-08-01
Multi lobe journal bearings are used in machines which operate at high speeds and high loads. In this paper the multi lobe bearings are analyzed to determine the effect of surface roughness during non linear loading. A non-linear time transient analysis is performed using the fourth order Runge Kutta method. The finite difference method is used to predict the pressure distribution over the bearing surface. The effect of eccentricity ratio is studied and the variation of attitude angle is discussed. The journal center trajectories were calculated and plotted.
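The time-transient analysis above relies on classical fourth-order Runge-Kutta integration of the journal-center equations of motion. A generic single-step routine, checked here on an illustrative scalar ODE rather than the bearing equations:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Accuracy check on dy/dt = -y, whose exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # global error well below 1e-8
```

For the bearing problem `y` would be the vector of journal-center position and velocity, with `f` evaluating the film forces from the finite-difference pressure solution at each stage.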
Stochastic development regression on non-linear manifolds
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded...
Mathematical problems in non-linear Physics: some results
International Nuclear Information System (INIS)
1979-01-01
The basic results presented in this report are the following: 1) Characterization of the range and Kernel of the variational derivative. 2) Determination of general conservation laws in linear evolution equations, as well as bounds for the number of polynomial conserved densities in non-linear evolution equations in two independent variables of even order. 3) Construction of the most general evolution equation which has a given family of conserved densities. 4) Regularity conditions for the validity of the Lie invariance method. 5) A simple class of perturbations in non-linear wave equations. 6) Soliton solutions in generalized KdV equations. (author)
E11 and the non-linear dual graviton
Tumanov, Alexander G.; West, Peter
2018-04-01
The non-linear dual graviton equation of motion as well as the duality relation between the gravity and dual gravity fields are found in E theory by carrying out E11 variations of previously found equations of motion. As a result the equations of motion in E theory have now been found at the full non-linear level up to, and including, level three, which contains the dual graviton field. When truncated to contain fields at levels three and less, and the spacetime is restricted to be the familiar eleven dimensional space time, the equations are equivalent to those of eleven dimensional supergravity.
Implementation of neural network based non-linear predictive control
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1999-01-01
This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed to be extended for the control of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...
Implementation of neural network based non-linear predictive control
DEFF Research Database (Denmark)
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems including open loop unstable and non-minimum phase systems, but has also been proposed to be extended for the control of non-linear systems. GPC is model-based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi...
Foundations of the non-linear mechanics of continua
Sedov, L I
1966-01-01
International Series of Monographs on Interdisciplinary and Advanced Topics in Science and Engineering, Volume 1: Foundations of the Non-Linear Mechanics of Continua deals with the theoretical apparatus, principal concepts, and principles used in the construction of models of material bodies that fill space continuously. This book consists of three chapters. Chapters 1 and 2 are devoted to the theory of tensors and kinematic applications, focusing on the little-known theory of non-linear tensor functions. The laws of dynamics and thermodynamics are covered in Chapter 3.This volume is suitable
Realization of non-linear coherent states by photonic lattices
Energy Technology Data Exchange (ETDEWEB)
Dehdashti, Shahram, E-mail: shdehdashti@zju.edu.cn; Li, Rujiang; Chen, Hongsheng, E-mail: hansomchen@zju.edu.cn [State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou 310027 (China); The Electromagnetics Academy at Zhejiang University, Zhejiang University, Hangzhou 310027 (China); Liu, Jiarui, E-mail: jrliu@zju.edu.cn; Yu, Faxin [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China)
2015-06-15
In this paper, first, by introducing the Holstein-Primakoff representation of the α-deformed algebra, we obtain the associated non-linear coherent states, including su(2) and su(1, 1) coherent states. Second, by using waveguide lattices with specific coupling coefficients between neighbouring channels, we generate these non-linear coherent states. In the case of positive values of α, we show that the Hilbert space is finite-dimensional; therefore, we construct these coherent states with a finite number of waveguide-lattice channels. Finally, we study the field distribution behaviours of these coherent states by using the Mandel Q parameter.
International Nuclear Information System (INIS)
Van Aert, S.; Chen, J.H.; Van Dyck, D.
2010-01-01
A widely used performance criterion in high-resolution transmission electron microscopy (HRTEM) is the information limit. It corresponds to the inverse of the maximum spatial object frequency that is linearly transmitted with sufficient intensity from the exit plane of the object to the image plane and is limited due to partial temporal coherence. In practice, the information limit is often measured from a diffractogram or from Young's fringes assuming a weak phase object scattering beyond the inverse of the information limit. However, for an aberration corrected electron microscope, with an information limit in the sub-angstrom range, weak phase objects are no longer applicable since they do not scatter sufficiently in this range. Therefore, one relies on more strongly scattering objects such as crystals of heavy atoms observed along a low index zone axis. In that case, dynamical scattering becomes important such that the non-linear and linear interaction may be equally important. The non-linear interaction may then set the experimental cut-off frequency observed in a diffractogram. The goal of this paper is to quantify both the linear and the non-linear information transfer in terms of closed form analytical expressions. Whereas the cut-off frequency set by the linear transfer can be directly related with the attainable resolution, information from the non-linear transfer can only be extracted using quantitative, model-based methods. In contrast to the historic definition of the information limit depending on microscope parameters only, the expressions derived in this paper explicitly incorporate their dependence on the structure parameters as well. In order to emphasize this dependence and to distinguish from the usual information limit, the expressions derived for the inverse cut-off frequencies will be referred to as the linear and non-linear structural information limit. The present findings confirm the well-known result that partial temporal coherence has
Directory of Open Access Journals (Sweden)
M. A. Elshafey
2014-01-01
Full Text Available This paper presents a method of error-correcting coding of digital information. A feature of this method is the treatment of cases of inversion and skipped bits caused by a violation of the synchronization of the receiving and transmitting devices or other factors. The article gives a brief overview of the features, characteristics, and modern methods of construction of LDPC and convolutional codes, and considers a general model of the communication channel that takes into account the probability of bit inversion, deletion and insertion. The proposed coding scheme is based on a combination of LDPC coding and convolutional coding. A comparative analysis of the proposed combined coding scheme and a coding scheme containing only an LDPC coder is performed. Both schemes have the same coding rate. Experiments were carried out on two models of communication channels at different probability values of bit inversion and deletion. The first model allows only random bit inversion, while the other allows both random bit inversion and deletion. In the experiments, the decoding delay of the convolutional coder is investigated and analyzed, and the results demonstrate the feasibility of the proposed coding scheme for improving the efficiency of recovery of data transmitted over a communication channel with noise that causes random bit inversion and deletion, without decreasing the coding rate.
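For concreteness, here is a rate-1/2, constraint-length-3 convolutional encoder (generators 7 and 5 in octal), the kind of component that could serve as the inner code in such a combined scheme; the paper's actual code parameters are not given in the abstract:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2, constraint-length-3 convolutional encoder with the common
    (7, 5) octal generators; an illustrative inner code, not necessarily
    the one used in the paper."""
    state = 0  # two memory bits holding the previous two inputs
    out = []
    for b in bits:
        reg = (b << 2) | state               # shift register: b, s1, s2
        out.append(bin(reg & g1).count("1") % 2)  # parity tap for g1
        out.append(bin(reg & g2).count("1") % 2)  # parity tap for g2
        state = reg >> 1                     # slide the register by one bit
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

The convolutional layer's memory is what allows a Viterbi-style decoder to resynchronize after an insertion or deletion, which is the role it plays in the combined scheme.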
Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao
2018-01-01
Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing by a factor of 2.17 the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm). A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.
Quantum Error Correction: Optimal, Robust, or Adaptive? Or, Where is The Quantum Flyball Governor?
Kosut, Robert; Grace, Matthew
2012-02-01
In The Human Use of Human Beings: Cybernetics and Society (1950), Norbert Wiener introduces feedback control in this way: ``This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback ... It is the function of control ... to produce a temporary and local reversal of the normal direction of entropy.'' The classic classroom example of feedback control is the all-mechanical flyball governor used by James Watt in the 18th century to regulate the speed of rotating steam engines. What is it that is so compelling about this apparatus? First, it is easy to understand how it regulates the speed of a rotating steam engine. Secondly, and perhaps more importantly, it is a part of the device itself. A naive observer would not distinguish this mechanical piece from all the rest. So it is natural to ask, where is the all-quantum device which is self-regulating, i.e., the Quantum Flyball Governor? Is the goal of quantum error correction (QEC) to design such a device? Developing the computational and mathematical tools to design this device is the topic of this talk.
In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample
Wang, B.
2017-11-27
The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
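The core of the RSC method, fitting a low-order parametric model to the reference sample's artificial displacement field and subtracting it from the test-sample measurement, can be sketched as follows (one spatial dimension and a quadratic model are simplifying assumptions; the paper fits a parametric polynomial to full 3-D fields):

```python
def fit_poly2(zs, us):
    """Least-squares quadratic u(z) = a + b*z + c*z^2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    A = [[sum(z ** (i + j) for z in zs) for j in range(3)] for i in range(3)]
    r = [sum(u * z ** i for z, u in zip(zs, us)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, 3):
            f = A[k][col] / A[col][col]
            for j in range(col, 3):
                A[k][j] -= f * A[col][j]
            r[k] -= f * r[col]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        p[i] = (r[i] - sum(A[i][j] * p[j] for j in range(i + 1, 3))) / A[i][i]
    return p

def correct(z, u_meas, p):
    """Remove the artificial displacement predicted by the reference fit."""
    a, b, c = p
    return u_meas - (a + b * z + c * z * z)
```

The stationary reference sample supplies the `(zs, us)` pairs; any displacement measured on the test sample beyond the fitted surface is then attributed to real deformation.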
Assessment of cassava supply response in Nigeria using vector error correction model (VECM)
Directory of Open Access Journals (Sweden)
Obayelu Oluwakemi Adeola
2016-12-01
The response of agricultural commodities to changes in price is an important factor in the success of any reform programme in the agricultural sector of Nigeria. The producers of traditional agricultural commodities, such as cassava, face the world market directly. Consequently, the producer price of cassava has become unstable, which is a disincentive for both its production and trade. This study investigated cassava supply response to changes in price. Data collected from FAOSTAT from 1966 to 2010 were analysed using the Vector Error Correction Model (VECM) approach. The results of the VECM for the estimation of short-run adjustment of the variables toward their long-run relationship showed a linear deterministic trend in the data, and that area cultivated and own prices jointly explained 74% and 63% of the variation in Nigerian cassava output in the short run and long run, respectively. Cassava prices (P<0.001) and land cultivated (P<0.1) had a positive influence on cassava supply in the short run. The short-run price elasticity was 0.38, indicating that price policies were effective in the short-run promotion of cassava production in Nigeria. In the long run, however, cassava supply was not significantly responsive to price incentives. This suggests that price policies are not effective in the long-run promotion of cassava production in the country, owing to instability in governance and government policies.
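The error-correction logic behind a VECM can be illustrated with the simpler two-step Engle-Granger procedure: estimate the long-run (cointegrating) relationship in levels, then regress the differenced series on differenced regressors plus the lagged equilibrium error. This is a single-equation simplification of the multivariate VECM the study actually fits, with illustrative variable names, not the paper's data or specification.

```python
import numpy as np

def error_correction_model(y, x):
    """Two-step Engle-Granger error-correction estimate for two
    cointegrated series y and x (a sketch of the idea behind a VECM)."""
    # Step 1: long-run regression y_t = a + b*x_t + u_t
    X = np.column_stack([np.ones_like(x), x])
    b_long, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b_long                       # equilibrium errors
    # Step 2: short-run dynamics with the lagged error as correction term:
    # dy_t = c + g*dx_t + speed*u_{t-1} + e_t
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), dx, u[:-1]])
    b_short, *_ = np.linalg.lstsq(Z, dy, rcond=None)
    return {"long_run_slope": b_long[1],
            "short_run_slope": b_short[1],
            "adjustment_speed": b_short[2]}
```

A negative `adjustment_speed` indicates that deviations from the long-run relationship are corrected over time, which is the property the study exploits to separate short-run from long-run price responsiveness.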
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
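The successive-linearization idea at the heart of the abstract can be sketched on a toy scalar system: repeatedly linearize the non-linear rollout around the current nominal control sequence and solve the resulting convex least-squares subproblem for a damped update. This is an illustrative stand-in under simplified assumptions (scalar dynamics, no constraints, numerical Jacobians), not the authors' tube-based MPC algorithm.

```python
import numpy as np

def rollout(x0, u, dt=0.1):
    """Simulate toy non-linear scalar dynamics x+ = x + dt*(-x^3 + u)."""
    xs = np.empty(len(u))
    x = x0
    for k, uk in enumerate(u):
        x = x + dt * (-x**3 + uk)
        xs[k] = x
    return xs

def successive_linearization_control(x0, x_ref, N=30, iters=40):
    """Each outer iteration linearizes the non-linear rollout around the
    nominal controls and solves a convex least-squares subproblem,
    mirroring the paper's sequence of locally convex problems."""
    def residual(u):
        # stacked tracking error and lightly weighted control effort
        return np.concatenate([rollout(x0, u) - x_ref, 0.1 * u])

    u = np.zeros(N)
    for _ in range(iters):
        r = residual(u)
        J = np.empty((len(r), N))
        eps = 1e-6
        for j in range(N):  # finite-difference linearization of the rollout
            up = u.copy()
            up[j] += eps
            J[:, j] = (residual(up) - r) / eps
        du, *_ = np.linalg.lstsq(J, -r, rcond=None)  # convex subproblem
        u = u + 0.5 * du  # damped (contractive) update for robustness
    return u, rollout(x0, u)
```

The damping factor plays a role loosely analogous to the paper's contractive constraint: it keeps each update from leaving the region where the linearization is valid.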
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
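The correction referred to above is the classical-test-theory formula r* = r_xy / sqrt(r_xx * r_yy), where r_xx and r_yy are the reliabilities of the two measures. A minimal sketch:

```python
import math

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate of the correlation
    between the error-free (true-score) variables, given the observed
    correlation r_xy and the reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)
```

With perfectly reliable measures (both reliabilities 1.0) the correction leaves the correlation unchanged; lower reliabilities inflate the corrected value, which is why the corrected coefficient can exceed the observed one.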
Nazione, Samantha; Pace, Kristin
2015-01-01
Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.
Munoz, Carlos A.
2011-01-01
Very often, second language (L2) writers commit the same type of errors repeatedly, despite being corrected directly or indirectly by teachers or peers (Semke, 1984; Truscott, 1996). Apart from discouraging teachers from providing error correction feedback, this also makes them hesitant as to what form of corrective feedback to adopt. Ferris…
Debreli, Emre; Onuk, Nazife
2016-01-01
In the area of language teaching, corrective feedback is one of the popular and hotly debated topics that have been widely explored to date. A considerable number of studies on students' preferences of error correction and the effects of error correction approaches on student achievement do exist. Moreover, much on teachers' preferences of error…
The algebra of non-local charges in non-linear sigma models
International Nuclear Information System (INIS)
Abdalla, E.; Abdalla, M.C.B.; Brunelli, J.C.; Zadra, A.
1993-07-01
We obtain the exact Dirac algebra obeyed by the conserved non-local charges in bosonic non-linear sigma models. Part of the computation is specialized for a symmetry group O(N). As it turns out, the algebra corresponds to a cubic deformation of the Kac-Moody algebra. The non-linear terms are computed in closed form. In each Dirac bracket we only find highest order terms (as explained in the paper), defining a saturated algebra. We generalize the results for the presence of a Wess-Zumino term. The algebra is very similar to the previous one, containing now a calculable correction of order one unit lower. (author). 22 refs, 5 figs
Stochastic development regression on non-linear manifolds
DEFF Research Database (Denmark)
Kühnel, Line; Sommer, Stefan Horst
2017-01-01
We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion proce...
Non-Linear Vibration of Euler-Bernoulli Beams
DEFF Research Database (Denmark)
Barari, Amin; Kaliji, H. D.; Domairry, G.
2011-01-01
In this paper, variational iteration (VIM) and parametrized perturbation (PPM) methods have been used to investigate non-linear vibration of Euler-Bernoulli beams subjected to axial loads. The proposed methods do not require a small parameter in the equation, which is difficult to find...
A non-linear dissipative model of magnetism
Czech Academy of Sciences Publication Activity Database
Durand, P.; Paidarová, Ivana
2010-01-01
Roč. 89, č. 6 (2010), s. 67004 ISSN 1286-4854 R&D Projects: GA AV ČR IAA100400501 Institutional research plan: CEZ:AV0Z40400503 Keywords : non-linear dissipative model of magnetism * thermodynamics * physical chemistry Subject RIV: CF - Physical ; Theoretical Chemistry http://epljournal.edpsciences.org/
Quantum-dot-based integrated non-linear sources
DEFF Research Database (Denmark)
Bernard, Alice; Mariani, Silvia; Andronico, Alessio
2015-01-01
The authors report on the design and the preliminary characterisation of two active non-linear sources in the terahertz and near-infrared range. The former is associated with difference-frequency generation between whispering gallery modes of an AlGaAs microring resonator, whereas the latter...
Non-linear Behavior of Curved Sandwich Panels
DEFF Research Database (Denmark)
Berggreen, Carl Christian; Jolma, P.; Karjalainen, J. P.
2003-01-01
In this paper the non-linear behavior of curved sandwich panels is investigated both numerically and experimentally. Focus is on various aspects of finite element modeling and calculation procedures. A simply supported, singly curved, CFRP/PVC sandwich panel is analyzed under uniform pressure load...
Smoothing identification of systems with small non-linearities
Czech Academy of Sciences Publication Activity Database
Kozánek, Jan; Piranda, J.
2003-01-01
Roč. 38, č. 1 (2003), s. 71-84 ISSN 0025-6455 R&D Projects: GA ČR GA101/00/1471 Institutional research plan: CEZ:AV0Z2076919 Keywords : identification * small non-linearities * smoothing methods Subject RIV: BI - Acoustics Impact factor: 0.237, year: 2003
Non-linear excitation of gravitational radiation antennae
International Nuclear Information System (INIS)
Blair, D.G.
1982-01-01
A mechanism of non-linear excitation is proposed to explain observed excess noise in gravitational radiation antennae, driven by low frequency vibration. The mechanism is analogous to the excitation of a violin string by low frequency bowing. Numerical estimates for Weber bars suspended by cables are in good agreement with observations. (Auth.)
Utilization of non-linear converters for audio amplification
DEFF Research Database (Denmark)
Iversen, Niels Elkjær; Birch, Thomas; Knott, Arnold
2012-01-01
… The introduction of non-linear converters for audio amplification defeats this limitation. A Cuk converter, designed to deliver an AC peak output voltage twice the supply voltage, is presented in this paper. A 3V prototype has been developed to prove the concept. The prototype shows that it is possible to achieve...
Effect of Integral Non-Linearity on Energy Calibration of ...
African Journals Online (AJOL)
The integral non-linearity (INL) of four spectroscopy systems, two integrated (A1 and A2) and two classical (B1 and B2) systems was determined using pulses from a random pulse generator. The effect of INL on the system's energy calibration was also determined. The effect is minimal in the classical system at high ...
Multispectral face recognition using non linear dimensionality reduction
Akhloufi, Moulay A.; Bendada, Abdelhakim; Batsale, Jean-Christophe
2009-05-01
Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in infrared are based on their visible counterparts, especially linear techniques like PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). In this work, we introduce non-linear dimensionality reduction approaches for multispectral face recognition. For this purpose, the following techniques were developed: global non-linear techniques (Kernel-PCA, Kernel-LDA) and local non-linear techniques (Local Linear Embedding, Locality Preserving Projection). The performances of these techniques were compared to classical linear techniques for face recognition like PCA and LDA. Two multispectral face recognition databases were used in our experiments: the Equinox Face Recognition Database and the Laval University Database. The Equinox database contains images in the Visible, Short, Mid and Long wave infrared spectrums. The Laval database contains images in the Visible, Near, Mid and Long wave infrared spectrums, with variations in time and metabolic activity of the subjects. The obtained results show an increase in recognition performance using local non-linear dimensionality reduction techniques for infrared face recognition, particularly in the near and short wave infrared spectrums.
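One of the global non-linear techniques named in the abstract, Kernel-PCA, can be sketched in a few lines of numpy: build an RBF kernel matrix over the samples, double-center it, and project onto its leading eigenvectors. This is a generic textbook sketch, not the authors' implementation; the `gamma` parameter and function name are illustrative.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Minimal RBF Kernel-PCA: non-linear dimensionality reduction via
    an eigendecomposition of the centered kernel matrix."""
    # pairwise squared distances -> RBF (Gaussian) kernel matrix
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # double-center the kernel matrix (centering in feature space)
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigendecomposition; np.linalg.eigh returns ascending eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    # embed: eigenvectors scaled by sqrt of their (non-negative) eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

The local techniques mentioned (LLE, LPP) differ in building the kernel or affinity matrix from nearest-neighbor structure rather than a global kernel, but share the same eigendecomposition backbone.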
Geometrically non linear analysis of functionally graded material ...
African Journals Online (AJOL)
when compared to the other engineering materials (Akhavan and Hamed, 2010). However, FGM plates under mechanical loading may undergo elastic instability. Hence, the non-linear behavior of functionally graded plates has to be understood for their optimum design. Reddy (2000) proposed the theoretical formulation ...
Non-linear thermal fluctuations in a diode
Kampen, N.G. van
As an example of non-linear noise, the fluctuations in a circuit consisting of a diode and a condenser C are studied. From the master equation for this system the following results are derived. (i) The equilibrium distribution of the voltage is rigorously Gaussian, the average voltage being
Canonical structure of evolution equations with non-linear ...
Indian Academy of Sciences (India)
The dispersion produced is compensated by non-linear effects resulting in the formation of exponentially localized … The values of Lagrange's multipliers α_i are determined, and the total Hamiltonian H is modified to include the effect of the secondary constraint c_3.
About one non linear generalization of the compression reflection ...
African Journals Online (AJOL)
Both the stage and spiral iteration cases are considered. A geometrical interpretation of the convergence of the generalized iteration method is given. A formula for the non-linear generalized compression reflection operator as a function of one variable is obtained.
Current algebra of classical non-linear sigma models
International Nuclear Information System (INIS)
Forger, M.; Laartz, J.; Schaeper, U.
1992-01-01
The current algebra of classical non-linear sigma models on arbitrary Riemannian manifolds is analyzed. It is found that introducing, in addition to the Noether current j_μ associated with the global symmetry of the theory, a composite scalar field j, the algebra closes under Poisson brackets. (orig.)
Geometrically non linear analysis of functionally graded material ...
African Journals Online (AJOL)
Geometrically non linear analysis of functionally graded material plates using higher order theory. International Journal of Engineering, Science and Technology. The analysis of functionally graded material (FGM) plates with material variation parameter (n), boundary conditions, aspect ratios and side to ...
Efficient algorithms for non-linear four-wave interactions
Van Vledder, G.P.
2012-01-01
This paper addresses the on-going activities in the development of efficient methods for computing the non-linear four-wave interactions in operational discrete third-generation wind-wave models. It is generally assumed that these interactions play an important role in the evolution of wind
Applications of non-linear methods in astronomy
Martens, P.C.H.
1984-01-01
In this review I discuss catastrophes, bifurcations and strange attractors in a non-mathematical manner by giving very simple examples that still contain the essence of the phenomenon. The salient results of the applications of these non-linear methods in astrophysics are reviewed and include such
On iterative solution of non-linear equation | Ogbereyivwe | Journal ...
African Journals Online (AJOL)
[5] developed a new algorithm based on cubic interpolation for solving non-linear equations of degree 1 to 3. The algorithm was found to be faster than the Regula falsi and the Newton-Raphson methods. This paper extends the algorithm to ...
An inhomogeneous wave equation and non-linear Diophantine approximation
DEFF Research Database (Denmark)
Beresnevich, V.; Dodson, M. M.; Kristensen, S.
2008-01-01
A non-linear Diophantine condition involving perfect squares and arising from an inhomogeneous wave equation on the torus guarantees the existence of a smooth solution. The exceptional set associated with the failure of the Diophantine condition and hence of the existence of a smooth solution...
Non-linear dynamics in pulse combustor: A review
Indian Academy of Sciences (India)
2015-02-19
Pramana – Journal of Physics, Volume 84, Issue 3. Mechanical Engineering Department, Jadavpur University, Kolkata 700 032, India.
Hamouda, Arafat
2011-01-01
It is no doubt that teacher written feedback plays an essential role in teaching writing skill. The present study, by use of questionnaire, investigates Saudi EFL students' and teachers' preferences and attitudes towards written error corrections. The study also aims at identifying the difficulties encountered by teachers and students during the…
Jodaie, Mina; Farrokhi, Farahman; Zoghi, Masoud
2011-01-01
This study was an attempt to compare EFL teachers' and intermediate high school students' perceptions of written corrective feedback on grammatical errors and also to specify their reasons for choosing comprehensive or selective feedback and some feedback strategies over some others. To collect the required data, the student version of…