WorldWideScience

Sample records for hits-clip decodes mirna-mrna

  1. HITS-CLIP yields genome-wide insights into brain alternative RNA processing

    Science.gov (United States)

    Licatalosi, Donny D.; Mele, Aldo; Fak, John J.; Ule, Jernej; Kayikci, Melis; Chi, Sung Wook; Clark, Tyson A.; Schweitzer, Anthony C.; Blume, John E.; Wang, Xuning; Darnell, Jennifer C.; Darnell, Robert B.

    2008-11-01

    Protein-RNA interactions have critical roles in all aspects of gene expression. However, applying biochemical methods to understand such interactions in living tissues has been challenging. Here we develop a genome-wide means of mapping protein-RNA binding sites in vivo, by high-throughput sequencing of RNA isolated by crosslinking immunoprecipitation (HITS-CLIP). HITS-CLIP analysis of the neuron-specific splicing factor Nova revealed extremely reproducible RNA-binding maps in multiple mouse brains. These maps provide genome-wide in vivo biochemical footprints confirming the previous prediction that the position of Nova binding determines the outcome of alternative splicing; moreover, they are sufficiently powerful to predict Nova action de novo. HITS-CLIP revealed a large number of Nova-RNA interactions in 3' untranslated regions, leading to the discovery that Nova regulates alternative polyadenylation in the brain. HITS-CLIP, therefore, provides a robust, unbiased means to identify functional protein-RNA interactions in vivo.

  2. DGCR8 HITS-CLIP reveals novel functions for the Microprocessor.

    Science.gov (United States)

    Macias, Sara; Plass, Mireya; Stajuda, Agata; Michlewski, Gracjan; Eyras, Eduardo; Cáceres, Javier F

    2012-08-01

    The Drosha-DGCR8 complex (Microprocessor) is required for microRNA (miRNA) biogenesis. DGCR8 recognizes the RNA substrate, whereas Drosha functions as the endonuclease. Using high-throughput sequencing and cross-linking immunoprecipitation (HITS-CLIP) we identified RNA targets of DGCR8 in human cells. Unexpectedly, miRNAs were not the most abundant targets. DGCR8-bound RNAs also comprised several hundred mRNAs as well as small nucleolar RNAs (snoRNAs) and long noncoding RNAs. We found that the Microprocessor controlled the abundance of several mRNAs as well as of MALAT1. By contrast, DGCR8-mediated cleavage of snoRNAs was independent of Drosha, suggesting the involvement of DGCR8 in cellular complexes with other endonucleases. Binding of DGCR8 to cassette exons is a new mechanism for regulation of the relative abundance of alternatively spliced isoforms. These data provide insights into the complex role of DGCR8 in controlling the fate of several classes of RNAs.

  3. Ago HITS-CLIP expands understanding of Kaposi's sarcoma-associated herpesvirus miRNA function in primary effusion lymphomas.

    Directory of Open Access Journals (Sweden)

    Irina Haecker

    Full Text Available KSHV is the etiological agent of Kaposi's sarcoma (KS), primary effusion lymphoma (PEL), and a subset of multicentric Castleman's disease (MCD). The fact that KSHV-encoded miRNAs are readily detectable in all KSHV-associated tumors suggests a potential role in viral pathogenesis and tumorigenesis. MiRNA-mediated regulation of gene expression is a complex network with each miRNA having many potential targets, and to date only a few KSHV miRNA targets have been experimentally determined. A detailed understanding of KSHV miRNA functions requires high-throughput ribonomics to globally analyze putative miRNA targets in a cell type-specific manner. We performed Ago HITS-CLIP to identify viral and cellular miRNAs and their cognate targets in two latently KSHV-infected PEL cell lines. Ago HITS-CLIP recovered 1170 and 950 cellular KSHV miRNA targets from BCBL-1 and BC-3 cells, respectively. Importantly, enriched clusters contained KSHV miRNA seed matches in the 3'UTRs of numerous well-characterized targets, among them THBS1, BACH1, and C/EBPβ. KSHV miRNA targets were strongly enriched for genes involved in multiple pathways central to KSHV biology, such as apoptosis, cell cycle regulation, lymphocyte proliferation, and immune evasion, thus further supporting a role in KSHV pathogenesis and potentially tumorigenesis. A limited number of viral transcripts were also enriched by HITS-CLIP, including vIL-6, which is expressed only in a subset of PEL cells during latency. Interestingly, Ago HITS-CLIP revealed extremely high levels of Ago-associated KSHV miRNAs, especially in BC-3 cells, where more than 70% of all miRNAs are of viral origin. This suggests that in addition to seed match-specific targeting of cellular genes, KSHV miRNAs may also function by hijacking RISCs, thereby contributing to a global de-repression of cellular gene expression due to the loss of regulation by human miRNAs.
In summary, we provide an extensive list of cellular and viral miRNA targets representing an
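The seed-match enrichment described in the record above rests on a simple idea that can be grounded with a toy example. The sketch below scans a 3'UTR for canonical 7mer sites complementary to a miRNA's seed region (nucleotides 2-8); the sequences in the usage note are illustrative, and this is only the simplest form of the target-site matching that Ago HITS-CLIP analyses build on, not the papers' actual pipeline.

```python
def revcomp(seq):
    """Reverse complement of an RNA sequence (A-U, G-C pairing)."""
    pair = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(seq))

def seed_matches(mirna, utr):
    """Return 0-based positions in a 3'UTR that are complementary to the
    miRNA seed region (nucleotides 2-8, a canonical 7mer site)."""
    site = revcomp(mirna[1:8])        # motif the target must contain
    hits, start = [], utr.find(site)
    while start != -1:
        hits.append(start)
        start = utr.find(site, start + 1)
    return hits
```

For example, a miRNA starting `UAGCUUAU...` has seed `AGCUUAU`, so the scanner looks for the motif `AUAAGCU` in the target's 3'UTR.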

  4. HITS-CLIP analysis uncovers a link between the Kaposi's sarcoma-associated herpesvirus ORF57 protein and host pre-mRNA metabolism.

    Directory of Open Access Journals (Sweden)

    Emi Sei

    2015-02-01

    Full Text Available The Kaposi's sarcoma-associated herpesvirus (KSHV) is an oncogenic virus that causes Kaposi's sarcoma, primary effusion lymphoma (PEL), and some forms of multicentric Castleman's disease. The KSHV ORF57 protein is a conserved posttranscriptional regulator of gene expression that is essential for virus replication. ORF57 is multifunctional, but most of its activities are directly linked to its ability to bind RNA. We globally identified virus and host RNAs bound by ORF57 during lytic reactivation in PEL cells using high-throughput sequencing of RNA isolated by cross-linking immunoprecipitation (HITS-CLIP). As expected, ORF57-bound RNA fragments mapped throughout the KSHV genome, including the known ORF57 ligand PAN RNA. In agreement with previously published ChIP results, we observed that ORF57 bound RNAs near the oriLyt regions of the genome. Examination of the host RNA fragments revealed that a subset of the ORF57-bound RNAs was derived from transcript 5' ends. The position of these 5'-bound fragments correlated closely with the 5'-most exon-intron junction of the pre-mRNA. We selected four candidates (BTG1, EGR1, ZFP36, and TNFSF9 and analyzed their pre-mRNA and mRNA levels during lytic phase. Analysis of both steady-state and newly made RNAs revealed that these candidate ORF57-bound pre-mRNAs persisted for longer periods of time throughout infection than control RNAs, consistent with a role for ORF57 in pre-mRNA metabolism. In addition, exogenous expression of ORF57 was sufficient to increase the pre-mRNA levels and, in one case, the mRNA levels of the putative ORF57 targets. These results demonstrate that ORF57 interacts with specific host pre-mRNAs during lytic reactivation and alters their processing, likely by stabilizing pre-mRNAs. These data suggest that ORF57 is involved in modulating host gene expression in addition to KSHV gene expression during lytic reactivation.

  5. TP Decoding

    CERN Document Server

    Lu, Yi; Montanari, Andrea

    2007-01-01

    'Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It has been recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the `tree uniqueness threshold.' It can be regarded as a clever method for pruning the belief propagation computation tree in such a way as to exactly account for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning allows one to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method.

  6. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree...... of codewords at minimal distance from the received vector, we also obtain new information about the code....
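As a point of reference for the modified algorithm studied above, plain hard-decision bit flipping can be sketched in a few lines, shown here with the (7,4) Hamming code's parity-check matrix. This is the classical single-output variant, not the list-producing modification the record analyzes.

```python
def bit_flip_decode(H, received, max_iters=20):
    """Hard-decision bit flipping: while some parity checks fail, flip
    the bit involved in the largest number of unsatisfied checks."""
    word = list(received)
    n = len(word)
    for _ in range(max_iters):
        syndrome = [sum(row[j] & word[j] for j in range(n)) % 2 for row in H]
        if not any(syndrome):
            return word                                # a valid codeword
        votes = [sum(s for row, s in zip(H, syndrome) if row[j]) for j in range(n)]
        word[votes.index(max(votes))] ^= 1             # flip the most suspect bit
    return None                                        # did not converge
```

A list variant would keep every codeword reached within a given distance of the received vector instead of returning the first one found.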

  7. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is pos...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  8. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s.
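Although the contribution above is a high-speed hardware architecture, the decoding task itself is easy to state in software. The sketch below implements hard-decision Viterbi decoding for the small memory-2 rate-1/2 code with octal generators (7,5), chosen purely for illustration in place of the much larger (171,133) standard code.

```python
def conv_encode(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, memory-2 convolutional encoder (octal generators 7 and 5)."""
    mem = [0, 0]                       # shift register: last two input bits
    out = []
    for b in bits:
        window = [b] + mem
        for gi in g:
            out.append(sum(x * y for x, y in zip(window, gi)) % 2)
        mem = [b, mem[0]]
    return out

def viterbi_decode(received, g=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding: find the input sequence whose
    codeword is closest in Hamming distance to the received bits."""
    INF = float("inf")
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    cost = {s: (0 if s == (0, 0) else INF) for s in states}   # start in the zero state
    paths = {s: [] for s in states}
    for t in range(len(received) // 2):
        r = received[2 * t: 2 * t + 2]
        new_cost = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if cost[s] == INF:
                continue
            for b in (0, 1):
                window = [b, s[0], s[1]]
                out = [sum(x * y for x, y in zip(window, gi)) % 2 for gi in g]
                ns = (b, s[0])                        # next state after shifting in b
                c = cost[s] + sum(o != ri for o, ri in zip(out, r))
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    new_paths[ns] = paths[s] + [b]
        cost, paths = new_cost, new_paths
    best = min(states, key=lambda s: cost[s])         # survivor with the best metric
    return paths[best]
```

The add-compare-select step inside the state loop is exactly the recursion that the hardware architectures in these records parallelize and pipeline.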

  9. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  10. JSATS Decoder Software Manual

    Energy Technology Data Exchange (ETDEWEB)

    Flory, Adam E.; Lamarche, Brian L.; Weiland, Mark A.

    2013-05-01

    The Juvenile Salmon Acoustic Telemetry System (JSATS) Decoder is a software application that converts a digitized acoustic signal (a waveform stored in the .bwm file format) into a list of potential JSATS Acoustic MicroTransmitter (AMT) tagcodes along with other data about the signal, including time of arrival and signal-to-noise ratios (SNR). The software is capable of decoding single files or entire directories and of viewing raw acoustic waveforms. When coupled with the JSATS Detector, the Decoder is capable of decoding in ‘real-time’ and can also provide statistical information about acoustic beacons placed within receive range of hydrophones within a JSATS array. This document details the features and functionality of the software. The document begins with software installation instructions (section 2), followed in order by instructions for decoder setup (section 3), decoding process initiation (section 4), then monitoring of beacons (section 5) using real-time decoding features. The last section in the manual describes the beacon, beacon statistics, and the results file formats. This document does not consider the raw binary waveform file format.

  11. Decoding Astronomical Concepts

    Science.gov (United States)

    Durisen, Richard H.; Pilachowski, Catherine A.

    2004-01-01

    Two astronomy professors, using the Decoding the Disciplines process, help their students use abstract theories to analyze light and to visualize the enormous scale of astronomical concepts. (Contains 5 figures.)

  12. Optimization of MPEG decoding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1999-01-01

    MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing.

  13. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... give: a fast maximum-likelihood list decoder based on the Guruswami–Sudan algorithm; a new variant of Power decoding, Power Gao, along with some new insights into Power decoding; and a new, module based method for performing rational interpolation for theWu algorithm. We also show how to decode...

  14. Interpretability in Linear Brain Decoding

    OpenAIRE

    Kia, Seyed Mostafa; Passerini, Andrea

    2016-01-01

    Improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of brain decoding models. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, we present a simple definition for interpretability of linear brain decoding models. Then, we propose to combine the...

  15. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  16. Decoding Children's Expressions of Affect.

    Science.gov (United States)

    Feinman, Joel A.; Feldman, Robert S.

    Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…

  17. Loneliness and Interpersonal Decoding Skills.

    Science.gov (United States)

    Zakahi, Walter R.; Goss, Blaine

    1995-01-01

    Finds that the romantic loneliness dimension of the Differential Loneliness Scale is related to decoding ability, and that there are moderate linear relationships among several of the dimensions of the Differential Loneliness Scale, the self-report of listening ability, and participants' view of their own decoding ability. (SR)

  18. The Formal Specifications for Protocols of Decoders

    Institute of Scientific and Technical Information of China (English)

    YUAN Meng-ting; WU Guo-qing; SHU Feng-di

    2004-01-01

    This paper presents a formal approach, FSPD (Formal Specifications for Protocols of Decoders), to specifying decoder communication protocols. Axiomatically based, FSPD is a precise language with which programmers can use a single suitable driver to handle various types of decoders. FSPD helps programmers achieve high adaptability and reusability of decoder-driver software.

  19. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  20. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    .e., to be prepared to initiate improvement. The study shows how the effectiveness of the improvement system depends on the congruent fit between the five elements as well as the bridging coherence between the improvement system and the work system. The bridging coherence depends on how improvements are activated...... approach often ends up with demanding intense employee focus to sustain improvement and engagement. Likewise, a single-minded employee development approach often ends up demanding rationalization to achieve the desired financial results. These ineffective approaches make organizations react like pendulums...... that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period...

  1. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  2. Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    CERN Document Server

    Ling, Cong; Luzzi, Laura; Stehle, Damien

    2011-01-01

    In lattice-coded multiple-input multiple-output (MIMO) systems, optimal decoding amounts to solving the closest vector problem (CVP). Embedding is a powerful technique for the approximate CVP, yet its remarkable performance is not well understood. In this paper, we analyze the embedding technique from a bounded distance decoding (BDD) viewpoint. We prove that the Lenstra, Lenstra and Lovász (LLL) algorithm can achieve 1/(2γ)-BDD for γ ≈ O(2^(n/4)), yielding a polynomial-complexity decoding algorithm performing exponentially better than Babai's, which achieves γ = O(2^(n/2)). This substantially improves the existing result γ = O(2^n) for embedding decoding. We also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT).
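For context on the Babai baseline mentioned above, approximate CVP can be illustrated with Babai's rounding procedure (a simpler cousin of the nearest-plane algorithm the paper analyzes) in two dimensions: express the target in basis coordinates, round each coordinate, and map back. This toy sketch ignores basis reduction; with a skewed basis the approximation factor degrades, which is exactly what LLL preprocessing and the embedding technique are designed to control.

```python
def solve2(B, y):
    """Solve the 2x2 linear system B·x = y; columns of B are the basis vectors."""
    (a, b), (c, d) = B
    det = a * d - b * c
    return ((d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det)

def babai_round(B, y):
    """Babai's rounding for approximate CVP: write the target y in basis
    coordinates, round each coordinate to the nearest integer, map back."""
    k = [round(xi) for xi in solve2(B, y)]
    return (B[0][0] * k[0] + B[0][1] * k[1],
            B[1][0] * k[0] + B[1][1] * k[1])
```

With an orthogonal basis, as in the test case, rounding actually solves CVP exactly; the hard instances are the highly correlated bases that arise in MIMO channels.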

  3. Decoding OvTDM with sphere-decoding algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Overlapped time division multiplexing (OvTDM) is a new type of transmission scheme with high spectrum efficiency and a low threshold signal-to-noise ratio (SNR). In this article, the structure of OvTDM is introduced and a complex-domain sphere-decoding algorithm is proposed for OvTDM. Simulations demonstrate that the proposed algorithm can achieve maximum likelihood (ML) decoding with lower complexity than traditional maximum likelihood sequence demodulation (MLSD) or the Viterbi algorithm (VA).

  4. Decode the Sodium Label Lingo

    Science.gov (United States)

    Decode the Sodium Label Lingo. Published January 24, 2013. Reading food labels can help you slash sodium. Here's how to decipher them. "Sodium free" or " ...

  5. Pipelined Viterbi Decoder Using FPGA

    Directory of Open Access Journals (Sweden)

    Nayel Al-Zubi

    2013-02-01

    Full Text Available Convolutional encoding is used in almost all digital communication systems to get a better gain in BER (Bit Error Rate), and all applications need a high throughput rate. The Viterbi algorithm is the standard solution for the decoding process. The nonlinear and feedback nature of the Viterbi decoder makes its high-speed implementation harder. One promising approach to achieving high throughput in the Viterbi decoder is to introduce pipelining. This work applies a carry-save technique, which has the advantage that the critical path in the ACS feedback loop runs in one direction, eliminating carry ripple in the “Add” part of the ACS unit. Simulation and implementation show how this technique improves the throughput of the Viterbi decoder. The design complexities of the bit-pipelined architecture are evaluated and demonstrated using Verilog HDL simulation, and a general software simulation of a Viterbi decoder was also developed. Our research is concerned with implementations of Viterbi decoders for Field Programmable Gate Arrays (FPGAs). FPGAs are generally slower than custom integrated circuits but can be configured in the lab in a few hours, compared to fabrication, which takes months. The design was implemented using Verilog HDL and synthesized for Xilinx FPGAs.

  6. Orientation decoding: Sense in spirals?

    Science.gov (United States)

    Clifford, Colin W G; Mannion, Damien J

    2015-04-15

    The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias (the radial bias toward local orientations that are collinear with the centre of gaze) and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense.

  7. Decoding Dyslexia, a Common Learning Disability

    Science.gov (United States)

    Feature: Decoding Dyslexia, a Common Learning Disability. Past Issues / Winter 2016.

  8. Decoding, Semantic Processing, and Reading Comprehension Skill

    Science.gov (United States)

    Golinkoff, Roberta Michnick; Rosinski, Richard R.

    1976-01-01

    A set of decoding tests and picture-word interference tasks was administered to third and fifth graders to explore the relationship between single-word decoding, single-word semantic processing, and text comprehension skill. (BRT)

  9. Modular VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie

    1991-01-01

    Proposed Reed-Solomon decoder contains multiple very-large-scale integrated (VLSI) circuit chips of same type. Each chip contains sets of logic cells and subcells performing functions from all stages of decoding process. Full decoder assembled by concatenating chips, with selective utilization of cells in particular chips. Cost of development reduced by factor of 5. In addition, decoder programmable in field and switched between 8-bit and 10-bit symbol sizes.

  11. Interference Decoding for Deterministic Channels

    CERN Document Server

    Bandemer, Bernd

    2010-01-01

    An inner bound to the capacity region of a class of three-user-pair deterministic interference channels is presented. The key idea is to simultaneously decode the combined interference signal and the intended message at each receiver. It is shown that this interference decoding inner bound is strictly larger than the inner bound obtained by treating interference as noise, which includes interference alignment for deterministic channels. The gain comes from judicious analysis of the number of combined interference sequences in different regimes of input distributions and message rates.

  12. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGA's. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  13. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more...

  14. TURBO DECODER USING LOCAL SUBSIDIARY MAXIMUM LIKELIHOOD DECODING IN PRIOR ESTIMATION OF THE EXTRINSIC INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Yang Fengfan

    2004-01-01

    A new technique for turbo decoders is proposed by using local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that the iterative decoder with the new technique outperforms the decoder with the traditional Gaussian approach for the extrinsic information under the same conditions.

  15. Behavioral approach to list decoding

    NARCIS (Netherlands)

    Polderman, Jan Willem; Kuijper, Margreta

    2002-01-01

    List decoding may be translated into a bivariate interpolation problem. The interpolation problem is to find a bivariate polynomial of minimal weighted degree that interpolates a given set of pairs taken from a finite field. We present a behavioral approach to this interpolation problem. With the da

  16. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

    Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  17. Decoding the TV Remote Control.

    Science.gov (United States)

    O'Connell, James

    2000-01-01

    Describes how to observe the pulse structure of the infrared signals from the light-emitting diode in a TV remote control. This exercise in decoding infrared digital signals provides an opportunity to discuss semiconductors, photonics technology, cryptology, and the physics of how things work. (WRM)

  18. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,233) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...... decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both...

  19. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail...... with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm for decoding a single CR code using the idea of “Power” decoding, suggested for Reed-Solomon codes. A combination...... of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes....
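To see where the redundancy in a Chinese Remainder code comes from, consider the encoding itself: a message integer is transmitted as its residues modulo pairwise-coprime moduli, and any sufficiently large subset of correct residues determines the message uniquely. The sketch below shows only encoding and CRT reconstruction with hypothetical moduli; it does not implement the paper's lattice-based decoding of interleaved codes.

```python
from math import prod

def crt_encode(msg, moduli):
    """A Chinese Remainder code: transmit the message integer as its
    residues modulo pairwise-coprime moduli."""
    return [msg % m for m in moduli]

def crt(residues, moduli):
    """Reconstruct the unique x modulo prod(moduli) with x ≡ r_i (mod m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # pow(Mi, -1, m) is the modular inverse
    return x % M
```

With five moduli and a message below the product of the two smallest, any two correct residues already pin the message down; the remaining residues are the redundancy that error-correcting decoders exploit.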

  20. Multichannel Error Correction Code Decoder

    Science.gov (United States)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerant issues that would have to be addressed in a operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  1. Fingerprinting with Minimum Distance Decoding

    CERN Document Server

    Lin, Shih-Chun; Gamal, Hesham El

    2007-01-01

    This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack, where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary t, highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate 1 − H(0.25) ≈ 0.188 is achievable using an ...
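The minimum distance decision rule used in this record reduces to a nearest-neighbour search over the fingerprint codebook. A minimal sketch, with a made-up codebook and a toy two-pirate collusion (the user names and fingerprints are illustrative assumptions, not from the paper):

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def min_distance_accuse(forgery, codebook):
    """Accuse the user whose fingerprint is closest, in Hamming
    distance, to the fingerprint extracted from the forgery."""
    return min(codebook, key=lambda user: hamming(codebook[user], forgery))

# Toy 2-pirate collusion on binary fingerprints (hypothetical users):
codebook = {"alice": [0, 0, 1, 1, 0, 1],
            "bob":   [1, 0, 1, 0, 0, 1],
            "carol": [0, 1, 0, 1, 1, 0]}
# Pirates alice and bob keep the bits where they agree and choose freely
# elsewhere; here the free choices happen to match alice's bits.
forgery = [0, 0, 1, 1, 0, 1]
assert min_distance_accuse(forgery, codebook) in ("alice", "bob")
```

The rule identifies one pirate (here alice, at distance 0) rather than the whole coalition, matching the "1 out of t" goal stated above.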

  2. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

    A flexible software LDPC decoder that exploits data parallelism for simultaneous multi-codeword decoding on mobile devices is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and the private memory on the GPU, and by properly adjusting the code capacity each time, our implementation on a mobile phone achieves throughputs above 100 Mbps with a decoding delay of less than 1.6 milliseconds, which makes high-speed communication such as video calling possible. With efficient software LDPC decoding on the mobile device, the LDPC decoding function on the communication baseband chip can be replaced, saving cost and making it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  3. Towards joint decoding of Tardos fingerprinting codes

    CERN Document Server

    Meerwald, Peter

    2011-01-01

    The class of joint decoders of probabilistic fingerprinting codes is of utmost importance in theoretical papers to establish the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is, as far as we are aware, the first practical attempt towards joint decoding. The discriminative feature of the scores benefits on one hand from the side information of previously accused users, and on the other hand from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes precise assumptions about the collusion (size or strategy). The extension to incorporate soft outputs from the watermarking layer is straightforward. An intensive experimental work demonstrates the very good performance and offers a clear comparison with previous state-of-the-art decoders.

  4. Reduced-Latency SC Polar Decoder Architectures

    CERN Document Server

    Zhang, Chuan; Parhi, Keshab K

    2011-01-01

    Polar codes have become one of the most favorable capacity-achieving error correction codes (ECC), thanks in part to their simple encoding method. However, among the very few prior successive cancellation (SC) polar decoder designs, the required long code length makes the decoding latency high. In this paper, the conventional decoding algorithm is transformed with look-ahead techniques. This reduces the decoding latency by 50%. With pipelining and parallel processing schemes, a parallel SC polar decoder is proposed. A sub-structure sharing approach is employed to design the merged processing element (PE). Moreover, inspired by the real FFT architecture, this paper presents a novel input generating circuit (ICG) block that can generate additional input signals for merged PEs on the fly. Gate-level analysis has demonstrated that the proposed design halves the decoding latency and doubles the throughput of the conventional one with similar hardware cost.
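For readers unfamiliar with the baseline being accelerated, here is a minimal plain successive-cancellation decoder (no look-ahead or pipelining, min-sum approximation for the f-step). The length-4 code, the frozen-bit choice, and the noiseless LLRs are illustrative assumptions; a real design freezes the least reliable bit-channels.

```python
from math import copysign

def polar_encode(u):
    """Recursive polar transform over GF(2): x = (left xor right, right)."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    el, er = polar_encode(u[:half]), polar_encode(u[half:])
    return [a ^ b for a, b in zip(el, er)] + er

def sc_decode(llrs, frozen):
    """Successive-cancellation decoding; returns (u_hat, codeword_hat)."""
    if len(llrs) == 1:
        u = 0 if (frozen[0] or llrs[0] >= 0) else 1
        return [u], [u]
    half = len(llrs) // 2
    a, b = llrs[:half], llrs[half:]
    # f-step (min-sum): LLRs for the left sub-block
    f = [copysign(1.0, x * y) * min(abs(x), abs(y)) for x, y in zip(a, b)]
    u_l, x_l = sc_decode(f, frozen[:half])
    # g-step: fold the decoded left partial sums back in
    g = [y + (1 - 2 * xl) * x for x, y, xl in zip(a, b, x_l)]
    u_r, x_r = sc_decode(g, frozen[half:])
    return u_l + u_r, [p ^ q for p, q in zip(x_l, x_r)] + x_r

# Toy N=4 code; freezing u0 and u1 is an arbitrary choice for illustration.
frozen = [True, True, False, False]
u = [0, 0, 1, 1]                              # frozen bits fixed to 0
x = polar_encode(u)
llrs = [2.0 * (1 - 2 * bit) for bit in x]     # noiseless BPSK LLRs
u_hat, _ = sc_decode(llrs, frozen)
assert u_hat == u
```

The serial dependence of the g-step on the left sub-block's decisions is exactly the latency bottleneck that the look-ahead transformation in the record above targets.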

  5. Just-in-time adaptive decoder engine: a universal video decoder based on MPEG RVC

    OpenAIRE

    Gorin, Jérôme; Yviquel, Hervé; Prêteux, Françoise; Raulet, Mickaël

    2011-01-01

    In this paper, we introduce the Just-In-Time Adaptive Decoder Engine (Jade) project, which is shipped as part of the Open RVC-CAL Compiler (Orcc) project. Orcc provides a set of open-source software tools for managing decoders standardized within MPEG by the Reconfigurable Video Coding (RVC) experts. In this framework, Jade acts as a Virtual Machine for any decoder description that uses the MPEG RVC paradigm. Jade dynamically generates a native decoder representation s...

  6. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based...... on algebraic-geometry codes and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made....

  7. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    We present long linear codes constructed from toric varieties over finite fields, their multiplicative structure, and their decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes...... and aligns with decoding by error correcting pairs. We have used the multiplicative structure on toric codes to construct linear secret sharing schemes with strong multiplication via Massey's construction, generalizing the Shamir linear secret sharing schemes constructed from Reed-Solomon codes. We have...... constructed quantum error correcting codes from toric surfaces by the Calderbank-Shor-Steane method....

  8. Analysis of peeling decoder for MET ensembles

    CERN Document Server

    Hinton, Ryan

    2009-01-01

    The peeling decoder introduced by Luby et al. allows analysis of LDPC decoding for the binary erasure channel (BEC). For irregular ensembles, they analyze the decoder state as a Markov process and present a solution to the differential equations describing the process mean. Multi-edge type (MET) ensembles allow greater precision through specifying graph connectivity. We generalize the peeling decoder for MET ensembles and derive analogous differential equations. We offer a new change of variables and solution to the node fraction evolutions in the general (MET) case. This result is preparatory to investigating finite-length ensemble behavior.
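The peeling decoder itself is simple to state: repeatedly find a parity check with exactly one erased neighbour and solve for it. A minimal sketch on the BEC, using (7,4) Hamming-style checks chosen purely for illustration (the ensemble analysis in the record is about the statistics of this process, not shown here):

```python
def peel_bec(H, received):
    """Peeling decoder for a linear code on the binary erasure channel.
    H: parity checks as lists of variable indices.
    received: list of 0/1/None, where None marks an erasure.
    Resolves one degree-1 check at a time; leaves Nones behind if the
    decoder stalls on a stopping set."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [v for v in check if word[v] is None]
            if len(erased) == 1:
                # XOR of the known neighbours recovers the erased bit.
                word[erased[0]] = sum(word[v] for v in check
                                      if v != erased[0]) % 2
                progress = True
    return word

# (7,4) Hamming parity checks, an assumption for illustration.
H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]
codeword = [1, 0, 1, 1, 0, 0, 0]
received = list(codeword)
received[0] = received[5] = None      # erase two positions
assert peel_bec(H, received) == codeword
```

Whether this loop finishes or stalls depends only on which positions are erased; the differential-equation analysis above tracks the expected evolution of check degrees as erasures are peeled off.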

  9. Improved decoding for a concatenated coding system

    OpenAIRE

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,233) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, onl...

  10. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José;

    2013-01-01

    This paper studies the application of the beyond bound decoding method for high-speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  11. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  12. Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Schemes

    OpenAIRE

    Wang, Yongge

    2017-01-01

    This paper presents a survey on generalized Reed-Solomon codes and various decoding algorithms: the Berlekamp-Massey, Berlekamp-Welch, Euclidean, and discrete Fourier decoding algorithms; Chien's search; and Forney's algorithm.

  13. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policy that minimizes the sum decoding delay and our policy that reduces the max decoding delay. Simulation results show that our policy achieves a good balance among all the delay aspects in all situations, and even outperforms the sum decoding delay policy at minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
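The greedy packet selection mentioned above amounts to a greedy maximum-weight clique heuristic on the IDNC graph. A minimal sketch; the vertex names, weights, and adjacency below are invented for illustration and are not the paper's delay-increment expressions:

```python
def greedy_max_weight_clique(weights, adj):
    """Greedy heuristic for maximum weight clique: repeatedly take the
    heaviest remaining vertex that is adjacent to everything chosen."""
    candidates = set(weights)
    clique = []
    while candidates:
        v = max(candidates, key=lambda u: weights[u])
        clique.append(v)
        candidates &= adj[v]          # keep only neighbours of v
    return clique

# Toy IDNC-style graph: vertices are (receiver, packet) transmission
# options; an edge means the two options can be served by one coded
# packet. Weights are made-up stand-ins for delay-reduction scores.
weights = {"r1p2": 3.0, "r2p1": 2.5, "r3p2": 2.0, "r4p4": 1.0}
adj = {"r1p2": {"r3p2", "r4p4"},
       "r2p1": {"r4p4"},
       "r3p2": {"r1p2", "r4p4"},
       "r4p4": {"r1p2", "r2p1", "r3p2"}}
assert set(greedy_max_weight_clique(weights, adj)) == {"r1p2", "r3p2", "r4p4"}
```

Each selected clique corresponds to one coded transmission; the heuristic trades the NP-hard exact clique search for a linear pass per selection.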

  14. Decoder for Nonbinary CWS Quantum Codes

    CERN Document Server

    Melo, Nolmar; Portugal, Renato

    2012-01-01

    We present a decoder for nonbinary CWS quantum codes using the structure of union codes. The decoder runs in two steps: first we use a union of stabilizer codes to detect a sequence of errors, and second we build a new code, called the union code, that allows the errors to be corrected.

  15. Overview of Decoding across the Disciplines

    Science.gov (United States)

    Boman, Jennifer; Currie, Genevieve; MacDonald, Ron; Miller-Young, Janice; Yeo, Michelle; Zettel, Stephanie

    2017-01-01

    In this chapter we describe the Decoding the Disciplines Faculty Learning Community at Mount Royal University and how Decoding has been used in new and multidisciplinary ways in the various teaching, curriculum, and research projects that are presented in detail in subsequent chapters.

  16. Design Space of Flexible Multigigabit LDPC Decoders

    Directory of Open Access Journals (Sweden)

    Philipp Schläfer

    2012-01-01

    Multigigabit LDPC decoders are demanded by standards like IEEE 802.15.3c and IEEE 802.11ad. To achieve the high throughput while supporting the needed flexibility, sophisticated architectures are mandatory. This paper comprehensively presents the design space for flexible multigigabit LDPC applications for the first time. The influence of various design parameters on the hardware is investigated in depth. Two new decoder architectures in a 65 nm CMOS technology are presented to further explore the design space. In the past, memory domination was the bottleneck for throughputs of up to 1 Gbit/s. Our systematic investigation of column- versus row-based partially parallel decoders shows that this is no longer a bottleneck for multigigabit architectures. The evolutionary progress in flexible multigigabit LDPC decoder design is highlighted in an extensive comparison of state-of-the-art decoders.

  17. Turbo decoding using two soft output values

    Institute of Scientific and Technical Information of China (English)

    李建平; 潘申富; 梁庆林

    2004-01-01

    It is well known that turbo decoding always begins from the first component decoder and assumes that the a priori information is "0" in the first decoding iteration. By alternately starting decoding at the two component decoders, we can obtain two soft output values for the received observation of an input bit. Two soft output values comprise more extrinsic information than the single output value obtained in the conventional scheme, since different starting points of decoding result in different combinations of the a priori information and the input codewords, with different symbol orders due to the permutation of the interleaver. By summing the two soft output values for every bit before making hard decisions, we can correct more errors because the two values complement each other. Consequently, turbo codes can achieve better error-correcting performance this way. Simulation results show that the performance of turbo codes using the proposed decoding scheme generally improves over the conventional scheme, with a gain that grows with SNR. At a bit error probability of 10^-5, the proposed scheme achieves approximately 0.5 dB of asymptotic coding gain under the given simulation conditions.
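The combining step described above (sum the two soft outputs per bit, then slice) can be sketched directly. The LLR values below are invented to show the effect: each single pass misjudges one weakly decided bit, while the sum recovers the transmitted all-zero word.

```python
def hard_decisions(llrs):
    """Slice LLRs to bits (positive LLR means bit 0 is more likely)."""
    return [0 if l >= 0 else 1 for l in llrs]

def combined_decisions(llrs_pass_a, llrs_pass_b):
    """Sum the two soft outputs bit-by-bit before slicing, so the two
    decoding orders complement each other."""
    return [0 if (a + b) >= 0 else 1 for a, b in zip(llrs_pass_a, llrs_pass_b)]

# Hypothetical soft outputs from the two decoding orders for an
# all-zero transmitted word:
pass_a = [2.1, -0.4, 1.7, 0.9]    # slices to [0, 1, 0, 0] (one error)
pass_b = [1.8, 1.2, 1.5, -0.2]    # slices to [0, 0, 0, 1] (one error)
assert hard_decisions(pass_a) == [0, 1, 0, 0]
assert hard_decisions(pass_b) == [0, 0, 0, 1]
assert combined_decisions(pass_a, pass_b) == [0, 0, 0, 0]
```

Each weakly wrong value is outvoted by the confidently correct value from the other pass, which is the complementarity the record claims.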

  18. Application of RS Codes in Decoding QR Code

    Institute of Scientific and Technical Information of China (English)

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan

    2003-01-01

    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several virtues of the QR Code, analyzes the RS decoding algorithm, and gives a software flow chart for decoding the QR Code with the RS decoding algorithm.

  19. Three phase full wave dc motor decoder

    Science.gov (United States)

    Studer, P. A. (Inventor)

    1977-01-01

    A three phase decoder for dc motors is disclosed which employs an extremely simple six transistor circuit to derive six properly phased output signals for full-wave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.
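The logical function such a decoder implements is a mapping from the three position-sensor states to six drive signals (one high-side and one low-side switch active at a time). A software sketch of that truth table; the specific Hall-state-to-phase assignment is a typical 120-degree commutation sequence assumed for illustration, not taken from the patent:

```python
# Hall-sensor state (H1, H2, H3) -> (phase A, B, C) drive polarity:
# +1 = high-side switch on, -1 = low-side switch on, 0 = floating.
COMMUTATION = {
    (1, 0, 0): (+1, -1,  0),
    (1, 1, 0): (+1,  0, -1),
    (0, 1, 0): ( 0, +1, -1),
    (0, 1, 1): (-1, +1,  0),
    (0, 0, 1): (-1,  0, +1),
    (1, 0, 1): ( 0, -1, +1),
}

def decode_position(halls):
    """Map a rotor-position sensor reading to the three phase drive
    polarities (full-wave: current always enters one phase and exits
    another)."""
    return COMMUTATION[halls]

# Sanity check: every state drives exactly one source and one sink.
for state, drive in COMMUTATION.items():
    assert sorted(drive) == [-1, 0, 1]
```

The six transistors in the patent realize exactly this six-row table in analog hardware; states (0,0,0) and (1,1,1) are invalid sensor readings and are deliberately absent.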

  20. An Encoder/Decoder Scheme of OCDMA Based on Waveguide

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new encoder/decoder scheme for OCDMA based on waveguides is proposed in this paper. The principle as well as the structure of the waveguide encoder/decoder is given. It can be seen that an all-optical OCDMA encoder/decoder can be realized by the proposed waveguide scheme. It also makes the OCDMA encoder/decoder easy to integrate and the access easy to control. A system based on this scheme can work under entirely asynchronous conditions.

  1. List Decoding Tensor Products and Interleaved Codes

    CERN Document Server

    Gopalan, Parikshit; Raghavendra, Prasad

    2008-01-01

    We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. We show that for every code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes, including multivariate polynomials where the degree in each variable is bounded. We show that for every code, its list decoding radius remains unchanged under m-wise interleaving for an integer m. This generalizes a recent result of Dinur et al., who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). Using the notion of generalized Hamming weights, we give better list size bounds for both tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of boundi...

  2. Decoding the Disciplines as a Hermeneutic Practice

    Science.gov (United States)

    Yeo, Michelle

    2017-01-01

    This chapter argues that expert practice is an inquiry that surfaces a hermeneutic relationship between theory, practice, and the world, with implications for new lines of questioning in the Decoding interview.

  3. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  4. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fol...

  5. Facial age affects emotional expression decoding

    Directory of Open Access Journals (Sweden)

    Mara Fölster

    2014-02-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  6. Facial age affects emotional expression decoding.

    Science.gov (United States)

    Fölster, Mara; Hess, Ursula; Werheid, Katja

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  7. Coding and decoding with dendrites.

    Science.gov (United States)

    Papoutsi, Athanasia; Kastellakis, George; Psarrou, Maria; Anastasakis, Stelios; Poirazi, Panayiota

    2014-02-01

    Since the discovery of complex, voltage-dependent mechanisms in the dendrites of multiple neuron types, great effort has been devoted to the search for a direct link between dendritic properties and specific neuronal functions. Over the last few years, new experimental techniques have allowed the visualization and probing of dendritic anatomy, plasticity and integrative schemes with unprecedented detail. This vast amount of information has caused a paradigm shift in the study of memory, one of the most important pursuits in Neuroscience, and calls for the development of novel theories and models that will unify the available data according to some basic principles. Traditional models of memory considered neural cells as the fundamental processing units in the brain. Recent studies, however, propose new theories in which memory is not only formed by modifying the synaptic connections between neurons, but also by modifications of intrinsic and anatomical dendritic properties as well as fine tuning of the wiring diagram. In this review paper we present previous studies along with recent findings from our group that support a key role of dendrites in information processing, including the encoding and decoding of new memories, both at the single cell and the network level. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. On Decoding Irregular Tanner Codes

    CERN Document Server

    Even, Guy

    2011-01-01

    We present a new combinatorial characterization for local-optimality of a codeword in irregular Tanner codes. This characterization is a generalization of [Arora, Daskalakis, Steurer; 2009] and [Vontobel; 2010]. The main novelty in this characterization is that it is based on a conical combination of subtrees in the computation trees. These subtrees may have any degree in the local-code nodes and may have any height (even greater than the girth). We prove that local-optimality in this new characterization implies Maximum-Likelihood (ML) optimality and LP-optimality. We also show that it is possible to compute efficiently a certificate for the local-optimality of a codeword given the channel output. We apply this characterization to regular Tanner codes. We prove a lower bound on the noise threshold in channels such as BSC and AWGNC. When the noise is below this lower bound, the probability that LP decoding fails diminishes doubly exponentially in the girth of the Tanner graph. We use local optimality also to ...

  9. Sphere decoding complexity exponent for decoding full rate codes over the quasi-static MIMO channel

    CERN Document Server

    Jalden, Joakim

    2011-01-01

    In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly...

  10. Linear-programming Decoding of Non-binary Linear Codes

    CERN Document Server

    Flanagan, Mark F; Byrne, Eimear; Greferath, Marcus

    2007-01-01

    We develop a framework for linear-programming (LP) decoding of non-binary linear codes over rings. We prove that the resulting LP decoder has the 'maximum likelihood certificate' property, and we show that the decoder output is the lowest cost pseudocodeword. Equivalence between pseudocodewords of the linear program and pseudocodewords of graph covers is proved. LP decoding performance is illustrated for the (11,6,5) ternary Golay code with ternary PSK modulation over AWGN, and in this case it is shown that the LP decoder performance is comparable to codeword-error-rate-optimum hard-decision based decoding.

  11. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance, and greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on the Xilinx XC3D3400A device from the Spartan-3A DSP family.
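The min-sum approximation mentioned here replaces the check node's hyperbolic-tangent rule of BP with a sign-product and a minimum of magnitudes, which is what makes hardware implementation cheap. A minimal flooding-schedule software sketch (the (7,4) Hamming-style checks and LLR values are assumptions for illustration; a real LDPC matrix is much sparser and larger):

```python
def min_sum_decode(H, llrs, max_iters=18):
    """Flooding min-sum decoding. H: parity checks as lists of variable
    indices; llrs: channel LLRs (positive = bit 0). Returns hard decisions."""
    n = len(llrs)
    msgs = {(c, v): 0.0 for c, check in enumerate(H) for v in check}
    word = [0 if l >= 0 else 1 for l in llrs]
    for _ in range(max_iters):
        if all(sum(word[v] for v in check) % 2 == 0 for check in H):
            return word                       # all parity checks satisfied
        # variable-to-check: channel LLR plus the other checks' messages
        v2c = {(c, v): llrs[v] + sum(m for (c2, v2), m in msgs.items()
                                     if v2 == v and c2 != c)
               for c, check in enumerate(H) for v in check}
        # check-to-variable: sign product times min magnitude of the others
        for c, check in enumerate(H):
            for v in check:
                others = [v2c[(c, u)] for u in check if u != v]
                sign = -1.0 if sum(m < 0 for m in others) % 2 else 1.0
                msgs[(c, v)] = sign * min(abs(m) for m in others)
        word = [0 if llrs[v] + sum(m for (c2, v2), m in msgs.items()
                                   if v2 == v) >= 0 else 1
                for v in range(n)]
    return word

H = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]       # toy parity checks
llrs = [1.5, 1.5, -0.5, 1.5, 1.5, 1.5, 1.5]          # bit 2 weakly wrong
assert min_sum_decode(H, llrs) == [0] * 7
```

The dictionary-scan message passing here is O(n·m) per iteration for clarity; the partially parallel architecture in the record exists precisely to avoid such serial bottlenecks in hardware.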

  12. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and a lower mean decoding delay than the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  13. Error Exponents of Optimum Decoding for the Interference Channel

    CERN Document Server

    Etkin, Raul; Ordentlich, Erik

    2008-01-01

    Exponential error bounds for the finite-alphabet interference channel (IFC) with two transmitter-receiver pairs are investigated under the random coding regime. Our focus is on optimum decoding, as opposed to heuristic decoding rules that have been used in previous works, like joint typicality decoding, decoding based on interference cancellation, and decoding that considers the interference as additional noise. Indeed, the fact that the actual interfering signal is a codeword and not an i.i.d. noise process complicates the application of conventional techniques to the performance analysis of the optimum decoder. Using analytical tools rooted in statistical physics, we derive a single-letter expression for error exponents achievable under optimum decoding and demonstrate strict improvement over error exponents obtainable using suboptimal decoding rules, but which are amenable to more conventional analysis.

  14. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved Reed-Solomon codes, which allows close to N−K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  15. VLSI Design of a Turbo Decoder

    Science.gov (United States)

    Fang, Wai-Chi

    2007-01-01

    A very-large-scale-integrated-circuit (VLSI) turbo decoder has been designed to serve as a compact, high-throughput, low-power, lightweight decoder core of a receiver in a data-communication system. In a typical contemplated application, such a decoder core would be part of a single integrated circuit that would include the rest of the receiver circuitry and possibly some or all of the transmitter circuitry, all designed and fabricated together according to an advanced communication-system-on-a-chip design concept. Turbo codes are forward-error-correction (FEC) codes. Relative to older FEC codes, turbo codes enable communication at lower signal-to-noise ratios and offer greater coding gain. In addition, turbo codes can be implemented by relatively simple hardware. Therefore, turbo codes have been adopted as standard for some advanced broadband communication systems.

  16. Online Testable Decoder using Reversible Logic

    Directory of Open Access Journals (Sweden)

    Hemalatha K. N.; Manjula B. B.; Girija S.

    2012-02-01

    Full Text Available The project proposes to design and test a 2-to-4 reversible decoder circuit, and to convert a decoder circuit built from an arbitrary number of reversible gates into an online testable reversible one, independent of the type of reversible gate used. The constructed circuit can detect any single-bit error. Conventional digital circuits dissipate a significant amount of energy because bits of information are erased during logic operations; thus, if logic gates are designed such that the information bits are not destroyed, the power consumption can be reduced. No information bits are lost in a reversible computation, and reversible logic can be used to implement any Boolean logic function.
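
    The behavior described above can be sketched in Python as a functional model (illustrative only, not the gate-level reversible design): a 2-to-4 decoder produces a one-hot output, so an online check that the output weight is exactly one flags any single-bit error.

```python
def decode_2to4(a, b):
    """2-to-4 line decoder: output is one-hot, active index = 2*a + b."""
    return [int(not a and not b), int(not a and b),
            int(a and not b), int(a and b)]

def single_bit_error_detected(outputs):
    """Online test: a single-bit fault on any output line breaks the
    one-hot invariant, so checking the output weight flags it."""
    return sum(outputs) != 1
```

    A reversible implementation carries the same information forward, which is what makes such a parity-style online check possible without extra state.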

  17. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

    Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
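
    The DAgger loop described above can be sketched in a toy form (the scalar linear decoder, the function names, and the synthetic oracle are illustrative assumptions, not the paper's implementation): roll out the current decoder, query the oracle for the intended action in the visited states, aggregate the labeled data, and refit.

```python
import random

def fit(data):
    """Least-squares fit of a scalar linear decoder y = w * x."""
    if not data:
        return lambda x: 0.0
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data) or 1.0
    w = num / den
    return lambda x, w=w: w * x

def dagger(oracle, rollout, rounds=5):
    """DAgger loop: roll out the current decoder, ask the oracle for the
    intended action in each visited state, aggregate, and refit."""
    data, policy = [], fit([])
    for _ in range(rounds):
        states = rollout(policy)                  # states visited under current decoder
        data += [(s, oracle(s)) for s in states]  # oracle labels those states
        policy = fit(data)                        # supervised refit on the aggregate
    return policy

# toy run: the oracle's intended action is 2*x; the fitted decoder recovers w close to 2
random.seed(0)
policy = dagger(oracle=lambda s: 2.0 * s,
                rollout=lambda p: [random.uniform(-1, 1) for _ in range(20)])
```

    The key DAgger point survives even in this toy: the decoder is trained on states visited under its own rollouts, not only on states from expert demonstrations.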

  18. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    2007-01-01

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than that of the original list decoding without multiplicity.

  20. LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G

    2009-01-01

    This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.

  1. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  2. Bounds on List Decoding Gabidulin Codes

    CERN Document Server

    Wachter-Zeh, Antonia

    2012-01-01

    An open question about Gabidulin codes is whether polynomial-time list decoding beyond half the minimum distance is possible or not. In this contribution, we give a lower and an upper bound on the list size, i.e., the number of codewords in a ball around the received word. The lower bound shows that if the radius of this ball is greater than the Johnson radius, this list size can be exponential and hence, no polynomial-time list decoding is possible. The upper bound on the list size uses subspace properties.

  3. MAP decoding of variable length codes over noisy channels

    Science.gov (United States)

    Yao, Lei; Cao, Lei; Chen, Chang Wen

    2005-10-01

    In this paper, we discuss the maximum a-posteriori probability (MAP) decoding of variable length codes (VLCs) and propose a novel decoding scheme for Huffman VLC coded data in the presence of noise. First, we provide some simulation results of VLC MAP decoding and highlight some features that have not yet been discussed in existing work. We show that the improvement of MAP decoding over conventional VLC decoding comes mostly from the memory information in the source, and give some observations regarding the advantage of soft VLC MAP decoding over hard VLC MAP decoding when an AWGN channel is considered. Second, recognizing that the difficulty in VLC MAP decoding is the lack of synchronization between the symbol sequence and the coded bit sequence, which makes parsing from the latter to the former extremely complex, we propose a new MAP decoding algorithm that integrates the information of self-synchronization strings (SSSs), one important feature of the codeword structure, into conventional MAP decoding. A consistent performance improvement and decoding complexity reduction over conventional VLC MAP decoding can be achieved with the new scheme.

  4. TC81201F MPEG2 decoder LSI; MPEG2 decoder LSI TC81201F

    Energy Technology Data Exchange (ETDEWEB)

    Kitagaki, K. [Toshiba Corp., Tokyo (Japan)

    1996-04-01

    The moving picture expert group 2 (MPEG2) decoder LSI series have been developed in order to meet needs for diversifying multi-media systems. MPEG2 is an international standard for coding moving pictures, capable of compressing large quantities of moving picture data. Therefore, the decoder LSI for MPEG2 signals is also a key device to realize multi-media systems. The system needs are diversifying, as seen in different audio codes for DVDs and digital satellite broadcasting systems (DBSs). The company has developed, based on the decoder LSI T9556 announced in 1994, the TC81200F for mass production, the TC81201F optimized for the DVD system, and the TC81211F as a one-chip MPEG1 decoder. Chip cost and system cost of the TC81201F are reduced by optimizing functions and circuits, and by reducing external memories. 4 refs., 4 figs., 1 tab.

  5. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    achieve a high coding throughput, and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...
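
    A minimal sketch of the on-the-fly Gauss-Jordan idea for random linear network coding over GF(2) (illustrative only, not the paper's optimized variant): each arriving coded packet is reduced against the pivots found so far, kept only if it is innovative, and decoding completes once n independent rows have been seen.

```python
def gf2_decode(coded, n):
    """On-the-fly Gauss-Jordan elimination over GF(2).
    coded: iterable of (coefficient_bit_list, payload_int) coded packets.
    Returns the n source payloads, or None while rank is still deficient."""
    rows = []  # fully reduced rows: (coefficient bit-list, payload)
    for coef, pay in coded:
        coef = coef[:]
        for pcoef, ppay in rows:            # eliminate against known pivots
            if coef[pcoef.index(1)]:
                coef = [x ^ y for x, y in zip(coef, pcoef)]
                pay ^= ppay
        if any(coef):                       # innovative packet: a new pivot
            p = coef.index(1)
            rows = [([x ^ y for x, y in zip(rc, coef)], rp ^ pay)
                    if rc[p] else (rc, rp) for rc, rp in rows]
            rows.append((coef, pay))
    if len(rows) < n:
        return None
    rows.sort(key=lambda r: r[0].index(1))  # full rank: rows form an identity
    return [pay for _, pay in rows]
```

    Non-innovative (linearly dependent) packets reduce to the zero row and are simply discarded, which is exactly what keeps the per-packet work small in the on-the-fly formulation.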

  6. Older Adults Have Difficulty in Decoding Sarcasm

    Science.gov (United States)

    Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-01-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…

  7. A chemical system that mimics decoding operations.

    Science.gov (United States)

    Giansante, Carlo; Ceroni, Paola; Venturi, Margherita; Sakamoto, Junji; Schlüter, A Dieter

    2009-02-23

    The chemical information stored in equilibrium mixtures of molecular species is larger than the sum of information carried by the individual molecules. Protonation equilibria in dilute dichloromethane solution of a shape-persistent macrocycle bearing two 2,2'-bipyridine units and two Coumarin 2 moieties (see figure) can be exploited to mimic decoding operations.

  8. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme...

  9. BCS-18A command decoder-selector

    Science.gov (United States)

    Laping, H.

    1980-08-01

    This report describes an 18-channel command decoder-selector which operates in conjunction with an HF command receiver to allow secure and reliable radio control of high altitude balloon payloads. A detailed technical description and test results are also included.

  10. Deconstructing multivariate decoding for the study of brain function.

    Science.gov (United States)

    Hebart, Martin N; Baker, Chris I

    2017-08-04

    Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function.
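
    To make the univariate/multivariate contrast concrete, a minimal multivariate decoder is sketched below (a generic nearest-centroid classifier; the data layout and names are invented for illustration). It predicts a condition label from the whole activity pattern at once, rather than testing each feature separately as a univariate analysis would.

```python
def nearest_centroid_decode(train_a, train_b, pattern):
    """Classify a multivariate activity pattern by squared Euclidean
    distance to the mean pattern (centroid) of each condition."""
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    ca, cb = centroid(train_a), centroid(train_b)
    da = sum((x - m) ** 2 for x, m in zip(pattern, ca))
    db = sum((x - m) ** 2 for x, m in zip(pattern, cb))
    return "A" if da <= db else "B"
```

    The prediction here is a property of the pattern as a whole, which is one of the differences from univariate activation analysis that the abstract emphasizes.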

  11. Unsupervised adaptation of brain machine interface decoders

    Directory of Open Access Journals (Sweden)

    Tayfun eGürel

    2012-11-01

    Full Text Available The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be that the BMI decoder is able to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and nonstationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally work also with neuronal error signals instead of or in conjunction with the proposed unsupervised adaptation.

  12. Belief propagation decoding of quantum channels by passing quantum messages

    Science.gov (United States)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
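
    For the classical case that the abstract builds on, sum-product belief propagation on a binary parity-check code can be sketched compactly (a generic textbook formulation with message passing in the log-likelihood-ratio domain, not the paper's quantum-message construction):

```python
import math

def bp_decode(llr, checks, iters=10):
    """Sum-product belief propagation on a binary parity-check code.
    llr[i] is the channel log-likelihood ratio of bit i (positive favors 0);
    checks is a list of parity checks, each a list of variable indices."""
    m = {(c, i): 0.0 for c, chk in enumerate(checks) for i in chk}
    for _ in range(iters):
        for c, chk in enumerate(checks):
            for i in chk:
                prod = 1.0
                for j in chk:
                    if j == i:
                        continue
                    # variable-to-check message: channel LLR plus messages
                    # from all other checks touching variable j
                    v2c = llr[j] + sum(m[(d, j)] for d, dk in enumerate(checks)
                                       if d != c and j in dk)
                    prod *= math.tanh(v2c / 2.0)
                prod = max(min(prod, 0.999999), -0.999999)  # numeric guard
                m[(c, i)] = 2.0 * math.atanh(prod)
    # a-posteriori LLR per bit: channel LLR plus all incoming check messages
    post = [llr[i] + sum(m[(c, i)] for c, chk in enumerate(checks) if i in chk)
            for i in range(len(llr))]
    return [0 if p >= 0 else 1 for p in post]
```

    On a cycle-free factor graph, such as the length-3 repetition code used in the test, this computes exact bit-wise marginals; the quantum construction in the paper replaces these scalar messages with quantum ones.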

  13. Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

    Directory of Open Access Journals (Sweden)

    Thierry Lestable

    2008-05-01

    Full Text Available This paper presents a new approach to decode turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that it is necessary to add a preprocessing step on the parity-check matrix of the turbo code in order to ensure good topological properties of the Tanner graph and thus good iterative decoding performance. Finally, by capitalizing on the diversity which comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that aims to benefit from this diversity through collaborative decoding schemes.

  14. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability, and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
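
    The distinction drawn above can be seen in a tiny brute-force example (illustrative only): bit-wise MAP sums codeword likelihoods separately for each bit, and its output need not even be a codeword, unlike the ML word decision.

```python
def bitwise_map(codebook, received, p=0.1):
    """Exact bit-wise MAP decoding of a small code over a BSC(p):
    P(bit_i = b | r) is proportional to the sum of P(r | c) over all
    codewords c with c[i] = b (uniform prior on codewords)."""
    def likelihood(c):
        d = sum(ci != ri for ci, ri in zip(c, received))
        return (p ** d) * ((1 - p) ** (len(c) - d))
    bits = []
    for i in range(len(received)):
        p0 = sum(likelihood(c) for c in codebook if c[i] == 0)
        p1 = sum(likelihood(c) for c in codebook if c[i] == 1)
        bits.append(0 if p0 >= p1 else 1)
    return bits
```

    For the even-weight code of length 3 and received word (0, 0, 1), three codewords tie at Hamming distance 1, and the per-bit decisions reproduce the received word itself, which is not a codeword: minimizing bit error probability and minimizing word error probability are genuinely different objectives.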

  15. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  16. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  17. Space vehicle Viterbi decoder. [data converters, algorithms

    Science.gov (United States)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.
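
    The decoder's core algorithm can be sketched in software (a generic hard-decision, rate-1/2, constraint-length-3 textbook example with generators (7, 5) in octal; the brassboard above is a rate-1/3, constraint-length-7 soft-decision design):

```python
def conv_encode(bits, K=3, polys=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, generators (7,5) in octal."""
    state = 0  # previous K-1 input bits
    out = []
    for b in bits:
        full = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(full & g).count('1') & 1 for g in polys]
        state = full & ((1 << (K - 1)) - 1)
    return out

def viterbi_decode(rx, K=3, polys=(0b111, 0b101)):
    """Hard-decision Viterbi: minimum-Hamming-distance path through the trellis."""
    n_states = 1 << (K - 1)
    INF = float('inf')
    metric = [0.0] + [INF] * (n_states - 1)  # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), 2):
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = (s << 1) | b
                ns = full & (n_states - 1)
                branch = [bin(full & g).count('1') & 1 for g in polys]
                cost = metric[s] + sum(e != r for e, r in zip(branch, rx[t:t + 2]))
                if cost < new_metric[ns]:   # add-compare-select
                    new_metric[ns] = cost
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

    Soft-decision thresholds, finite path memory, and output-selection rules, evaluated in the report above, are refinements of exactly this add-compare-select recursion.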

  18. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high-level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific … separated by a sync marker and protected by error-correcting codes. We first give a survey of trends within the area of space modulation systems. We then discuss and define the interfaces and operating modes of the relevant system components. We present a list of system configurations that we find potentially useful. Algorithms for frame synchronization are described and analyzed. Further, the high-level architecture of units that contain frame synchronization and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder...

  19. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high-level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also, the systems use some form of QPSK modulation and transmit data in frames separated by a sync marker and protected by error-correcting codes. We first give a survey of trends within the area of space modulation systems. We then discuss and define the interfaces and operating modes of the relevant system components. Node synchronization performed within a Viterbi decoder is discussed, and algorithms for frame synchronization are described and analyzed. We present a list of system configurations that we find potentially useful. Further, the high level architecture of units that contain frame synchronization...

  20. Hardware Implementation of Serially Concatenated PPM Decoder

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in
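
    The log-domain trick mentioned above rests on the Jacobian logarithm, often written max*, shown here as a generic sketch (not the decoder's fixed-point implementation):

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|).
    In the log domain, products of likelihoods become sums, and sums of
    likelihoods become max* operations, avoiding both multiplications
    and floating-point underflow."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

    Dropping the correction term gives the cheaper max-log approximation; keeping it (often via a small lookup table in hardware) preserves full BCJR accuracy.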

  1. Olfactory Decoding Method Using Neural Spike Signals

    Institute of Scientific and Technical Information of China (English)

    Kyung-jin YOU; Hyun-chool SHIN

    2010-01-01

    This paper presents a novel method for inferring odors based on neural activities observed from rats' main olfactory bulbs. Multi-channel extracellular single unit recordings are done by microwire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. Neural responses, used as the key feature, are measured by subtracting firing rates before the stimulus from those after. For odor inference, a decoding method is developed based on ML estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, and 80.0% with three rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
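
    The ML decoding step can be sketched as follows, under an assumed independent-Poisson model of spike counts per channel (the model, the function names, and the rate table in the test are illustrative assumptions, not the paper's fitted values):

```python
import math

def ml_decode(counts, rate_table):
    """ML odor inference from per-channel spike counts, assuming each
    channel's count k is Poisson with an odor-dependent mean rate r:
    pick the odor maximizing sum_i [k_i*ln(r_i) - r_i - ln(k_i!)]."""
    def loglik(odor):
        return sum(k * math.log(r) - r - math.lgamma(k + 1)
                   for k, r in zip(counts, rate_table[odor]))
    return max(rate_table, key=loglik)
```

    With response features defined as rate changes relative to baseline, the same maximization applies; only the per-odor rate model changes.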

  2. Simplified Digital Subband Coders And Decoders

    Science.gov (United States)

    Glover, Daniel R.

    1994-01-01

    Simplified digital subband coders and decoders developed for use in converting digitized samples of source signals into compressed and encoded forms that maintain integrity of source signals while enabling transmission at low data rates. Examples of coding methods used in subbands include coarse quantization in high-frequency subbands, differential coding, predictive coding, vector quantization, and entropy or statistical coding. Encoders simpler, less expensive and operate rapidly enough to process video signals.

  3. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field.

  4. DSP Specific Optimized Implementation of Viterbi Decoder

    Directory of Open Access Journals (Sweden)

    Yame Asfia

    2010-04-01

    Full Text Available Due to the rapid change and flexibility of wireless communication protocols, there is a desire to move from hardware to software/firmware implementation in DSPs. High data rate requirements suggest a highly optimized firmware implementation in terms of execution speed and memory usage. This paper suggests optimization levels for the implementation of a viable Viterbi decoding algorithm (rate ½) on a commercial off-the-shelf DSP.

  5. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    Full Text Available We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.

  6. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values result in a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with element-based lattice reduction (ELR) and complex Lenstra-Lenstra-Lovász (CLLL) reduction.

  7. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  8. Performance Analysis of Viterbi Decoder for Wireless Applications

    Directory of Open Access Journals (Sweden)

    G.Sivasankar

    2014-07-01

    Full Text Available The Viterbi decoder is employed in wireless communication to decode convolutional codes, which are used in virtually every robust digital communication system. Convolutional encoding and Viterbi decoding form a powerful method for forward error correction. This paper deals with the synthesis and implementation in FPGA (Field Programmable Gate Array) of a Viterbi decoder with constraint lengths of three and seven and a code rate of ½. The performance of the Viterbi decoder is analyzed in terms of resource utilization. The design is simulated using Verilog HDL, then synthesized and implemented using Xilinx ISE 9.1 and a Spartan 3E kit. It is compatible with many common standards such as 3GPP, IEEE 802.16 and LTE.
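
A minimal software model of the algorithm the FPGA design implements, for the standard constraint-length-3, rate-1/2 code with generators (7, 5) in octal; the generator choice is an assumption, since the abstract does not state it, and this Python sketch only mirrors the algorithm, not the VHDL design.

```python
G = (0b111, 0b101)   # generator polynomials (7, 5) octal, K = 3

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state              # input bit plus two memory bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi decoding (Hamming branch metric)."""
    INF = float("inf")
    metric = [0, INF, INF, INF]             # start in the all-zero state
    paths = [[] for _ in range(4)]
    for t in range(0, len(rx), 2):
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp = [bin(reg & g).count("1") & 1 for g in G]
                ns = reg >> 1
                d = (exp[0] != rx[t]) + (exp[1] != rx[t + 1])
                if metric[s] + d < new_metric[ns]:
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

print(viterbi_decode(conv_encode([1, 0, 1, 1])))  # → [1, 0, 1, 1]
```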

  9. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    Directory of Open Access Journals (Sweden)

    Jun Jin Kong

    2003-12-01

    Full Text Available We propose an area-efficient high-speed interleaved Viterbi decoder architecture, based on the state-parallel architecture with a register-exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with I delays, an interleaved Viterbi decoder is obtained, where I is the interleaving degree. The decoding speed of this decoder architecture is as fast as the operating clock speed. The latency of the proposed interleaved Viterbi decoder is “decoding depth (DD) × interleaving degree (I) + extra delays (A),” which increases linearly with the interleaving degree I.

  10. A Modified max-log-MAP Decoding Algorithm for Turbo Decoding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Turbo decoding is iterative decoding, and the MAP algorithm is optimal in terms of performance in Turbo decoding. The log-MAP algorithm is the MAP algorithm executed in the logarithmic domain, so it is also optimal. Both the MAP and the log-MAP algorithms are complicated to implement. The max-log-MAP algorithm is derived from the log-MAP by approximation; it is simpler than the log-MAP algorithm but suboptimal in terms of performance. A modified max-log-MAP algorithm is presented in this paper, based on the Taylor series of the logarithm and exponential functions. Analysis and simulation results show that the modified max-log-MAP algorithm outperforms the max-log-MAP algorithm with almost the same complexity.
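
The three algorithms differ only in how they evaluate the Jacobian logarithm max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|): log-MAP keeps the correction term, max-log-MAP drops it, and the paper's modification rebuilds it from series expansions. A sketch, where the two-term truncation below is an illustrative choice, not necessarily the paper's exact formula:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by log-MAP."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """max-log-MAP approximation: the correction term is dropped."""
    return max(a, b)

def max_star_series(a, b, terms=2):
    """Correction rebuilt from a truncated series ln(1+x) ≈ x - x²/2 + ..."""
    x = math.exp(-abs(a - b))
    corr = sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))
    return max(a, b) + corr

print(round(max_star(1.0, 0.0), 4), max_log(1.0, 0.0),
      round(max_star_series(1.0, 0.0), 4))  # → 1.3133 1.0 1.3002
```

The series version recovers most of the gap between max-log-MAP and log-MAP while still avoiding the exact log/exp evaluation.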

  11. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users’ and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to anticipate accurately the decoding delay increase at each user. The paper formulates the problem in this environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time than the best-known heuristics for completion time reduction in perfect and imperfect feedback. The performance gap becomes more significant as the channel erasure probability increases.

  12. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  13. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Chen Huijun

    2008-01-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth-efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both the STTC and the VLEC to improve overall decoding performance. Issues such as the inseparable systematic information at the symbol level, the asymmetric trellis structure of the VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between the STTC and the VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher-order modulation of the STTC, joint decoding yields better performance, though with increased complexity.

  14. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Huijun Chen

    2008-07-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth-efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both the STTC and the VLEC to improve overall decoding performance. Issues such as the inseparable systematic information at the symbol level, the asymmetric trellis structure of the VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between the STTC and the VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher-order modulation of the STTC, joint decoding yields better performance, though with increased complexity.

  15. Grasp movement decoding from premotor and parietal cortex.

    Science.gov (United States)

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
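
A minimal sketch of this style of Bayesian decoding, with invented firing rates: each unit's spike count in the planning window is modelled as Poisson with a condition-dependent mean, and the condition with the highest likelihood (uniform priors) is reported. This illustrates the general approach only, not the paper's fitted decoder.

```python
import math

def log_poisson(k, lam):
    """Log-likelihood of observing k spikes under a Poisson mean lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def classify(counts, rate_table):
    """Return the condition whose rate model best explains the spike counts."""
    best, best_ll = None, -math.inf
    for label, rates in rate_table.items():
        ll = sum(log_poisson(k, lam) for k, lam in zip(counts, rates))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Invented mean counts per unit for two grip types.
rates = {"power": [8.0, 2.0, 5.0], "precision": [2.0, 9.0, 4.0]}
print(classify([7, 1, 6], rates))  # → power
```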

  16. Efficient Decoding of Partial Unit Memory Codes of Arbitrary Rate

    CERN Document Server

    Wachter-Zeh, Antonia; Bossert, Martin

    2012-01-01

    Partial Unit Memory (PUM) codes are a special class of convolutional codes, which are often constructed by means of block codes. Decoding of PUM codes may take advantage of existing decoders for the block code. The Dettmar--Sorger algorithm is an efficient decoding algorithm for PUM codes, but allows only low code rates. The same restriction holds for several known PUM code constructions. In this paper, an arbitrary-rate construction, the analysis of its distance parameters and a generalized decoding algorithm for PUM codes of arbitrary rate are provided. The correctness of the algorithm is proven and it is shown that its complexity is cubic in the length.

  17. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde;

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  18. A primer on equalization, decoding and non-iterative joint equalization and decoding

    Science.gov (United States)

    Myburgh, Hermanus C.; Olivier, Jan C.

    2013-12-01

    In this article, a general model for non-iterative joint equalization and decoding is systematically derived for use in systems transmitting convolutionally encoded BPSK-modulated information through a multipath channel, with and without interleaving. Optimal equalization and decoding are discussed first, by presenting the maximum likelihood sequence estimation and maximum a posteriori probability algorithms and relating them to equalization in single-carrier channels with memory, and to the decoding of convolutional codes. The non-iterative joint equalizer/decoder (NI-JED) is then derived for the case where no interleaver is used, as well as for the case when block interleavers of varying depths are used, and complexity analyses are performed in each case. Simulations are performed to compare the performance of the NI-JED to that of a conventional turbo equalizer (CTE), and it is shown that the NI-JED outperforms the CTE, although at much higher computational cost. This article serves to explain the state-of-the-art to students and professionals in the field of wireless communication systems, presenting these fundamental topics clearly and concisely.

  19. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  20. GLRT-Optimal Noncoherent Lattice Decoding

    CERN Document Server

    Ryan, Daniel J; Clarkson, I Vaughan L

    2007-01-01

    This paper presents new low-complexity lattice-decoding algorithms for noncoherent block detection of QAM and PAM signals over complex-valued fading channels. The algorithms are optimal in terms of the generalized likelihood ratio test (GLRT). The computational complexity is polynomial in the block length, making GLRT-optimal noncoherent detection feasible for implementation. We also provide even lower complexity suboptimal algorithms. Simulations show that the suboptimal algorithms have performance indistinguishable from the optimal algorithms. Finally, we consider block based transmission, and propose to use noncoherent detection as an alternative to pilot assisted transmission (PAT). The new technique is shown to outperform PAT.

  1. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

    Yap, Regina; Van Der Leu, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  2. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including the rapid digital frequency synthesizer in multiple frequency shift keying, the image data generator, an image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  3. Interim Manual for the DST: Decoding Skills Test.

    Science.gov (United States)

    Richardson, Ellis; And Others

    The Decoding Skills Test (DST) was developed to provide a detailed measurement of decoding skills which could be used in research on developmental dyslexia. Another purpose of the test is to provide a diagnostic-prescriptive instrument to be used in the evaluation of, and program planning for, children needing remedial reading. The test is…

  4. A VLSI design for a trace-back Viterbi decoder

    Science.gov (United States)

    Truong, T. K.; Shih, Ming-Tang; Reed, Irving S.; Satorius, E. H.

    1992-01-01

    A systolic Viterbi decoder for convolutional codes is developed which uses the trace-back method to reduce the amount of data needed to be stored in registers. It is shown that this new algorithm requires a smaller chip size and achieves a faster decoding time than other existing methods.

  5. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  6. Socialization Processes in Encoding and Decoding: Learning Effective Nonverbal Behavior.

    Science.gov (United States)

    Feldman, Robert S.; Coats, Erik

    This study examined the relationship of nonverbal encoding and decoding skills to the level of exposure to television. Subjects were children in second through sixth grade. Three nonverbal skills (decoding, spontaneous encoding, and posed encoding) were assessed for each of five emotions: anger, disgust, fear or surprise, happiness, and sadness.…

  7. Decoding Information in the Human Hippocampus: A User's Guide

    Science.gov (United States)

    Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.

    2012-01-01

    Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…

  8. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes....

  9. Building Bridges from the Decoding Interview to Teaching Practice

    Science.gov (United States)

    Pettit, Jennifer; Rathburn, Melanie; Calvert, Victoria; Lexier, Roberta; Underwood, Margot; Gleeson, Judy; Dean, Yasmin

    2017-01-01

    This chapter describes a multidisciplinary faculty self-study about reciprocity in service-learning. The study began with each coauthor participating in a Decoding interview. We describe how Decoding combined with collaborative self-study had a positive impact on our teaching practice.

  10. Decoding Technique of Concatenated Hadamard Codes and Its Performance

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The decoding technique of concatenated Hadamard codes and its performance are studied. Efficient soft-in/soft-out decoding algorithms based on the fast Hadamard transform are developed. Performance required by CDMA mobile or PCS speech services, e.g., BER = 10^-3, can be achieved at Eb/N0 = 0.9 dB using a short interleaving length of 192 bits.
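
The fast Hadamard transform at the heart of such soft-in/soft-out decoders correlates the received vector with every Hadamard codeword at once, in O(n log n) rather than O(n^2). A minimal decoding sketch; the length-8 example is illustrative only.

```python
def fht(v):
    """In-place fast (Walsh-)Hadamard transform of a length-2^m list."""
    n = len(v)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

# Soft decoding of one length-8 Hadamard codeword: map bits 0/1 to +1/-1,
# transform, and pick the row with the largest-magnitude correlation
# (the sign distinguishes a row from its complement).
rx = [1, -1, 1, -1, 1, -1, 1, -1]   # noiseless row 1 of the 8x8 Hadamard matrix
corr = fht(rx[:])
best = max(range(len(corr)), key=lambda i: abs(corr[i]))
print(best)  # → 1
```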

  11. Latent state-space models for neural decoding.

    Science.gov (United States)

    Aghagolzadeh, Mehdi; Truccolo, Wilson

    2014-01-01

    Ensembles of single-neurons in motor cortex can show strong low-dimensional collective dynamics. In this study, we explore an approach where neural decoding is applied to estimated low-dimensional dynamics instead of to the full recorded neuronal population. A latent state-space model (SSM) approach is used to estimate the low-dimensional neural dynamics from the measured spiking activity in a population of neurons. A second state-space model representation is then used to decode kinematics, via a Kalman filter, from the estimated low-dimensional dynamics. The latent SSM-based decoding approach is illustrated on neuronal activity recorded from primary motor cortex in a monkey performing naturalistic 3-D reach and grasp movements. Our analysis shows that 3-D reach decoding performance based on estimated low-dimensional dynamics is comparable to the decoding performance based on the full recorded neuronal population.
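
The second stage described above can be sketched as a standard Kalman filter mapping a (here one-dimensional) latent observation to position-velocity kinematics; all matrices below are invented toy values, not parameters fitted to neural data.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # position-velocity transition (dt = 0.1)
H = np.array([[1.0, 0.0]])          # the latent signal observes position
Q = 0.01 * np.eye(2)                # process noise covariance
R = np.array([[0.1]])               # observation noise covariance

def kalman_decode(obs):
    """Run a Kalman filter over scalar observations; return the position track."""
    x = np.zeros(2)                 # state estimate (position, velocity)
    P = np.eye(2)                   # estimate covariance
    track = []
    for z in obs:
        x = A @ x                   # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)   # update
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return track
```

Feeding a constant latent observation drives the position estimate toward that value while the velocity estimate settles near zero.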

  12. An efficient VLSI implementation of H.264/AVC entropy decoder

    Institute of Scientific and Technical Information of China (English)

    Jongsik PARK; Jeonhak MOON; Seongsoo LEE

    2010-01-01

    This paper proposes an efficient H.264/AVC entropy decoder. It requires no ROM/RAM in the fabrication process, which decreases fabrication cost and increases operation speed. This was achieved by optimizing lookup tables and internal buffers, which significantly improves area, speed, and power. The proposed entropy decoder does not exploit an embedded processor for bitstream manipulation, which also improves area, speed, and power. Its gate count and maximum operating frequency are 77515 gates and 175 MHz in a 0.18 um fabrication process, respectively. The proposed entropy decoder needs 2303 cycles on average for one macroblock decoding. It can run at 28 MHz to meet the real-time processing requirement for CIF-format video decoding in mobile applications.

  13. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    EEG-based brain state decoding has numerous applications. State-of-the-art decoding is based on processing of the multivariate sensor-space signal; however, evidence is mounting that EEG source reconstruction can assist decoding. EEG source imaging leads to high-dimensional representations … of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single-trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source imaging does not lead to an improved decoding. We design a distributed pipeline in which the classifier has access to brain-wide features, which in turn does lead to a 15% reduction in the error rate using source-space features. Hence, our work presents supporting evidence for the hypothesis that source …

  14. FPGA Prototyping of RNN Decoder for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Salcic Zoran

    2006-01-01

    Full Text Available This paper presents prototyping of a recurrent type neural network (RNN convolutional decoder using system-level design specification and design flow that enables easy mapping to the target FPGA architecture. Implementation and performance measurement results have shown that an RNN decoder for hard-decision decoding coupled with a simple hard-limiting neuron activation function results in a very low complexity, which easily fits into a standard Altera FPGA. Moreover, the design methodology allowed modeling of a complete testbed for prototyping RNN decoders in simulation and real-time environment (same FPGA), thus enabling evaluation of BER performance characteristics of the decoder for various conditions of the communication channel in real time.

  15. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  16. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  17. Multilevel Decoders Surpassing Belief Propagation on the Binary Symmetric Channel

    CERN Document Server

    Planjery, Shiva Kumar; Chilappagari, Shashi Kiran; Vasić, Bane

    2010-01-01

    In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but instead are derived using the knowledge of trapping sets. We show that the update rules can be derived to correct certain error patterns that are uncorrectable by algorithms such as BP and min-sum. In some cases even with a small message set, these decoders can guarantee correction of a higher number of errors than BP and min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC codes. They significantly outperform the BP and min-sum decoders, but more importantly, they achieve this at only a fraction of the complexity of the BP and min-sum decoders.

  18. Improved decoding of limb-state feedback from natural sensors.

    Science.gov (United States)

    Wagenaar, J B; Ventura, V; Weber, D J

    2009-01-01

    Limb state feedback is of great importance for achieving stable and adaptive control of FES neuroprostheses. A natural way to determine limb state is to measure and decode the activity of primary afferent neurons in the limb. The feasibility of doing so has been demonstrated by [1] and [2]. Despite positive results, these works share drawbacks associated with the use of reverse regression techniques for decoding the afferent neuronal signals. Decoding methods based on direct regression are now favored over reverse regression for decoding neural responses in higher regions of the central nervous system [3]. In this paper, we apply a direct regression approach to decode the movement of the hind limb of a cat from a population of primary afferent neurons. We show that this approach is more principled, more efficient, and more generalizable than reverse regression.
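
The contrast between the two approaches can be illustrated on synthetic data: reverse regression fits each afferent's tuning curve and inverts it per channel, while direct regression fits the limb state from all channels jointly, which on the same training data can never do worse in mean squared error. Everything below (linear tuning, noise levels, channel count) is an invented toy, not the paper's cat data.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 500, 6
x = rng.uniform(0.2, 1.0, T)                   # limb state, e.g. a joint angle
slopes = rng.uniform(0.5, 2.0, N)              # per-afferent linear tuning
rates = np.outer(x, slopes) + rng.normal(0, 0.2, (T, N))

# Direct regression: one joint affine fit, x ≈ rates @ w + b.
design = np.hstack([rates, np.ones((T, 1))])
w, *_ = np.linalg.lstsq(design, x, rcond=None)
x_direct = design @ w

# Reverse regression: fit each afferent's tuning curve r = c1*x + c0,
# invert it, and average the per-channel estimates.
estimates = []
for i in range(N):
    c1, c0 = np.polyfit(x, rates[:, i], 1)
    estimates.append((rates[:, i] - c0) / c1)
x_reverse = np.mean(estimates, axis=0)

print(np.mean((x_direct - x) ** 2) <= np.mean((x_reverse - x) ** 2))  # → True
```

The inequality holds by construction here, since the averaged inverted tuning curves are themselves an affine function of the rates, a special case of what the direct least-squares fit optimizes over.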

  19. Lattice Sequential Decoder for Coded MIMO Channel: Performance and Complexity Analysis

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    In this paper, the performance limit of lattice sequential decoder for lattice space-time coded MIMO channel is analysed. We determine the rates achievable by lattice coding and sequential decoding applied to such channel. The diversity-multiplexing tradeoff (DMT) under lattice sequential decoding is derived as a function of its parameter---the bias term. The bias parameter is critical for controlling the amount of computations required at the decoding stage. Achieving low decoding complexity requires increasing the value of the bias term. However, this is done at the expense of losing the optimal tradeoff of the channel. We show how such a decoder can bridge the gap between lattice decoder and low complexity decoders. Moreover, the computational complexity of lattice sequential decoder is analysed. Specifically, we derive the tail distribution of the decoder's computational complexity in the high signal-to-noise ratio regime. Similar to the conventional sequential decoder used in discrete memoryless channel,...

  20. Decoding suprathreshold stochastic resonance with optimal weights

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Liyan, E-mail: xuliyan@qdu.edu.cn [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Vladusich, Tony [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Duan, Fabing [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Gunn, Lachlan J.; Abbott, Derek [Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia); McDonnell, Mark D. [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia)

    2015-10-09

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels, optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible.
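
The least-squares fit described in the highlights can be sketched as follows; the array size, noise level, and signal distribution are invented, and the "unweighted" reference is an affine function of the plain mean of the comparator outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 15, 20000
x = rng.uniform(-1, 1, T)                       # analog input signal
noise = rng.normal(0, 0.5, (T, N))              # independent noise per comparator
resp = (x[:, None] + noise > 0).astype(float)   # identical zero thresholds (SSR)

# Optimal linear weights (plus intercept) by least-squares regression.
design = np.hstack([resp, np.ones((T, 1))])
w, *_ = np.linalg.lstsq(design, x, rcond=None)
mse_weighted = np.mean((design @ w - x) ** 2)

# Unweighted reference decoder: scaled mean of the comparator responses.
s = resp.mean(axis=1)
a, b = np.polyfit(s, x, 1)
mse_unweighted = np.mean((a * s + b - x) ** 2)

print(mse_weighted <= mse_unweighted)  # → True
```

With identical thresholds the optimal weights come out nearly equal, matching the paper's observation that the unweighted array is then close to optimal.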

  1. Implementation of Huffman Decoder on Fpga

    Directory of Open Access Journals (Sweden)

    Safia Amir Dahri

    2016-01-01

    Full Text Available Lossless data compression algorithms are widely used in data transmission, reception and storage systems to increase data rates and save space on storage devices. Nowadays, many algorithms are implemented in hardware to obtain the benefits of hardware realization; hardware implementation of algorithms, digital signal processing algorithms and filter realization is done on programmable devices, i.e., FPGAs. Among lossless data compression algorithms, the Huffman algorithm is most widely used because of its variable-length coding and many other benefits, and it appears in many software applications, e.g., zip and unzip utilities and communication systems. In this paper, a Huffman decoder is implemented on a Xilinx Spartan 3E board, programmed with the Xilinx tool Xilinx ISE 8.2i. The program is written in VHDL, and text data previously encoded by the Huffman algorithm is decoded on the hardware board. In order to visualize the output clearly in waveforms, the same code is simulated in ModelSim v6.4. The Huffman decoder is also implemented in MATLAB for verification of its operation. The FPGA is a configurable device which is efficient in all these respects; Huffman algorithms are implemented in text applications, image processing, video streaming and many other applications.
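
A software reference model of the bit-serial decoding the FPGA performs: consume the bitstream one bit at a time and emit a symbol whenever the accumulated bits match a codeword, which is exactly the tree walk a hardware state machine implements. The code table is an invented example, not the paper's.

```python
codes = {"a": "0", "b": "10", "c": "110", "d": "111"}   # invented prefix-free table

def huffman_decode(bits, codes):
    """Decode a bit string one bit per step, as the hardware walk does."""
    rev = {v: k for k, v in codes.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in rev:              # codeword complete: emit symbol, restart
            out.append(rev[buf])
            buf = ""
    assert buf == "", "bitstream ended in the middle of a codeword"
    return "".join(out)

print(huffman_decode("010110111", codes))  # → abcd
```

Because the table is prefix-free, no lookahead is needed: the first match is always the correct codeword boundary.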

  2. SDRAM bus schedule of HDTV video decoder

    Science.gov (United States)

    Wang, Hui; He, Yan L.; Yu, Lu

    2001-12-01

    In this paper, a time-division-multiplexed (TDM) task schedule for an HDTV video decoder is proposed. Three tasks share the bus: fetching decoded data from SDRAM for display (DIS), reading reference data from SDRAM for motion compensation (REF), and writing motion-compensated data back to SDRAM (WB). The proposed schedule is based on a novel four-bank interleaved SDRAM storage structure, which results in less read/write overhead. Two 64-Mbit SDRAMs (4 banks × 512K × 32 bit) are used. Compared with a two-bank structure, the four-bank storage strategy reads/writes data in 45% less time, so the required data rates for the three tasks are reduced. The TDM schedule is developed using round-robin scheduling and fixed slot allocation, with both MB slots and task slots. As a result, bus conflicts are avoided, and the buffer size is reduced by 48% compared with priority bus scheduling. Moreover, a compact bus schedule handles the worst case of stuffing, owing to the reduced task execution time. The buffer size is reduced and the control logic is simplified.

  3. The Differential Contributions of Auditory-Verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    Science.gov (United States)

    Squires, Katie Ellen

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…

  4. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  5. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically while being suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  6. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  7. Approximate Decoding Approaches for Network Coded Correlated Data

    CERN Document Server

    Park, Hyunggon; Frossard, Pascal

    2011-01-01

    This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad-hoc network topologies. The correlated data are encoded independently at the sensors, and network coding is employed at the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the source data similarity can be used at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters, and in particular the size of finite coding fields, on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between the information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the perfo...

  8. On Multiple Decoding Attempts for Reed-Solomon Codes

    CERN Document Server

    Nguyen, Phong S; Narayanan, Krishna R

    2010-01-01

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on the idea of using multiple trials of a simple RS decoding algorithm in combination with successively erasing or flipping a set of symbols or bits in each trial. In this paper, we present a framework based on rate-distortion (RD) theory to analyze such multiple-decoding algorithms for RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, it is shown that, for a single errors-and-erasures decoding trial, the condition for successful decoding is equivalent to the condition that the distortion is smaller than a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. We also consider an a...
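
    The "distortion below a fixed threshold" condition for a single errors-and-erasures trial corresponds to the classical bound 2e + s < d_min (e errors, s erasures); a tiny checker, using the familiar RS(255, 223) parameters purely as an example:

```python
def ee_success(n_errors, n_erasures, d_min):
    # Classical errors-and-erasures condition: decoding succeeds
    # whenever 2 * errors + erasures < minimum distance.
    return 2 * n_errors + n_erasures < d_min

d = 33                                  # RS(255, 223) has d_min = 33
print(ee_success(16, 0, d))             # 32 < 33 -> True
print(ee_success(10, 12, d))            # 32 < 33 -> True
print(ee_success(16, 1, d))             # 33 < 33 -> False
```

    Erasing a symbol thus trades one unit of erasure budget against half a unit of error budget, which is what the rate-distortion covering argument optimizes over multiple trials.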

  9. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information in the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to the traditional GIDNC scheme. Completion-time and decoding-delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC scheme improves the completion time and decoding delay performance of traditional GIDNC. © 2013 IEEE.
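
    The distinction between instantly decodable and non-instantly decodable packets can be illustrated with sets: an XOR combination is instantly decodable for a receiver iff it covers at most one of that receiver's missing packets, while a stored two-missing combination (an NIDP) becomes decodable once one of the two later arrives. The receivers and packet indices below are made up for illustration.

```python
def missing(combo, has):
    """Packets in the XOR combination that the receiver does not yet hold."""
    return combo - has

def instantly_decodable(combo, has):
    # At most one unknown packet in the XOR => it can be solved immediately
    # by XORing out all the already-held packets.
    return len(missing(combo, has)) <= 1

combo = {1, 3}                              # broadcast XOR of packets 1 and 3
print(instantly_decodable(combo, {1, 2}))   # missing {3} -> True
print(instantly_decodable(combo, {2}))      # missing {1, 3} -> False (an NIDP)

# O2-GIDNC idea: the second receiver stores the NIDP; after packet 1 later
# arrives, the stored combination has one unknown left and yields packet 3.
later_side_info = {2} | {1}
print(missing(combo, later_side_info))      # {3}
```

    Exploiting stored NIDPs in this way is what enlarges the set of useful coded transmissions beyond strictly instantly decodable ones.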

  10. Decoding Generalized Concatenated Codes Using Interleaved Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor

    2008-01-01

    Generalized Concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. As outer codes, we assume the most frequently used Reed-Solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. Decoding up to half the minimum distance of Generalized Concatenated codes is classically achieved by the Blokh-Zyablov-Dumer algorithm, which iteratively decodes by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. In this paper, a modified version of the Blokh-Zyablov-Dumer algorithm is proposed, which exploits the fact that a number of outer Reed-Solomon codes with average minimum distance d can be grouped into one single Interleaved Reed-Solomon code which can be decoded beyond d/2. This allows to skip a number of decoding iterations on the one...

  11. Coding and Decoding for the Dynamic Decode and Forward Relay Protocol

    CERN Document Server

    Kumar, K Raj

    2008-01-01

    We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman...

  12. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  13. Interference Alignment for Clustered Multicell Joint Decoding

    CERN Document Server

    Chatzinotas, Symeon

    2010-01-01

    Multicell joint processing has been proven to be very efficient in overcoming the interference-limited nature of the cellular paradigm. However, for reasons of practical implementation global multicell joint decoding is not feasible and thus clusters of cooperating Base Stations have to be considered. In this context, intercluster interference has to be mitigated in order to harvest the full potential of multicell joint processing. In this paper, four scenarios of intercluster interference are investigated, namely a) global multicell joint processing, b) interference alignment, c) resource division multiple access and d) cochannel interference allowance. Each scenario is modelled and analyzed using the per-cell ergodic sum-rate capacity as a figure of merit. In this process, a number of theorems are derived for analytically expressing the asymptotic eigenvalue distributions of the channel covariance matrices. The analysis is based on principles from Free Probability theory and especially properties in the R a...

  14. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  15. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
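
    For context, the Viterbi baseline that this record compares against can be sketched for the standard rate-1/2, constraint-length-3 code with generators (7, 5) in octal. This is a full-traceback hard-decision version, a simplification of both the fixed-delay real-time variant and the soft-decision setting discussed in the record.

```python
G = (0b111, 0b101)   # generator polynomials (7, 5) octal, constraint length 3

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; two zero bits flush the register."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]   # parity per generator
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi: minimize Hamming distance over the 4-state trellis."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]          # encoder starts in the all-zero state
    paths = [[], [], [], []]
    for t in range(n_bits + 2):           # +2 flushing steps
        r = received[2 * t: 2 * t + 2]
        new_m, new_p = [INF] * 4, [[], [], [], []]
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + sum(x != y for x, y in zip(r, expect))
                ns = reg >> 1
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[0][:n_bits]              # flushing ends in state 0; drop tail bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
coded[5] ^= 1                             # inject a single channel bit error
print(viterbi_decode(coded, len(msg)) == msg)   # prints True: error corrected
```

    The free distance of this code is 5, so any single bit error is always corrected; the minimal-BER algorithm of the record refines the per-bit decisions rather than the maximum-likelihood sequence decision shown here.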

  16. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  17. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec
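
    The connection between local decodability and PIR can be made concrete with the textbook two-server XOR scheme: each server answers with the XOR of a query subset that is uniformly random on its own, so neither server learns the queried index, yet the two answers combine to the desired bit. This is a toy sketch, not one of the constructions from the book.

```python
import random

def xor_subset(db, idxs):
    """Server-side answer: XOR of the requested database bits."""
    acc = 0
    for i in idxs:
        acc ^= db[i]
    return acc

def pir_read(db, i):
    """Toy 2-server PIR: each server sees a random subset on its own,
    so neither learns i; the XOR of the two answers recovers db[i]."""
    n = len(db)
    q1 = {j for j in range(n) if random.random() < 0.5}
    q2 = q1 ^ {i}                  # symmetric difference: flip membership of i
    return xor_subset(db, q1) ^ xor_subset(db, q2)

random.seed(7)
db = [random.randint(0, 1) for _ in range(32)]
print(all(pir_read(db, i) == db[i] for i in range(32)))   # prints True
```

    Correctness follows because the two queries differ only in index i, so every other bit cancels in the XOR; privacy holds because each query, viewed alone, is a uniformly random subset.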

  18. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond 4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respec

  19. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  20. Analysis and Design of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2012-01-01

    Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results...... message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived....

  1. Min-Max decoding for non binary LDPC codes

    CERN Document Server

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithm, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR-domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity implementation called the selective implementation, which makes Min-Max decoding very attractive for practical purposes.

  2. Fast-Group-Decodable STBCs via Codes over GF(4)

    CERN Document Server

    Prasad, N Lakshmi

    2010-01-01

    In this paper we construct low-decoding-complexity STBCs by using the Pauli matrices as linear dispersion matrices. In this case the Hurwitz-Radon orthogonality condition is shown to be easily checked by transferring the problem to the $\mathbb{F}_4$ domain. The problem of constructing low-decoding-complexity STBCs is shown to be equivalent to finding certain codes over $\mathbb{F}_4$. It is shown that almost all known low-complexity STBCs can be obtained by this approach. New codes are given that have the least known decoding complexity in particular ranges of rate.

  3. Interpolating and filtering decoding algorithm for convolution codes

    Directory of Open Access Journals (Sweden)

    O. O. Shpylka

    2010-01-01

    Full Text Available An interpolating and filtering decoding algorithm for convolutional codes has been synthesized under the maximum a posteriori probability criterion; it combines filtering of the coder state with interpolation of the information symbols over a sliding interval.

  4. Evolution of Neural Computations: Mantis Shrimp and Human Color Decoding

    Directory of Open Access Journals (Sweden)

    Qasim Zaidi

    2014-10-01

    Full Text Available Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding color identification.

  5. Evolution of neural computations: Mantis shrimp and human color decoding.

    Science.gov (United States)

    Zaidi, Qasim; Marshall, Justin; Thoen, Hanne; Conway, Bevil R

    2014-01-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding color identification.

  6. Construction and decoding of a class of algebraic geometry codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1989-01-01

    A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that were used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result is a decoding algorithm which turns out to be a generalization of the Peterson algorithm for decoding BCH codes.

  7. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-07-12

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. Copyright © 2016, Journal of Neurophysiology.
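
    The encoding roles discussed above can be caricatured with simple rate functions: monotonic neurons as sigmoids of level, nonmonotonic (level-tuned) neurons as bell-shaped tuning curves, with the level read out by nearest-template matching over the mixed population. All tuning parameters below are invented for illustration and are not fit to the marmoset data.

```python
import math

LEVELS = list(range(0, 81, 10))          # candidate sound levels in dB (hypothetical)

def monotonic(level, thresh):
    """Rate grows sigmoidally with level (monotonic neuron)."""
    return 1.0 / (1.0 + math.exp(-(level - thresh) / 8.0))

def nonmonotonic(level, best):
    """Rate peaks at a preferred level (level-tuned, nonmonotonic neuron)."""
    return math.exp(-((level - best) ** 2) / (2 * 12.0 ** 2))

# A mixed population: monotonic thresholds plus nonmonotonic best levels.
population = [lambda l, t=t: monotonic(l, t) for t in (20, 40, 60)] \
           + [lambda l, b=b: nonmonotonic(l, b) for b in (10, 30, 50, 70)]

def response(level):
    return [f(level) for f in population]

templates = {l: response(l) for l in LEVELS}

def decode(pattern):
    """Template matching: pick the level whose noiseless response is nearest."""
    return min(LEVELS, key=lambda l: sum((a - b) ** 2
                                         for a, b in zip(templates[l], pattern)))

print([decode(response(l)) for l in LEVELS])   # noiseless decoding is exact
```

    With noise added to the responses, one could compare pure monotonic, pure nonmonotonic, and mixed subpopulations, which is the kind of comparison the study performs with real A1 responses.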

  8. Multiresolutional encoding and decoding in embedded image and video coders

    Science.gov (United States)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.

  9. On Complexity, Energy- and Implementation-Efficiency of Channel Decoders

    CERN Document Server

    Kienle, Frank; Meyr, Heinrich

    2010-01-01

    Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most of the currently established efficiency metrics are based on counting operations, thus neglecting important issues like data and storage complexity. In this paper we introduce suitable energy and area efficiency metrics which resolve the aforementioned disadvantages: decoded information bits per unit energy and throughput per unit area. The efficiency metrics are assessed by various implementations of turbo decoders, LDPC decoders and convolutional decoders. New exploration methodologies are presented which permit an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. These exploration methodologies are based on efficiency trajectories rather than a single snapshot metric as done in state-of-the-art approaches.

  10. Tracing Precept against Self-Protective Tortious Decoder

    Institute of Scientific and Technical Information of China (English)

    Jie Tian; Xin-Fang Zhang; Yi-Lin Song; Wei Xiang

    2007-01-01

    Traceability precept is a broadcast encryption technique by which content suppliers can trace malicious authorized users who leak the decryption key to unauthorized users. To protect the data from eavesdropping, the content supplier encrypts the data and broadcasts the ciphertext that only its subscribers can decrypt. However, a traitor may clone his decoder and sell the pirate decoders for profit. The traitor can modify the private key and the decryption program inside the pirate decoder to avoid divulging his identity. Furthermore, some traitors may together fabricate a new legal private key that cannot be traced to its creators. In this paper, a renewed precept is therefore proposed that achieves both revocation at a different level of capacity in each distribution and black-box tracing against self-protective pirate decoders. A rigorous mathematical deduction shows that our algorithm possesses the claimed security property.

  11. Improved List Sphere Decoder for Multiple Antenna Systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An improved list sphere decoder (ILSD) is proposed based on the conventional list sphere decoder (LSD) and the reduced-complexity maximum-likelihood sphere-decoding algorithm. Unlike the conventional LSD with a fixed initial radius, the ILSD adopts an adaptive radius to accelerate list construction. Characterized by low complexity and radius insensitivity, the proposed algorithm makes iterative joint detection and decoding more practical in multiple-antenna systems. Simulation results show that the computational savings of the ILSD over the LSD become more apparent with more transmit antennas or larger constellations, with no performance degradation. Because the complexity of the ILSD algorithm remains almost invariant as the initial radius increases, the BER performance can be improved by selecting a sufficiently large radius.

  12. Low ML Decoding Complexity STBCs via Codes over GF(4)

    CERN Document Server

    Natarajan, Lakshmi Prasad

    2010-01-01

    In this paper, we give a new framework for constructing low-ML-decoding-complexity Space-Time Block Codes (STBCs) using codes over the finite field $\mathbb{F}_4$. Almost all known low-ML-decoding-complexity STBCs can be obtained via this approach. New full-diversity STBCs with low ML decoding complexity and the cubic shaping property are constructed, via codes over $\mathbb{F}_4$, for $N = 2^m$, $m \geq 1$, transmit antennas and rates $R > 1$ complex symbols per channel use. When $R = N$, the new STBCs are information-lossless as well. The new class of STBCs has the least known ML decoding complexity among all the codes available in the literature for a large set of $(N, R)$ pairs.

  13. Decoding of visual attention from LFP signals of macaque MT.

    Science.gov (United States)

    Esghaei, Moein; Daliri, Mohammad Reza

    2014-01-01

    The local field potential (LFP) has recently been widely used in brain-computer interfaces (BCI). Here we used power of LFP recorded from area MT of a macaque monkey to decode where the animal covertly attended. Support vector machines (SVM) were used to learn the pattern of power at different frequencies for attention to two possible positions. We found that LFP power at both low (<9 Hz) and high (31-120 Hz) frequencies contains sufficient information to decode the focus of attention. The highest decoding performance was found for gamma frequencies (31-120 Hz) and reached 82%. In contrast, low frequencies (<9 Hz) could help the classifier reach a higher decoding performance with a smaller amount of training data. Consequently, we suggest that low-frequency LFP can provide fast but coarse information regarding the focus of attention, while higher frequencies of the LFP deliver more accurate but less timely information about the focus of attention.
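    The decoding pipeline described above (band-limited LFP power as the feature, a trained classifier as the decoder) can be sketched on synthetic data. The sampling rate, the 60 Hz "gamma" component, and the nearest-class-mean threshold standing in for the SVM are all assumptions for illustration; only the 31-120 Hz band is taken from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1000.0                      # assumed sampling rate (Hz)
    t = np.arange(0, 1, 1 / fs)     # 1 s trials

    def band_power(x, lo, hi):
        """Mean spectral power of a trace in the [lo, hi] Hz band."""
        spec = np.abs(np.fft.rfft(x)) ** 2
        f = np.fft.rfftfreq(x.size, 1 / fs)
        return spec[(f >= lo) & (f <= hi)].mean()

    def trial(gamma_amp):
        # synthetic LFP: a 60 Hz gamma component plus broadband noise
        return gamma_amp * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 0.5, t.size)

    # class 0: attend-in (strong gamma); class 1: attend-out (weak gamma)
    train0 = [band_power(trial(2.0), 31, 120) for _ in range(20)]
    train1 = [band_power(trial(0.2), 31, 120) for _ in range(20)]
    thresh = (np.mean(train0) + np.mean(train1)) / 2  # nearest-mean boundary

    def decode(x):
        return 0 if band_power(x, 31, 120) > thresh else 1

    correct = sum(decode(trial(2.0)) == 0 for _ in range(20)) \
            + sum(decode(trial(0.2)) == 1 for _ in range(20))
    print(correct / 40)  # held-out accuracy on the toy data
    ```

    A real analysis would use an SVM over power at many frequencies, as in the paper; the sketch only shows why gamma-band power is a usable feature.
    
    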

  14. Decoding of visual attention from LFP signals of macaque MT.

    Directory of Open Access Journals (Sweden)

    Moein Esghaei

    The local field potential (LFP) has recently been widely used in brain-computer interfaces (BCI). Here we used power of LFP recorded from area MT of a macaque monkey to decode where the animal covertly attended. Support vector machines (SVM) were used to learn the pattern of power at different frequencies for attention to two possible positions. We found that LFP power at both low (<9 Hz) and high (31-120 Hz) frequencies contains sufficient information to decode the focus of attention. The highest decoding performance was found for gamma frequencies (31-120 Hz) and reached 82%. In contrast, low frequencies (<9 Hz) could help the classifier reach a higher decoding performance with a smaller amount of training data. Consequently, we suggest that low-frequency LFP can provide fast but coarse information regarding the focus of attention, while higher frequencies of the LFP deliver more accurate but less timely information about the focus of attention.

  15. Impact of Decoding Work within a Professional Program

    Science.gov (United States)

    Yeo, Michelle; Lafave, Mark; Westbrook, Khatija; McAllister, Jenelle; Valdez, Dennis; Eubank, Breda

    2017-01-01

    This chapter demonstrates how Decoding work can be used productively within a curriculum change process to help make design decisions based on a more nuanced understanding of student learning and the relationship of a professional program to the field.

  16. Learning from Decoding across Disciplines and within Communities of Practice

    Science.gov (United States)

    Miller-Young, Janice; Boman, Jennifer

    2017-01-01

    This final chapter synthesizes the findings and implications derived from applying the Decoding the Disciplines model across disciplines and within communities of practice. We make practical suggestions for teachers and researchers who wish to apply and extend this work.

  17. VLSI architecture for a Reed-Solomon decoder

    Science.gov (United States)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor)

    1992-01-01

    A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells, each of which contains identical standard-basis finite-field multipliers that are programmable between 10- and 8-bit operation. A desired number of basic building blocks may be assembled to provide an RS decoder of any syndrome subcell size that is programmable between 10- and 8-bit operation.
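    The syndrome subcells above evaluate the received polynomial over a finite field. A software sketch of that arithmetic is below; it fixes the 8-bit case with the reducing polynomial 0x11D (common in RS codecs, but an assumption here, since the patent's cells are programmable between 10- and 8-bit fields), and the syndrome evaluation points alpha^1..alpha^s follow one common convention.

    ```python
    def gf_mul(a, b, poly=0x11D):
        """Multiply two elements of GF(2^8), reducing by x^8+x^4+x^3+x^2+1
        (0x11D). Shift-and-add, the same structure a hardware multiplier uses."""
        r = 0
        while b:
            if b & 1:
                r ^= a               # conditional add (XOR) of the partial product
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= poly            # reduce back into 8 bits
        return r

    def syndromes(received, num_syn, alpha=2):
        """Evaluate the received polynomial at alpha^1 .. alpha^num_syn
        (Horner's rule); all-zero syndromes mean no detectable errors."""
        out, x = [], alpha
        for _ in range(num_syn):
            s = 0
            for c in received:       # received[0] is the highest-degree coefficient
                s = gf_mul(s, x) ^ c
            out.append(s)
            x = gf_mul(x, alpha)
        return out

    print(syndromes([0] * 10, 4))    # error-free word -> [0, 0, 0, 0]
    ```
    
    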

  18. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    OpenAIRE

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with register exchange path memory structure, for interleaved convolutional code. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in state metrics memory (or path metrics memory) and path memory (or survival memory) with delays, interleaved Viterbi decoder ...

  19. Evolution of neural computations: Mantis shrimp and human color decoding

    OpenAIRE

    Qasim Zaidi; Justin Marshall; Hanne Thoen; Conway, Bevil R.

    2014-01-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy.

  20. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  1. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore, we calculate an upper bound on the probability of getting more than one codeword as output.

  2. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  3. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    Science.gov (United States)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.
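    The test procedure above matches a hardware decoder's output against a software reference simulation fed with known noisy symbols. A minimal such reference decoder is sketched below. Note the BVD decodes a constraint-length-15, rate-1/6 code; this sketch uses the small rate-1/2, K=3 (octal 7,5) code purely to keep the trellis tiny, and the test message is made up.

    ```python
    G = (0b111, 0b101)   # generator taps: rate-1/2, constraint length 3 (octal 7, 5)

    def encode(bits):
        """Convolutionally encode with a 2-bit shift-register state."""
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state                    # [newest, prev, oldest]
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        """Hard-decision Viterbi decoding: keep the best path into each state."""
        paths = {0: (0, [])}                          # state -> (metric, decoded bits)
        for i in range(0, len(received), 2):
            r, nxt = received[i:i + 2], {}
            for state, (metric, bits) in paths.items():
                for b in (0, 1):                      # add-compare-select step
                    reg = (b << 2) | state
                    sym = [bin(reg & g).count("1") & 1 for g in G]
                    m = metric + sum(x != y for x, y in zip(sym, r))
                    ns = reg >> 1
                    if ns not in nxt or m < nxt[ns][0]:
                        nxt[ns] = (m, bits + [b])
            paths = nxt
        return min(paths.values(), key=lambda v: v[0])[1]

    msg = [1, 0, 1, 1, 0, 0, 1]
    noisy = encode(msg)
    noisy[3] ^= 1                                     # inject one channel-bit error
    print(viterbi(noisy) == msg)                      # True: the error is corrected
    ```
    
    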

  4. Efficient VLSI architecture of CAVLC decoder with power optimized

    Institute of Scientific and Technical Information of China (English)

    CHEN Guang-hua; HU Deng-ji; ZHANG Jin-yi; ZHENG Wei-feng; ZENG Wei-min

    2009-01-01

    This paper presents an efficient VLSI architecture of the context-based adaptive variable length coding (CAVLC) decoder, with power optimized, for the H.264/advanced video coding (AVC) standard. In the proposed design, according to the regularity of the codewords, a first-one detector is used to solve the low-efficiency and high-power-dissipation problems of the traditional table-searching method. Considering the relevance of the data used in decoding the run_before syntax element, arithmetic operation is combined with a finite state machine (FSM), which achieves higher decoding efficiency. Following the CAVLC decoding flow, clock gating is employed at the module level and the register level respectively, which reduces the overall dynamic power dissipation by 43%. The proposed design can decode every syntax element in one clock cycle. When the design is synthesized at a clock constraint of 100 MHz, the synthesis result shows that it costs 11,300 gates under a 0.25 μm CMOS technology, which meets the demand of real-time decoding in the H.264/AVC standard.

  5. Decoding Schemes for FBMC with Single-Delay STTC

    Science.gov (United States)

    Lélé, Chrislin; Le Ruyet, Didier

    2010-12-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  6. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Chrislin Lélé

    2010-01-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  7. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Lélé Chrislin

    2010-01-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  8. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
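    The core IDNC property used above is that a transmitted XOR combination is "instantly decodable" for a receiver exactly when it covers one packet that receiver is missing. A minimal check (the helper name and packet ids are illustrative, not from the paper):

    ```python
    def instantly_decodable(combo, has):
        """A XOR combination `combo` (a set of packet ids) is instantly
        decodable for a receiver holding `has` iff exactly one packet in the
        combination is still missing: XORing out the known packets leaves it."""
        missing = combo - has
        return len(missing) == 1

    # receiver holds packets {1, 2}: the combination 2 XOR 3 yields packet 3
    print(instantly_decodable({2, 3}, {1, 2}))   # True
    # but 3 XOR 4 leaves two unknowns and must be buffered or discarded
    print(instantly_decodable({3, 4}, {1, 2}))   # False
    ```

    Lossy feedback makes `has` uncertain at the sender, which is precisely why the paper resorts to maximum-likelihood state estimates and partially blind selections.
    
    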

  9. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

    While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching decoder structures such as the Sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature, with widely varying performance, area, and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding techniques and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
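    The algorithmic contrast with depth-first sphere decoding is that K-best search sweeps the tree breadth-first, keeping only the K lowest-metric survivors per level, which bounds the work per level at the cost of losing the ML guarantee. A toy sketch (the 2x2 BPSK example and function name are illustrative, not the benchmarked architectures):

    ```python
    import numpy as np

    def k_best_decode(y, H, alphabet, K=4):
        """Breadth-first K-best tree search: expand every survivor by all
        symbols, keep the K smallest partial distances at each level."""
        Q, R = np.linalg.qr(H)              # H = QR, R upper triangular
        z = Q.T @ y
        n = H.shape[1]
        survivors = [(0.0, [])]             # (partial distance, symbols for levels k..n-1)
        for k in range(n - 1, -1, -1):
            cands = []
            for d2, syms in survivors:
                for s in alphabet:
                    x = [s] + syms
                    e = z[k] - R[k, k:] @ np.array(x)   # partial distance term
                    cands.append((d2 + e * e, x))
            survivors = sorted(cands, key=lambda c: c[0])[:K]   # sort-and-prune
        return np.array(survivors[0][1])

    x_hat = k_best_decode(np.array([0.9, -1.1]), np.eye(2), [-1.0, 1.0], K=2)
    print(x_hat)  # [ 1. -1.]
    ```

    Hardware implementations replace the full sort with partial or approximate sorting networks, which is one of the main area/delay trade-offs the paper benchmarks.
    
    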

  10. Iterative Decoding of Parallel Concatenated Block Codes and Coset Based MAP Decoding Algorithm for F24 Code

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-dimensional concatenation scheme for block codes is introduced, in which information symbols are interleaved and re-encoded more than once. It provides a convenient platform for designing high-performance codes with flexible interleaver sizes. Coset-based MAP soft-in/soft-out decoding algorithms are presented for the F24 code. Simulation results show that the proposed coding scheme can achieve high coding gain with flexible interleaver length and very low decoding complexity.

  11. Singer product apertures: a coded aperture system with a fast decoding algorithm

    Science.gov (United States)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
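    The "standard decoding" the abstract benchmarks against is correlation of the shadowgram with a decoding array. A 1-D sketch using the (7,3,1) cyclic difference set {1,2,4}, which is of Singer type, is below; the scene values are made up, and this illustrates only the baseline correlation method, not the paper's induction or direct vector decoding.

    ```python
    import numpy as np

    # 1-D cyclic aperture from the (7,3,1) difference set {1, 2, 4}:
    # v=7 positions, k=3 open holes, and every nonzero cyclic shift
    # overlaps the open set in exactly lam=1 position
    v, k, lam = 7, 3, 1
    aperture = np.zeros(v)
    aperture[[1, 2, 4]] = 1.0
    A = np.fft.fft(aperture)

    obj = np.array([0.0, 5.0, 0.0, 0.0, 2.0, 0.0, 0.0])   # toy source scene

    # shadowgram: circular convolution of the scene with the hole pattern
    shadow = np.real(np.fft.ifft(np.fft.fft(obj) * A))

    # standard decoding: correlate the shadowgram with the aperture itself;
    # the difference-set property makes this (k-lam)*obj plus a flat pedestal
    decoded = np.real(np.fft.ifft(np.fft.fft(shadow) * np.conj(A)))
    recovered = (decoded - lam * obj.sum()) / (k - lam)
    print(np.allclose(recovered, obj))  # True: the scene is recovered exactly
    ```

    The O(v^2) cost of this correlation (or O(v log v) via the FFT) is the baseline that the induction and direct vector methods of the paper improve upon.
    
    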

  12. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER 10^-6 at 2 dB over an AWGN channel while performing a maximum of 18 decoding iterations.
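    The iterative check-then-update loop that an LDPC decoder parallelizes can be illustrated with hard-decision bit-flipping. The parity-check matrix below is the small Hamming(7,4) code standing in for a large sparse (3,6)-regular matrix, and bit-flipping stands in for the soft message-passing a real decoder would run; both substitutions are for brevity.

    ```python
    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],   # toy parity-check matrix
                  [1, 0, 1, 1, 0, 1, 0],   # (Hamming(7,4); a real LDPC H is
                  [0, 1, 1, 1, 0, 0, 1]])  # large, sparse and (3,6)-regular)

    def bit_flip_decode(r, H, max_iter=20):
        """Iterative hard-decision decoding: while some parity checks fail,
        flip the bit participating in the most unsatisfied checks."""
        r = r.copy()
        for _ in range(max_iter):
            syndrome = H @ r % 2
            if not syndrome.any():
                return r                     # all parity checks satisfied
            fails = H.T @ syndrome           # unsatisfied-check count per bit
            r[np.argmax(fails)] ^= 1         # flip the worst offender
        return r

    rx = np.array([0, 0, 0, 1, 0, 0, 0])     # all-zero codeword, bit 3 flipped
    print(bit_flip_decode(rx, H))            # [0 0 0 0 0 0 0]
    ```

    In the partly parallel architecture of the paper, groups of such check and bit updates are scheduled onto shared hardware units rather than fully unrolled.
    
    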

  13. Efficient blind decoders for additive spread spectrum embedding based data hiding

    Science.gov (United States)

    Valizadeh, Amir; Wang, Z. Jane

    2012-12-01

    This article investigates efficient blind watermark decoding approaches for hidden messages embedded into host images, within the framework of additive spread spectrum (SS) embedding for data hiding. We study SS embedding in both the discrete cosine transform and the discrete Fourier transform (DFT) domains. The contributions of this article are multiple-fold: first, we show that the conventional SS scheme cannot be applied directly to the magnitudes of the DFT, and thus we present a modified SS scheme, for which the optimal maximum likelihood (ML) decoder based on the Weibull distribution is derived. Secondly, we investigate improved spread spectrum (ISS) embedding, an improved technique over the traditional additive SS, propose a modified ISS scheme for information hiding in the magnitudes of the DFT coefficients, and derive the optimal ML decoders for ISS embedding. We also provide thorough theoretical error probability analysis for the aforementioned decoders. Thirdly, sub-optimal decoders, including the local optimum decoder (LOD), the generalized maximum likelihood (GML) decoder, and the linear minimum mean square error (LMMSE) decoder, are investigated to reduce the prior information required at the receiver side, and their theoretical decoding performances are derived. Based on decoding performances and the prior information required for decoding, we discuss the preferred host domain and the preferred decoder for additive SS-based data hiding under different situations. Extensive simulations are conducted to illustrate the decoding performances of the presented decoders.

  14. Statistical coding and decoding of heartbeat intervals.

    Science.gov (United States)

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  15. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed

    2016-02-26

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmissions strategies and the classical IDNC scheme. © 2015 IEEE.

  16. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  17. fNIRS-based online deception decoding

    Science.gov (United States)

    Hu, Xiao-Su; Hong, Keum-Shik; Ge, Shuzhi Sam

    2012-04-01

    Deception involves complex neural processes in the brain. Different techniques have been used to study and understand brain mechanisms during deception. Moreover, efforts have been made to develop schemes that can detect and differentiate deception and truth-telling. In this paper, a functional near-infrared spectroscopy (fNIRS)-based online brain deception decoding framework is developed. Deploying dual-wavelength fNIRS, we interrogate 16 locations in the forehead while eight able-bodied adults perform deception and truth-telling scenarios separately. By combining preprocessed oxy-hemoglobin and deoxy-hemoglobin signals, we develop subject-specific classifiers using the support vector machine. Deception and truth-telling states are classified correctly in seven out of eight subjects. A control experiment is also conducted to verify the deception-related hemodynamic response. The average classification accuracy across these seven subjects is 83.44%. The obtained result suggests that the applicability of fNIRS as a brain imaging technique for online deception detection is very promising.

  18. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  19. Fast mental states decoding in mixed reality.

    Science.gov (United States)

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information on ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain state discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
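    The 5 s window shifted every 200 ms used for dynamic single-trial classification can be generated with a simple slicing helper. The 250 Hz rate and 8-channel array below are made-up stand-ins for the EEG recordings; only the window and step lengths come from the abstract.

    ```python
    import numpy as np

    def sliding_windows(x, fs, win_s=5.0, step_s=0.2):
        """Yield (start_time, segment) pairs: win_s-second windows over the
        last axis of x, advanced by step_s seconds, as used for dynamic
        single-trial classification."""
        win, step = int(win_s * fs), int(step_s * fs)
        for start in range(0, x.shape[-1] - win + 1, step):
            yield start / fs, x[..., start:start + win]

    fs = 250                                              # assumed sampling rate
    eeg = np.random.default_rng(1).normal(size=(8, fs * 10))  # 8 channels, 10 s
    segs = list(sliding_windows(eeg, fs))
    print(len(segs))  # 26 overlapping 5 s windows in a 10 s recording
    ```

    Each yielded segment would then be reduced to features and passed to the trained LDA or SVM classifier for one prediction per 200 ms step.
    
    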

  20. Infinity-Norm Sphere-Decoding

    CERN Document Server

    Seethaler, Dominik

    2008-01-01

    The most promising approaches for efficient detection in multiple-input multiple-output (MIMO) wireless systems are based on sphere-decoding (SD). The conventional (and optimum) norm that is used to conduct the tree traversal step in SD is the l-two norm. It was, however, recently shown that using the l-infinity norm instead significantly reduces the VLSI implementation complexity of SD at only a marginal performance loss. These savings are due to a reduction in the length of the critical path and the silicon area of the circuit, but also, as observed previously through simulation results, a consequence of a reduction in the computational (algorithmic) complexity. The aim of this paper is an analytical performance and computational complexity analysis of l-infinity norm SD. For i.i.d. Rayleigh fading MIMO channels, we show that l-infinity norm SD achieves full diversity order with an asymptotic SNR gap, compared to l-two norm SD, that increases at most linearly in the number of receive antennas. Moreover, we ...

  1. Cognitive Wyner Networks with Clustered Decoding

    CERN Document Server

    Lapidoth, Amos; Shamai, Shlomo; Wigger, Michele

    2012-01-01

    We study the uplink of linear cellular models featuring short-range inter-cell interference. That is, we study $K$-transmitter/$K$-receiver interference networks where the transmitters lie on a line and the receivers on a parallel line, each receiver opposite its corresponding transmitter. We assume short-range interference, by which we mean that the signal sent by a given transmitter is interfered only by the signal sent by its left neighbor (we refer to this setup as the asymmetric network) or by the signals sent by both its neighbors (we refer to this setup as the symmetric network). We assume that each transmitter has side-information consisting of the messages of the $t_\\ell$ transmitters to its left and the $t_r$ transmitters to its right, and that each receiver can decode its message using the signals received at its own antenna, at the $r_\\ell$ receiving antennas to its left, and at the $r_r$ receiving antennas to its right. We provide upper and lower bounds on the multiplexing gain of these ...

  2. Why Hawking Radiation Cannot Be Decoded

    CERN Document Server

    Ong, Yen Chin; Chen, Pisin

    2014-01-01

    One of the great difficulties in the theory of black hole evaporation is that the most decisive phenomena tend to occur when the black hole is extremely hot: that is, when the physics is most poorly understood. Fortunately, a crucial step in the Harlow-Hayden approach to the firewall paradox, concerning the time available for decoding of Hawking radiation emanating from charged AdS black holes, can be made to work without relying on the unknown physics of black holes with extremely high temperatures; in fact, it relies on the properties of cold black holes. Here we clarify this surprising point. The approach is based on ideas borrowed from applications of the AdS/CFT correspondence to the quark-gluon plasma. Firewalls aside, our work presents a detailed analysis of the thermodynamics and evolution of evaporating charged AdS black holes with flat event horizons. We show that, in one way or another, these black holes are always eventually destroyed in a time which, while long by normal standards, is short relat...

  3. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    Full Text Available The combination of Brain-Computer Interface technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during a mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.

  4. PERFORMANCE OF A NEW DECODING METHOD USED IN OPEN-LOOP ALL-OPTICAL CHAOTIC COMMUNICATION SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Liu Huijie; Feng Jiuchao

    2011-01-01

    A new decoding method with a decoder is used in an open-loop all-optical chaotic communication system under the strong-injection condition. The performance of the new decoding method is numerically investigated by comparing it with the common decoding method without a decoder. For the new decoding method, two cases are analyzed, according to whether or not the output of the decoder is adjusted by its input to the receiver. The results indicate that the decoding quality can be improved by this adjustment. Meanwhile, the injection strength of the decoder can be restricted to a certain range. The adjusted new decoding method with a decoder achieves better decoding quality than the method without a decoder when the bit rate of the message is under 5 Gb/s; however, a stronger injection for the receiver is needed. Moreover, the new decoding method broadens the range of injection strength acceptable for good decoding quality. Different message encryption techniques are tested, and the result is similar to that of the common decoding method, indicating that a message encoded using Chaotic Modulation (CM) is best recovered by the new decoding method, owing to the essence of this encryption technique.

  5. A real-time MPEG software decoder using a portable message-passing library

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment under which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.
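A hedged sketch of the master/worker structure such a decoder might use, with Python queues standing in for MPL/p4/MPI send and receive calls; the "slice decoding" below is a toy run-length expansion, not actual MPEG decoding, and all names are assumptions for this example.

```python
import threading, queue

def decode_slice(slice_data):
    # toy stand-in for IDCT/motion compensation on one slice
    return [val for val, run in slice_data for _ in range(run)]

def parallel_decode(slices, n_workers=3):
    tasks, results = queue.Queue(), queue.Queue()
    for idx, s in enumerate(slices):         # "send" work messages to workers
        tasks.put((idx, s))

    def worker():
        while True:
            try:
                idx, s = tasks.get_nowait()  # "recv" a task message
            except queue.Empty:
                return
            results.put((idx, decode_slice(s)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()

    frame = [None] * len(slices)             # master reassembles the frame
    while not results.empty():
        idx, rows = results.get()
        frame[idx] = rows
    return frame
```

Because slices are independent decoding units, load balancing reduces to keeping the task queue non-empty, which is the balancing issue the paper discusses.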

  6. Reconfigurable and Parallelized Network Coding Decoder for VANETs

    Directory of Open Access Journals (Sweden)

    Sunwoo Kim

    2012-01-01

    Full Text Available Network coding is a promising technique for data communications in wired and wireless networks. However, it places an additional computing overhead on the receiving node in exchange for the improved bandwidth. This paper proposes an FPGA-based reconfigurable and parallelized network coding decoder for embedded systems, especially for vehicular ad hoc networks. In our design, a rapid decoding process is achieved by exploiting parallelism in the coefficient vector operations. The proposed decoder is implemented on a modern Xilinx Virtex-5 device and its performance is evaluated against software decoding on various embedded processors. The performance on four different sizes of the coefficient matrix is measured, and decoding throughputs of 18.3 Mbps for the size 16 × 16 and 6.5 Mbps for 128 × 128 have been achieved at an operating frequency of 64.5 MHz. Compared to the recent TEGRA 250 processor, the result obtained with the 128 × 128 coefficient matrix reaches a speedup of up to 5.06.
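The receiver-side work that such a decoder parallelizes is Gaussian elimination of the coefficient matrix. A minimal sketch, assuming GF(2) coefficients and XOR packet combination for brevity (practical designs typically operate over GF(2^8)); the function name is an assumption for this example.

```python
def nc_decode(coeffs, packets):
    """coeffs: list of GF(2) coefficient rows; packets: coded payloads as ints.
    Returns the original packets, or None if the matrix is rank-deficient."""
    n = len(coeffs)
    A = [row[:] for row in coeffs]
    p = packets[:]
    for col in range(n):
        # find a pivot row for this column
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return None                      # singular: wait for more packets
        A[col], A[pivot] = A[pivot], A[col]
        p[col], p[pivot] = p[pivot], p[col]
        for r in range(n):                   # eliminate the column elsewhere
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
                p[r] ^= p[col]
    return p
```

Each coefficient-row elimination is independent across columns of the payload, which is the parallelism the FPGA design exploits.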

  7. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  8. Combinatorial limitations of a strong form of list decoding

    CERN Document Server

    Guruswami, Venkatesan

    2012-01-01

    We prove the following results concerning the combinatorics of list decoding, motivated by the exponential gap between the known upper bound (of $O(1/\gamma)$) and lower bound (of $\Omega_p(\log (1/\gamma))$) for the list-size needed to decode up to radius $p$ with rate $\gamma$ away from capacity, i.e., $1-\h(p)-\gamma$ (here $p\in (0,1/2)$ and $\gamma > 0$). We prove that in any binary code $C \subseteq \{0,1\}^n$ of rate $1-\h(p)-\gamma$, there must exist a set $\mathcal{L} \subset C$ of $\Omega_p(1/\sqrt{\gamma})$ codewords such that the average distance of the points in $\mathcal{L}$ from their centroid is at most $pn$. In other words, there must exist $\Omega_p(1/\sqrt{\gamma})$ codewords with low "average radius". The motivation for this result is that it gives a list-size lower bound for a strong notion of list decoding which has implicitly been used in the previous negative results for list decoding. (The usual notion of list decoding corresponds to replacing {\em average} radius by the {\em min...

  9. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method advances at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e., the intersections of each two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  10. Performance evaluation of H.264 decoder on different processors

    Directory of Open Access Journals (Sweden)

    H.S.Prasantha

    2010-08-01

    Full Text Available H.264/AVC (Advanced Video Coding) is the newest video coding standard of the Moving Picture Experts Group. The decoder is standardized by imposing restrictions on the bit stream and syntax, and by defining the process of decoding syntax elements such that every decoder conforming to the standard will produce similar output when an encoded bit stream is provided as input. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, real-time video conferencing, direct-broadcast TV (television), Blu-ray Disc, DVB (Digital Video Broadcasting), streaming video and others. The paper proposes to port the H.264/AVC decoder to various processors such as TI DSP (Digital Signal Processor), ARM (Advanced RISC Machines) and P4 (Pentium) processors. The paper also proposes to analyze and compare video quality metrics for different encoded video sequences, and to investigate the decoder performance on the different processors with and without the deblocking filter, comparing performance based on different video quality measures.

  11. Measuring Integrated Information from the Decoding Perspective.

    Directory of Open Access Journals (Sweden)

    Masafumi Oizumi

    2016-01-01

    Full Text Available Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.

  12. Measuring Integrated Information from the Decoding Perspective.

    Science.gov (United States)

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.

  13. On the Optimality of Successive Decoding in Compress-and-Forward Relay Schemes

    CERN Document Server

    Wu, Xiugang

    2010-01-01

    In the classical compress-and-forward relay scheme developed by (Cover and El Gamal, 1979), the decoding process operates in a successive way: the destination first decodes the compressed observation of the relay, and then decodes the original message of the source. Recently, two modified compress-and-forward relay schemes were proposed, and in both of them, the destination jointly decodes the compressed observation of the relay and the original message, instead of successively. Such a modification on the decoding process was motivated by realizing that it is generally easier to decode the compressed observation jointly with the original message, and more importantly, the original message can be decoded even without completely decoding the compressed observation. However, the question remains whether this freedom of choosing a higher compression rate at the relay improves the achievable rate of the original message. It has been shown in (El Gamal and Kim, 2010) that the answer is negative in the single relay ...

  14. A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

    Directory of Open Access Journals (Sweden)

    Johnny W. H. Kao

    2009-01-01

    Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was designed in order to compare the Bit Error Rate (BER) performance of the RNN decoder with the conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm achieves the same bit error rate with lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data-rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, which together outperform the conventional VA.

  15. Map Algorithms for Decoding Linear Block codes Based on Sectionalized Trellis Diagrams

    Science.gov (United States)

    Lin, Shu

    1999-01-01

    The MAP algorithm is a trellis-based maximum a posteriori probability decoding algorithm. It is the heart of the turbo (or iterative) decoding which achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as Log-MAP and Max-Log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bi-directional and parallel MAP decodings.
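A hedged illustration of the relation between the variants named above: Log-MAP evaluates the exact Jacobian logarithm max*(a, b), while Max-Log-MAP drops the correction term, trading accuracy for complexity. The values used below are arbitrary.

```python
import math

def max_star(a, b):
    # exact Jacobian logarithm used by Log-MAP:
    # log(e^a + e^b) = max(a, b) + log(1 + e^-|a - b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    # Max-Log-MAP approximation: drop the correction term
    return max(a, b)
```

The correction term log(1 + e^{-|a-b|}) never exceeds log 2, which bounds the per-operation error introduced by the Max-Log-MAP simplification.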

  16. A Concatenated ML Decoder for ST/SFBC-OFDM Systems in Double Selective Fading Channels

    Institute of Scientific and Technical Information of China (English)

    李明齐; 张文军

    2004-01-01

    This paper presents a concatenated maximum-likelihood (ML) decoder for space-time/space-frequency block coded orthogonal frequency division multiplexing (ST/SFBC-OFDM) systems in doubly selective fading channels. The proposed decoder first detects space-time or space-frequency codeword elements separately. Then, according to the coarsely estimated codeword elements, ML decoding is performed in a smaller constellation element set to search for the final codeword. It is proved that the proposed decoder has optimal performance if and only if the subchannels are constant during a codeword interval. The simulation results show that the performance of the proposed decoder is close to that of the optimal ML decoder in severe Doppler and delay spread channels, while its complexity is much lower than that of the optimal ML decoder.

  17. A Discrete Time Markov Chain Model for High Throughput Bidirectional Fano Decoders

    CERN Document Server

    Xu, Ran; Morris, Kevin; Kocak, Taskin

    2010-01-01

    The bidirectional Fano algorithm (BFA) can achieve at least two times decoding throughput compared to the conventional unidirectional Fano algorithm (UFA). In this paper, bidirectional Fano decoding is examined from the queuing theory perspective. A Discrete Time Markov Chain (DTMC) is employed to model the BFA decoder with a finite input buffer. The relationship between the input data rate, the input buffer size and the clock speed of the BFA decoder is established. The DTMC based modelling can be used in designing a high throughput parallel BFA decoding system. It is shown that there is a tradeoff between the number of BFA decoders and the input buffer size, and an optimal input buffer size can be chosen to minimize the hardware complexity for a target decoding throughput in designing a high throughput parallel BFA decoding system.
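A minimal sketch of this kind of queuing analysis, assuming a slotted-time birth-death chain: in each slot a symbol arrives with probability p and the decoder consumes one with probability q, with input buffer size B. The model and parameters are illustrative, not the authors' exact DTMC.

```python
def stationary(P, iters=20000):
    """Stationary distribution of a DTMC by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def buffer_chain(p, q, B):
    """Transition matrix over buffer occupancies 0..B."""
    n = B + 1
    P = [[0.0] * n for _ in range(n)]
    for k in range(n):
        up   = p * (1 - q) if k < B else 0.0   # arrival without service
        down = q * (1 - p) if k > 0 else 0.0   # service without arrival
        if k < B: P[k][k + 1] = up
        if k > 0: P[k][k - 1] = down
        P[k][k] = 1.0 - up - down              # both or neither: stay
    return P
```

The stationary distribution then yields buffer-overflow and starvation probabilities as functions of the input data rate and decoder clock speed, which is the tradeoff the DTMC model captures.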

  18. Multistep Linear Programming Approaches for Decoding Low-Density Parity-Check Codes

    Institute of Scientific and Technical Information of China (English)

    LIU Haiyang; MA Lianrong; CHEN Jie

    2009-01-01

    The problem of improving the performance of linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered in this paper. A multistep linear programming (MLP) algorithm was developed for decoding LDPC codes at the cost of a slight increase in computational complexity. The MLP decoder adaptively adds new constraints which are compatible with a selected check node to refine the results when an error is reported by the original LP decoder. The MLP decoder result is shown to have the maximum-likelihood (ML) certificate property. Simulations with moderate-block-length LDPC codes suggest that the MLP decoder gives better performance than both the original LP decoder and the conventional sum-product (SP) decoder.

  19. A Low Power Viterbi Decoder for Trellis Coded Modulation System

    Directory of Open Access Journals (Sweden)

    M. Jansi Rani

    2014-02-01

    Full Text Available Forward Error Correction (FEC) schemes are an essential component of wireless communication systems. Convolutional codes are employed to implement FEC, but the complexity of the corresponding decoders increases exponentially with the constraint length. Present wireless standards such as third-generation (3G) systems, GSM, 802.11a and 802.16 utilize some configuration of convolutional coding. Convolutional encoding with Viterbi decoding is a powerful method for forward error correction, and the Viterbi algorithm is the most extensively employed decoding algorithm for convolutional codes. The main aim of this project is to design an FPGA-based Viterbi algorithm which encrypts/decrypts the data. In this project the encryption/decryption algorithm is designed and programmed into the FPGA.
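A hedged sketch of convolutional encoding with Viterbi decoding for the textbook rate-1/2, constraint-length-3 code with generator polynomials (7, 5) in octal; this generic software instance is for illustration only and is unrelated to the paper's FPGA design.

```python
G = [0b111, 0b101]                 # generator polynomials, K = 3

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state     # new bit plus two memory bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    paths = {0: (0, [])}           # state -> (Hamming metric, decoded bits)
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        nxt = {}
        for state, (metric, bits) in paths.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric + sum(x != y for x, y in zip(expect, r))
                s2 = reg >> 1
                if s2 not in nxt or m < nxt[s2][0]:   # keep survivor path
                    nxt[s2] = (m, bits + [b])
        paths = nxt
    return min(paths.values())[1]  # best survivor over all end states
```

Flipping a single transmitted bit and decoding still recovers the message, since this code's free distance of 5 allows correction of up to two errors within a constraint span.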

  20. Analysis of Minimal LDPC Decoder System on a Chip Implementation

    Directory of Open Access Journals (Sweden)

    T. Palenik

    2015-09-01

    Full Text Available This paper presents a practical method of potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC codes with one, with the intention of saving as much memory as required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC. The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for utilization in a Software Defined Radio (SDR platform. Besides the analysis of the effects of necessary variable-node value fixation during the Belief Propagation (BP decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.
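For orientation, a hedged sketch of a hard-decision bit-flipping decoder, a much simpler relative of the Belief Propagation decoder discussed above; the parity-check matrix used in the test is the toy Hamming (7,4) matrix, not a standard-defined QC-LDPC code, and all names are assumptions for this example.

```python
def bit_flip_decode(H, word, max_iters=10):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is zero."""
    w = word[:]
    for _ in range(max_iters):
        syndrome = [sum(h * x for h, x in zip(row, w)) % 2 for row in H]
        if not any(syndrome):
            return w                        # all checks satisfied
        # count failed checks touching each bit, flip the worst offender
        fails = [sum(s for s, row in zip(syndrome, H) if row[j])
                 for j in range(len(w))]
        w[max(range(len(w)), key=fails.__getitem__)] ^= 1
    return None                             # gave up: decoding failure
```

BP replaces these hard counts with soft log-likelihood messages along the same Tanner-graph edges, which is where the memory cost analyzed in the paper arises.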

  1. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    Science.gov (United States)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-05-01

    Multiple-valued logic is a promising approach to reduce the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology, since it is well suited to the encoded realization of binary logic functions through its grouping of 2 bits together into quaternary values. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we initially design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the scales applied in this paper are based on nanometric area.

  2. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    Science.gov (United States)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-02-01

    Multiple-valued logic is a promising approach to reduce the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology, since it is well suited to the encoded realization of binary logic functions through its grouping of 2 bits together into quaternary values. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we initially design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have a lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the scales applied in this paper are based on nanometric area.

  3. SERS decoding of micro gold shells moving in microfluidic systems.

    Science.gov (United States)

    Lee, Saram; Joo, Segyeong; Park, Sejin; Kim, Soyoun; Kim, Hee Chan; Chung, Taek Dong

    2010-05-01

    In this study, in situ surface-enhanced Raman scattering (SERS) decoding was demonstrated in microfluidic chips using novel thin micro gold shells modified with Raman tags. The micro gold shells were fabricated by electroless gold plating on PMMA beads with a diameter of 15 μm. These shells were carefully optimized to produce the maximum SERS intensity, which minimized the exposure time for quick and safe decoding. The shell surfaces produced well-defined SERS spectra even at an extremely short exposure time, 1 ms, for a single micro gold shell combined with Raman tags such as 2-naphthalenethiol and benzenethiol. Consecutive SERS spectra from a variety of combinations of Raman tags were successfully acquired from micro gold shells moving in 25 μm deep and 75 μm wide channels on a glass microfluidic chip. The proposed functionalized micro gold shells demonstrate the potential of an on-chip microfluidic SERS decoding strategy for micro suspension arrays.

  4. HIGH THROUGHPUT OF MAP PROCESSOR USING PIPELINE WINDOW DECODING

    Directory of Open Access Journals (Sweden)

    P. Nithya

    2012-11-01

    Full Text Available Turbo codes are among the most efficient error-correcting codes, approaching the Shannon limit. High throughput in a turbo decoder can be achieved by parallelizing several Soft-Input Soft-Output (SISO) units. In this way, multiple SISO decoders work on the same data frame at the same time, and the delivered soft outputs can be split into three terms: the soft channel value, the a priori input, and the extrinsic value. The extrinsic value is used for the next iteration. We present a high-throughput Max-Log-MAP processor that supports both Single-Binary (SB) and Double-Binary (DB) convolutional turbo codes. Decoding these codes, however, requires iterative processing with a high computation rate, and serial processing techniques incur long latency. Pipeline window (PW) decoding is therefore introduced to support arbitrary frame sizes with high throughput and reduced latency.
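A hedged toy illustration of the soft-output split described above: each SISO output LLR decomposes as L_out = L_ch + L_a + L_e, and only the extrinsic term L_e is passed to the other SISO half as its a priori input. The numbers and function name are made up for this example.

```python
def extrinsic(L_out, L_ch, L_a):
    # L_out = L_ch + L_a + L_e  =>  L_e = L_out - L_ch - L_a
    return [o - c - a for o, c, a in zip(L_out, L_ch, L_a)]
```

Exchanging only the extrinsic part (rather than the full LLR) prevents each half-iteration from feeding its own information back to itself.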

  5. Modified Suboptimal Iterative Decoding for Regular Repeat- Accumulate Coded Signals

    Directory of Open Access Journals (Sweden)

    Muhammad Thamer Nesr

    2017-05-01

    Full Text Available In this work, two algorithms are suggested to improve the performance of systematic Repeat-Accumulate (RA) decoding. The first inserts pilot symbols into the data stream entering the encoder; the positions where pilots are inserted are chosen so as to improve the minimum Hamming distance and/or to reduce the error coefficients of the code. The second proposed algorithm utilizes the inserted pilots to estimate scaling (correction) factors. A two-dimensional correction factor is suggested in order to enhance the performance of traditional Minimum-Sum (MS) decoding of regular repeat-accumulate codes. An adaptive method for obtaining the correction factors is achieved by calculating the mean-square difference between the values of the received pilots and the a posteriori bit- and check-node data related to them, as produced by the MS decoder.
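A hedged sketch of what a scaled (corrected) min-sum check-node update looks like, i.e., the quantity such a pilot-based estimator is designed to tune adaptively; alpha = 0.75 is a common illustrative choice in the literature, not the paper's estimated value.

```python
import math

def check_node_update(llrs, alpha=0.75):
    """Corrected min-sum: for each edge, the sign product and the scaled
    minimum magnitude of the *other* incoming LLRs."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = math.prod(1 if v >= 0 else -1 for v in others)
        out.append(alpha * sign * min(abs(v) for v in others))
    return out
```

Plain min-sum (alpha = 1) overestimates check-node magnitudes relative to sum-product; the correction factor compensates for that bias.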

  6. Joint scheduling and resource allocation for multiple video decoding tasks

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2008-01-01

    In this paper we propose a joint resource allocation and scheduling algorithm for video decoding on a resource-constrained system. By decomposing a multimedia task into decoding jobs using quality-driven priority classes, we demonstrate using queuing theoretic analysis that significant power savings can be achieved under small video quality degradation without requiring the encoder to adapt its transmitted bitstream. Based on this scheduling algorithm, we propose an algorithm for maximizing the sum of video qualities in a multiple task environment, while minimizing system energy consumption, without requiring tasks to reveal information about their performances to the system or to other potentially exploitative applications. Importantly, we offer a method to optimize the performance of multiple video decoding tasks on an energy-constrained system, while protecting private information about the system and the applications.

  7. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    Full Text Available In this study, the focus was on the use of a ternary tree over a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary tree codes was implemented. To reduce the memory size and speed up the search for a symbol in a Huffman tree, we exploited the properties of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of a Huffman ternary tree. In the first algorithm we find the starting and ending addresses of each code in order to determine its length, and in the second algorithm we decode the ternary tree code using a binary search method.
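A hedged sketch of the length-first decoding idea, shown for the binary canonical case for brevity (the paper's ternary variant is analogous): only per-length first codes and a canonically ordered symbol table are stored, so the decoder first identifies the code length and then locates the symbol by offset arithmetic rather than by walking a tree. Function names are assumptions for this example.

```python
def make_canonical(lengths):
    """lengths: {symbol: code length}. Builds the per-length tables."""
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    max_len = max(lengths.values())
    count = [0] * (max_len + 1)
    for s in symbols:
        count[lengths[s]] += 1
    first_code, first_index = [0] * (max_len + 1), [0] * (max_len + 1)
    code = idx = 0
    for L in range(1, max_len + 1):
        code <<= 1                      # canonical numbering rule
        first_code[L], first_index[L] = code, idx
        code += count[L]
        idx += count[L]
    return first_code, first_index, count, symbols

def decode(bits, first_code, first_index, count, symbols):
    out, value, L = [], 0, 0
    for b in bits:
        value, L = (value << 1) | b, L + 1
        # a valid length-L code lies in [first_code[L], first_code[L]+count[L])
        if L < len(count) and 0 <= value - first_code[L] < count[L]:
            out.append(symbols[first_index[L] + value - first_code[L]])
            value = L = 0
    return out
```

Only the length tables and the symbol array are stored, which is the memory saving over an explicit tree.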

  8. Error-correction coding and decoding: bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  9. Sparsity-Aware Sphere Decoding: Algorithms and Complexity Analysis

    Science.gov (United States)

    Barik, Somsubhra; Vikalo, Haris

    2014-05-01

    Integer least-squares problems, concerned with solving a system of equations where the components of the unknown vector are integer-valued, arise in a wide range of applications. In many scenarios the unknown vector is sparse, i.e., a large fraction of its entries are zero. Examples include applications in wireless communications, digital fingerprinting, and array-comparative genomic hybridization systems. Sphere decoding, commonly used for solving integer least-squares problems, can utilize the knowledge about sparsity of the unknown vector to perform computationally efficient search for the solution. In this paper, we formulate and analyze the sparsity-aware sphere decoding algorithm that imposes $\\ell_0$-norm constraint on the admissible solution. Analytical expressions for the expected complexity of the algorithm for alphabets typical of sparse channel estimation and source allocation applications are derived and validated through extensive simulations. The results demonstrate superior performance and speed of sparsity-aware sphere decoder compared to the conventional sparsity-unaware sphere decoding algorithm. Moreover, variance of the complexity of the sparsity-aware sphere decoding algorithm for binary alphabets is derived. The search space of the proposed algorithm can be further reduced by imposing lower bounds on the value of the objective function. The algorithm is modified to allow for such a lower bounding technique and simulations illustrating efficacy of the method are presented. Performance of the algorithm is demonstrated in an application to sparse channel estimation, where it is shown that sparsity-aware sphere decoder performs close to theoretical lower limits.
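
    The pruned tree search can be sketched as a small depth-first sphere decoder that cuts branches on both the accumulated squared distance and a running nonzero count; the ternary alphabet and sparsity level k below are illustrative choices, not the paper's setup:

```python
import numpy as np

# Sketch: depth-first sphere decoding with an l0 (sparsity) constraint.
# Finds x in alphabet^n, with at most k nonzero entries, minimizing
# ||y - Hx||^2.

def sparse_sphere_decode(H, y, alphabet=(-1, 0, 1), k=2):
    n = H.shape[1]
    Q, R = np.linalg.qr(H)               # reduce to an upper-triangular system
    z = Q.T @ y
    best = {"dist": np.inf, "x": None}
    x = np.zeros(n)

    def search(i, dist, nonzeros):
        if dist >= best["dist"]:         # radius (distance) pruning
            return
        if i < 0:
            best["dist"], best["x"] = dist, x.copy()
            return
        for s in alphabet:
            nz = nonzeros + (s != 0)
            if nz > k:                   # sparsity (l0) pruning
                continue
            x[i] = s
            r = z[i] - R[i, i:] @ x[i:]  # residual at level i of the system
            search(i - 1, dist + r * r, nz)

    search(n - 1, 0.0, 0)
    return best["x"], best["dist"]
```

    The two pruning rules together are what distinguish the sparsity-aware search from a conventional sphere decoder: branches that exceed the sparsity budget are discarded before their distance is ever evaluated.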

  10. Polytope Representations for Linear-Programming Decoding of Non-Binary Linear Codes

    CERN Document Server

    Skachek, Vitaly; Byrne, Eimear; Greferath, Marcus

    2007-01-01

    In previous work, we demonstrated how decoding of a non-binary linear code could be formulated as a linear-programming problem. In this paper, we study different polytopes for use with linear-programming decoding, and show that for many classes of codes these polytopes yield a complexity advantage for decoding. These representations lead to polynomial-time decoders for a wide variety of classical non-binary linear codes.

  11. Algebraic Fast-Decodable Relay Codes for Distributed Communications

    CERN Document Server

    Hollanti, Camilla

    2012-01-01

    In this paper, fast-decodable lattice code constructions are designed for the nonorthogonal amplify-and-forward (NAF) multiple-input multiple-output (MIMO) channel. The constructions are based on different types of algebraic structures, e.g. quaternion division algebras. When satisfying certain properties, these algebras provide us with codes whose structure naturally reduces the decoding complexity. The complexity can be further reduced by shortening the block length, i.e., by considering rectangular codes called less than minimum delay (LMD) codes.

  12. New Iterated Decoding Algorithm Based on Differential Frequency Hopping System

    Institute of Scientific and Technical Information of China (English)

    LIANG Fu-lin; LUO Wei-xiong

    2005-01-01

    A new iterated decoding algorithm is proposed for a differential frequency hopping (DFH) encoder concatenated with a multi-frequency shift-keying (MFSK) modulator. According to the characteristics of the frequency hopping (FH) pattern trellis produced by the DFH function, maximum a posteriori (MAP) probability theory is applied to realize its iterative decoding. Further, the initial conditions of the new MAP-based iterative algorithm are modified for better performance. Finally, simulation results compared with those of traditional algorithms show good anti-interference performance.

  13. Joint Estimation and Decoding of Space-Time Trellis Codes

    Directory of Open Access Journals (Sweden)

    Zhang Jianqiu

    2002-01-01

    Full Text Available We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS, for joint estimation and decoding of space-time trellis codes (STTC. First, we provide background on SIS, and then we discuss its application to space-time trellis code (STTC systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. We used a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling. We have shown by simulations that the design is valid.

  14. Decoding Brain States Based on Magnetoencephalography From Prespecified Cortical Regions.

    Science.gov (United States)

    Zhang, Jinyin; Li, Xin; Foldes, Stephen T; Wang, Wei; Collinger, Jennifer L; Weber, Douglas J; Bagić, Anto

    2016-01-01

    Brain state decoding based on whole-head MEG has been extensively studied over the past decade. Recent MEG applications pose an emerging need of decoding brain states based on MEG signals originating from prespecified cortical regions. Toward this goal, we propose a novel region-of-interest-constrained discriminant analysis algorithm (RDA) in this paper. RDA integrates linear classification and beamspace transformation into a unified framework by formulating a constrained optimization problem. Our experimental results based on human subjects demonstrate that RDA can efficiently extract the discriminant pattern from prespecified cortical regions to accurately distinguish different brain states.

  15. EXIT Chart Analysis of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2007-01-01

    Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can be extended to channels with larger output alphabets. By increasing the output alphabet from hard decisions to four symbols, a gain of more than 1.0 dB is achieved using optimized codes. For this code optimization, the mixing property of EXIT functions has to be modified to the case of binary message-passing…

  16. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    Science.gov (United States)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-control FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by
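
    The "optimal EMG pattern" step admits a compact sketch: given a muscle pulling-direction model M (endpoint force per unit activation of each muscle), the activation pattern that attains a target force f with minimum overall activity is the minimum-norm solution of M e = f. The pulling directions below are hypothetical, and this sketch ignores the nonnegativity of real muscle activity:

```python
import numpy as np

# Minimum-activity EMG pattern:  min ||e||^2  subject to  M e = f.
# Pulling directions are illustrative, not the monkeys' actual musculature.

angles = np.deg2rad([0, 90, 180, 270, 45])       # assumed pulling directions
M = np.vstack([np.cos(angles), np.sin(angles)])  # 2 x 5 endpoint-force model

def optimal_emg(f):
    # The pseudo-inverse gives the minimum-norm exact solution of M e = f.
    return np.linalg.pinv(M) @ f
```

    For a pure-extension target f = (1, 0), the returned pattern reproduces the target force exactly while spreading activity across the contributing muscles; the target patterns obtained this way are what the gradient-descent decoder training compares its EMG predictions against.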

  17. Conventional Tanner Graph for Recursive Convolutional Codes and Associated Decoding

    Institute of Scientific and Technical Information of China (English)

    SUN Hong

    2001-01-01

    A different representation of recursive systematic convolutional (RSC) codes is proposed. This representation can be realized by a conventional Tanner graph. The graph becomes a tree by introducing a hidden edge. It is shown that the sum-product algorithm applied to this graph model is equivalent to the BCJR algorithm for turbo decoding, with lower computational complexity. The message-passing chain of the BCJR algorithm is presented more exactly in the graph. In addition, the proposed representation of RSC codes provides an efficient method to set up the trellis, and the conventional Tanner graph for RSC codes directly provides the architecture for decoding.

  18. Optimal encoding and decoding of a spin direction

    CERN Document Server

    Bagán, E; Brey, A; Muñoz-Tàpia, R; Tarrach, Rolf

    2001-01-01

    For a system of N spins 1/2 there are quantum states that can encode a direction in an intrinsic way. Information on this direction can later be decoded by means of a quantum measurement. We present here the optimal encoding and decoding procedure using the fidelity as a figure of merit. We compute the maximal fidelity and prove that it is directly related to the largest zeroes of the Legendre and Jacobi polynomials. We show that this maximal fidelity approaches unity quadratically in 1/N. We also discuss this result in terms of the dimension of the encoding Hilbert space.

  19. PERFORMANCE OF THREE STAGE TURBO-EQUALIZATION-DECODING

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya

    2003-01-01

    An increasing demand for high data rate transmission and protection over bandlimited channels with severe inter-symbol interference has resulted in a flurry of activity to improve channel equalization. In conjunction with equalization, channel coding and decoding can be employed to improve system performance. In this letter, the performance of three-stage turbo equalization-decoding employing the log maximum a posteriori probability algorithm is experimentally evaluated using a fading simulator. The BER is evaluated for various information-sequence and interleaver sizes, taking into account that the communication medium is a noisy inter-symbol interference channel.

  20. Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application

    Science.gov (United States)

    Zaibi, Sonia; Zribi, Amin; Pyndiah, Ramesh; Aloui, Nadia

    2012-12-01

    Motivated by recent results in Joint Source/Channel coding and decoding, we consider the decoding problem of Arithmetic Codes (AC). In fact, in this article we provide different approaches which allow one to unify the arithmetic decoding and error correction tasks. A novel length-constrained arithmetic decoding algorithm based on Maximum A Posteriori sequence estimation is proposed. The latter is based on soft-input decoding using a priori knowledge of the source-symbol sequence and the compressed bit-stream lengths. Performance in the case of transmission over an Additive White Gaussian Noise channel is evaluated in terms of Packet Error Rate. Simulation results show that the proposed decoding algorithm leads to significant performance gain while exhibiting very low complexity. The proposed soft input arithmetic decoder can also generate additional information regarding the reliability of the compressed bit-stream components. We consider the serial concatenation of the AC with a Recursive Systematic Convolutional Code, and perform iterative decoding. We show that, compared to tandem and to trellis-based Soft-Input Soft-Output decoding schemes, the proposed decoder exhibits the best performance/complexity tradeoff. Finally, the practical relevance of the presented iterative decoding system is validated under an image transmission scheme based on the JPEG 2000 standard and excellent results in terms of decoded image quality are obtained.
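
    The interval mechanics underlying arithmetic coding, which the soft-input decoder augments with channel reliabilities, can be sketched with a toy floating-point coder. Real codecs, including the one decoded above, use integer intervals with renormalization; the source probabilities here are made up:

```python
# Toy floating-point arithmetic coder/decoder for short messages.

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
cum, lo_acc = {}, 0.0
for sym, p in probs.items():             # cumulative interval per symbol
    cum[sym] = (lo_acc, lo_acc + p)
    lo_acc += p

def encode(msg):
    lo, hi = 0.0, 1.0
    for sym in msg:                      # narrow the interval per symbol
        span = hi - lo
        lo, hi = lo + span * cum[sym][0], lo + span * cum[sym][1]
    return (lo + hi) / 2                 # any point inside [lo, hi) will do

def decode(x, n):
    out = []
    for _ in range(n):
        for sym, (cl, ch) in cum.items():
            if cl <= x < ch:
                out.append(sym)
                x = (x - cl) / (ch - cl) # rescale inside the chosen interval
                break
    return "".join(out)
```

    Because a single corrupted bit moves the code point into a different nested interval, every later symbol can decode incorrectly, which is why the article's length constraint and a-priori information are so valuable for error recovery.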

  1. 47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Two-tone Attention Signal encoder and decoder... SYSTEM (EAS) General § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention... Attention Signal decoder will no longer be required and the two-tone Attention Signal will be used...

  2. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  3. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs  are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance....
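
    For the 2 x 2 case with A = [[1, 1], [0, 1]], the construction reduces to the classic (u | u + v) form and can be checked by brute force; the nested binary codes below are illustrative, not from the paper:

```python
import itertools
import numpy as np

# Sketch of a matrix-product code [C1 C2] . A over GF(2) with the
# non-singular-by-columns matrix A = [[1, 1], [0, 1]].
# Nested codes: C1 = [4,3] even-weight code, C2 = [4,1] repetition code.

G1 = np.array([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])  # C1
G2 = np.array([[1, 1, 1, 1]])                              # C2, C2 within C1

def codewords(G):
    k = G.shape[0]
    return [np.array(m) @ G % 2 for m in itertools.product([0, 1], repeat=k)]

# With this A, each codeword is (c1 | c1 + c2).
product = {tuple(np.concatenate([c1, (c1 + c2) % 2]))
           for c1 in codewords(G1) for c2 in codewords(G2)}

dmin = min(sum(c) for c in product if any(c))
```

    Here d1 = 2 and d2 = 4, and the enumerated minimum distance min(2·d1, d2) = 4 matches the formula for the (u | u + v) construction.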

  4. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  5. Homogeneous Interpolation Problem and Key Equation for Decoding Reed-Solomon Codes

    Institute of Scientific and Technical Information of China (English)

    忻鼎稼

    1994-01-01

    The concept of a homogeneous interpolation problem (HIP) over fields is introduced. It is discovered that solving HIP over finite fields is equivalent to decoding Reed-Solomon (RS) codes. The Welch-Berlekamp algorithm for decoding RS codes is derived; besides, by introducing the concept of an incomplete locator of error patterns, an algorithm called incomplete iterative decoding is established.

  6. TC81220F (HAWK) MPEG 2 system decoder LSI; MPEG2 system decoder LSI TC81220F (HAWK)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Satellite broadcasting, cable broadcasting, and ground wave broadcasting have been digitized gradually. In video and audio data communication, the MPEG2 (Moving Picture Experts Group 2) compression and decompression technology is important. A system LSI that includes an MPEG2 decoder is required for the receiving set (set-top box) of each type of broadcasting. To reduce the system cost, Toshiba developed a TX3904 MCU (microcontroller) that controls the system, a transport processor that selects the packet-multiplexed data, and the TC81220F (HAWK), which integrates the MPEG2 audio and video decoders in one chip. (translated by NEDO)

  7. Polar Coding with CRC-Aided List Decoding

    Science.gov (United States)

    2015-08-01

    decoder estimates û1, . . . , ûN, one at a time, in order. For convenience, we introduce some non-standard notation. For any k ≤ N and any sequence of... recursive formula, but the true probability cannot be computed efficiently. The estimate differs from the true probability because the recursive formula

  8. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance with large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where the state space is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model), which is in general much less than the total number of states. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  9. Complete ML Decoding of the (73,45) PG Code

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    Our recent proof of the completeness of decoding by list bit flipping is reviewed. The proof is based on an enumeration of all cosets of low weight in terms of their minimum weight and syndrome weight. By using a geometric description of the error patterns we characterize all remaining cosets....

  10. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven differe

  11. Spread codes and spread decoding in network coding

    OpenAIRE

    Manganiello, F; Gorla, E.; Rosenthal, J.

    2008-01-01

    In this paper we introduce the class of spread codes for the use in random network coding. Spread codes are based on the construction of spreads in finite projective geometry. The major contribution of the paper is an efficient decoding algorithm of spread codes up to half the minimum distance.

  12. The Effectiveness of Dictionary Examples in Decoding: The Case of ...

    African Journals Online (AJOL)

    rbr

    COBUILD English Language Dictionary, relies entirely on a corpus for its exam- ... corpus examples, while intermediate learners can learn from 'controlled' exam- ... learners' dictionaries, a possible indication of their difficulties with them. But .... tionary are mostly used for decoding rather than encoding linguistic activities.

  13. Relationships between grammatical encoding and decoding : an experimental psycholinguistic study

    NARCIS (Netherlands)

    Olsthoorn, Nomi Maria

    2007-01-01

    Although usually considered distinct processes, grammatical encoding and decoding have many theoretical and empirical commonalities. In two series of experiments relationships between the two processes are explored. The first series uses a dual task (edited reading aloud (ERA)) paradigm to test the

  14. Phonological Awareness and Decoding Skills in Deaf Adolescents

    Science.gov (United States)

    Gravenstede, L.

    2009-01-01

    This study investigated the phonological awareness skills of a group of deaf adolescents and how these skills correlated with decoding skills (single word and non-word reading) and receptive vocabulary. Twenty, congenitally profoundly deaf adolescents with at least average nonverbal cognitive skills were tested on a range of phonological awareness…

  15. Real Time Decoding of Color Symbol for Optical Positioning System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2015-01-01

    Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
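
    The component-labeling stage, realized as a streaming algorithm in FPGA hardware in the paper, computes the same result as a software flood fill; a minimal BFS sketch over a toy binary image:

```python
from collections import deque

# Connected-component labeling by BFS flood fill (4-connectivity).
# Image values: 1 = foreground, 0 = background.

def label_components(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                next_label += 1                  # start a new component
                labels[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, n_components = label_components(img)
```

    Per-component features (area, bounding box, centroid) can then be accumulated from the label image, which is the data the paper forwards from the FPGA to the microcontroller.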

  16. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
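
    A minimal single-bit GDBF sketch, using the bipolar objective f(x) = Σ_k x_k y_k + Σ_m Π_{k∈N(m)} x_k and flipping the bit whose per-bit contribution is smallest; the small parity-check matrix here is a toy example, not an optimized LDPC code:

```python
import numpy as np

# Single-bit gradient descent bit flipping (GDBF) over bipolar values.

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def gdbf_decode(y, H, max_iters=50):
    x = np.sign(y)                                   # bipolar hard decisions
    for _ in range(max_iters):
        # bipolar syndrome: +1 means the check is satisfied
        syn = np.array([np.prod(x[row == 1]) for row in H])
        if np.all(syn == 1):
            return x
        delta = x * y + H.T @ syn                    # per-bit objective terms
        k = np.argmin(delta)                         # steepest-descent choice
        x[k] = -x[k]                                 # flip the worst bit
    return x
```

    The flip rule needs only the channel value and the adjacent check states for each bit, which is what makes GDBF attractive for simple hardware compared with full sum-product decoding.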

  17. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.; Blokland, Y.M.; Sadakata, M.; Desain, P.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven differe

  18. Real Time Decoding of Color Symbol for Optical Positioning System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2015-01-01

    Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal-to-noise ratio. The symbol is designed to carry eight bits of information. Real-time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and the microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and the sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.

  19. Decoding a combined amplitude modulated and frequency modulated signal

    DEFF Research Database (Denmark)

    2015-01-01

    The present disclosure relates to a method for decoding a combined AM/FM encoded signal, comprising the steps of: combining said encoded optical signal with light from a local oscillator configured with a local oscillator frequency; converting the combined local oscillator and encoded optical sig...

  20. Method for Viterbi decoding of large constraint length convolutional codes

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and read out the stored branch decisions of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
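
    The path-metric update and trace-back can be sketched for the standard rate-1/2, K = 3 code with octal generators (7, 5). For brevity this sketch stores all survivors and traces back once at the end; the method above instead traces back after every NK time units to bound memory:

```python
# Hard-decision Viterbi decoding of the rate-1/2, K = 3 convolutional
# code with generator polynomials (7, 5) in octal.

G = [0b111, 0b101]                   # generator polynomials, K = 3

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state       # [current bit | 2-bit state]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    INF = float("inf")
    metrics = [0, INF, INF, INF]     # encoder starts in state 0
    survivors = []                   # per step: (prev_state, input_bit)
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new, step = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1        # next state
                out = [bin(reg & g).count("1") & 1 for g in G]
                m = metrics[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new[ns]:      # keep the survivor into state ns
                    new[ns], step[ns] = m, (s, b)
        metrics = new
        survivors.append(step)
    s = metrics.index(min(metrics))  # best final state
    bits = []
    for step in reversed(survivors): # trace back to the first time unit
        s, b = step[s]
        bits.append(b)
    return bits[::-1]
```

    In the block method described in the record, only the survivor entries for the current NK interval would be kept, and the trace-back loop would run once per interval instead of once per message.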

  1. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  2. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.; Blokland, Y.M.; Sadakata, M.; Desain, P.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  3. Error Locked Encoder and Decoder for Nanomemory Application

    Directory of Open Access Journals (Sweden)

    Y. Sharath

    2014-03-01

    Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
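
    At its core the fault-secure detector is a syndrome check s = H·c mod 2: any nonzero syndrome flags an error introduced by the encoder, the memory, or the detector itself. A sketch with the [7,4] Hamming code rather than an EG-LDPC code, purely for brevity:

```python
import numpy as np

# Syndrome-based error detection for a systematic-by-position Hamming code.

H = np.array([[1, 0, 1, 0, 1, 0, 1],    # parity-check matrix: column j is
              [0, 1, 1, 0, 0, 1, 1],    # the binary representation of j
              [0, 0, 0, 1, 1, 1, 1]])

def encode(d):
    """Hamming(7,4): data bits at positions 3, 5, 6, 7 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                   # covers positions 4, 5, 6, 7
    return np.array([p1, p2, d1, p3, d2, d3, d4])

def detector(c):
    return bool(np.all(H @ c % 2 == 0))  # True: no error detected
```

    The fault-secure property the paper proves for EG-LDPC codes is stronger than this sketch shows: the detector remains reliable even when a bounded number of faults hit the detector circuitry itself, thanks to the codes' redundant parity checks.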

  4. Decoding the Disciplines: An Approach to Scientific Thinking

    Science.gov (United States)

    Pinnow, Eleni

    2016-01-01

    The Decoding the Disciplines methodology aims to teach students to think like experts in discipline-specific tasks. The central aspect of the methodology is to identify a bottleneck in the course content: a particular topic that a substantial number of students struggle to master. The current study compared the efficacy of standard lecture and…

  5. Sub-quadratic decoding of one-point Hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power...

  6. The Fluid Reading Primer: Animated Decoding Support for Emergent Readers.

    Science.gov (United States)

    Zellweger, Polle T.; Mackinlay, Jock D.

    A prototype application called the Fluid Reading Primer was developed to help emergent readers with the process of decoding written words into their spoken forms. The Fluid Reading Primer is part of a larger research project called Fluid Documents, which is exploring the use of interactive animation of typography to show additional information in…

  7. A Novel Decoder for Unknown Diversity Channels Employing Space-Time Codes

    Directory of Open Access Journals (Sweden)

    Erez Elona

    2002-01-01

    Full Text Available We suggest new decoding techniques for diversity channels employing space-time codes (STC) when the channel coefficients are unknown to both transmitter and receiver. Most of the existing decoders for unknown diversity channels employ a training sequence in order to estimate the channel. These decoders use the estimates of the channel coefficients in order to perform maximum likelihood (ML) decoding. We suggest an efficient implementation of the generalized likelihood ratio test (GLRT) algorithm that improves the performance with only a slight increase in complexity. We also suggest an energy weighted decoder (EWD) that shows additional improvement without further increase in the computational complexity.
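    The GLRT idea can be illustrated on a scalar toy channel (the paper treats space-time codes over diversity channels; this sketch keeps only the core idea). For each candidate codeword the unknown channel is replaced by its per-candidate ML estimate, which collapses the metric to a normalized correlation, so no training sequence is needed. The codebook and channel gain below are illustrative:

```python
# Hedged sketch of GLRT decoding with an unknown complex channel gain h.
# For candidate codeword x, the ML channel estimate is
# h_hat = <x, y> / ||x||^2; substituting it back reduces the GLRT metric
# to |<x, y>|^2 / ||x||^2, so the decoder needs no channel estimate.
def glrt_metric(x, y):
    corr = sum(xi.conjugate() * yi for xi, yi in zip(x, y))
    energy = sum(abs(xi) ** 2 for xi in x)
    return abs(corr) ** 2 / energy

def glrt_decode(codebook, y):
    return max(codebook, key=lambda x: glrt_metric(x, y))
```

    Note that the codebook must not contain both x and -x: with h unknown, the GLRT cannot distinguish a codeword from its scalar multiples.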

  8. On Pseudocodewords and Decision Regions of Linear Programming Decoding of HDPC Codes

    CERN Document Server

    Lifshitz, Asi

    2011-01-01

    In this paper we explore the decision regions of Linear Programming (LP) decoding. We compare the decision regions of an LP decoder, a Belief Propagation (BP) decoder and the optimal Maximum Likelihood (ML) decoder. We study the effect of minimal-weight pseudocodewords on LP decoding. We present global optimization as a method for finding the minimal pseudoweight of a given code as well as the number of minimal-weight generators. We present a complete pseudoweight distribution for the [24; 12; 8] extended Golay code, and provide justifications of why the pseudoweight distribution alone cannot be used for obtaining a tight upper bound on the error probability.
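    The pseudoweight quantity studied here has a closed form for the AWGN channel. A minimal Python sketch (the standard definition, with illustrative input vectors):

```python
# Hedged sketch: AWGN-channel pseudoweight of a nonnegative pseudocodeword,
# w_p(x) = (sum x_i)^2 / (sum x_i^2). For a 0/1 codeword this equals the
# Hamming weight, which is why minimal-weight pseudocodewords play the role
# for LP decoding that minimum-weight codewords play for ML decoding.
def awgn_pseudoweight(x):
    s = sum(x)
    return s * s / sum(v * v for v in x)
```

    A weight-8 binary codeword (such as a minimum-weight word of the [24; 12; 8] extended Golay code) has pseudoweight 8, while fractional LP vertices can have much smaller pseudoweight, degrading LP decoding.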

  9. New concatenated soft decoding of Reed-Solomon codes with lower complexities

    Institute of Scientific and Technical Information of China (English)

    BIAN Yin-bing; FENG Guang-zeng

    2009-01-01

    To improve error-correcting performance, an iterative concatenated soft decoding algorithm for Reed-Solomon (RS) codes is presented in this article. This algorithm brings advantages in both complexity and performance over presently popular soft decoding algorithms. The proposed algorithm consists of two powerful soft decoding techniques, adaptive belief propagation (ABP) and the box and match algorithm (BMA), which are serially concatenated by the accumulated log-likelihood ratio (ALLR). Simulation results show that, compared with the ABP and ABP-BMA algorithms, the proposed algorithm can bring more decoding gains and a better tradeoff between decoding performance and complexity.
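    The accumulation step that glues the two stages together can be sketched in a few lines of Python. This is a hedged illustration of the general idea, not the article's exact rule: extrinsic LLRs produced by each constituent stage are summed onto the channel LLRs before the next stage; the damping factor and sign convention are illustrative.

```python
# Hedged sketch of accumulated log-likelihood ratio (ALLR) combining between
# serially concatenated soft decoding stages (e.g. ABP then BMA). Names,
# the damping factor and the sign convention are illustrative.
def accumulate_llr(channel_llr, extrinsic_llrs, damping=1.0):
    total = list(channel_llr)
    for ext in extrinsic_llrs:
        total = [t + damping * e for t, e in zip(total, ext)]
    return total

def hard_decision(llr):
    # common convention: positive LLR -> bit 0, negative -> bit 1
    return [0 if v >= 0 else 1 for v in llr]
```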

  10. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and error falling, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed – 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes
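    The bias/complexity trade-off can be demonstrated with a toy best-first sequential decoder over BPSK sequences. This is a hedged sketch, not the thesis's lattice decoder: the Fano-style branch metric, the received vector, and the bias values are illustrative. A larger bias rewards depth, so the search reaches a full-length path after visiting fewer nodes.

```python
import heapq
import itertools

# Hedged sketch: best-first (stack) sequential decoding of a BPSK sequence
# with a Fano-style branch metric -(y_i - x_i)^2 + bias. Larger bias values
# push the search down the tree quickly (fewer node visits, weaker
# guarantees); smaller values make it explore more of the tree.
def stack_decode(y, bias):
    counter = itertools.count()           # deterministic tie-breaking
    heap = [(0.0, next(counter), [])]     # (negated metric, tie, partial path)
    pops = 0
    while heap:
        neg_m, _, path = heapq.heappop(heap)
        pops += 1
        if len(path) == len(y):
            return path, pops             # first full-length path popped
        i = len(path)
        for x in (+1, -1):
            m = -neg_m - (y[i] - x) ** 2 + bias
            heapq.heappush(heap, (-m, next(counter), path + [x]))
```

    On the same received vector, bias 0.0 and bias 2.0 can return the same decoded sequence while the larger bias pops fewer nodes, mirroring the performance/complexity trend the abstract reports.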

  11. Sums of Spike Waveform Features for Motor Decoding

    Directory of Open Access Journals (Sweden)

    Jie Li

    2017-07-01

    Full Text Available Traditionally, the key step before decoding motor intentions from cortical recordings is spike sorting, the process of identifying which neuron was responsible for an action potential. Recently, researchers have started investigating approaches to decoding which omit the spike sorting step, by directly using information about action potentials' waveform shapes in the decoder, though this approach is not yet widespread. Particularly, one recent approach involves computing the moments of waveform features and using these moment values as inputs to decoders. This computationally inexpensive approach was shown to be comparable in accuracy to traditional spike sorting. In this study, we use offline data recorded from two Rhesus monkeys to further validate this approach. We also modify this approach by using sums of exponentiated features of spikes, rather than moments. Our results show that using waveform feature sums facilitates significantly higher hand movement reconstruction accuracy than using waveform feature moments, though the magnitudes of differences are small. We find that using the sums of one simple feature, the spike amplitude, allows better offline decoding accuracy than traditional spike sorting by experts (correlation of 0.767, 0.785 vs. 0.744, 0.738, respectively, for two monkeys; average 16% reduction in mean-squared-error), as well as unsorted threshold crossings (0.746, 0.776; average 9% reduction in mean-squared-error). Our results suggest that the sums-of-features framework has potential as an alternative to both spike sorting and using unsorted threshold crossings, if developed further. Also, we present data comparing sorted vs. unsorted spike counts in terms of offline decoding accuracy. Traditional sorted spike counts do not include waveforms that do not match any template (“hash”), but threshold crossing counts do include this hash. On our data and in previous work, hash contributes to decoding accuracy. Thus, using the
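    The sums-of-features input described here is simple to compute. A hedged Python sketch (the exponents and spike amplitudes are illustrative; the study's feature set and binning are richer):

```python
# Hedged sketch of the sums-of-features decoder input: each time bin on each
# electrode is summarized by sums of exponentiated spike amplitudes over all
# threshold crossings, with no spike sorting. Exponents are illustrative.
def feature_sums(amplitudes, exponents=(1, 2, 3)):
    """One channel/bin: return [sum(a**k over all spikes) for each k]."""
    return [sum(a ** k for a in amplitudes) for k in exponents]
```

    These per-bin sums then feed a standard movement decoder in place of sorted spike counts.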

  12. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    Science.gov (United States)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method in context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up the CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a Total Speed-up Factor of 205% to 144% over various resolutions and quantization steps.

  13. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of dB loss in SNR at a probability of an incorrect decoding for a block of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  14. A Generalization Belief Propagation Decoding Algorithm for Polar Codes Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    Full Text Available We propose a generalization belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through the analysis of the existing BP decoding algorithm, we first introduce a probability modifying factor to each node of the BP decoder, so as to enhance the error-correcting capacity of the decoding. Then, we generalize the BP decoding algorithm based on these modifying factors and derive the probability update equations for the proposed decoding. Based on the new probability update equations, we show the intrinsic relationship of the existing decoding algorithms. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed decoding algorithm. Furthermore, a method based on the modified PSO algorithm is also introduced to solve that optimization problem. Numerical results show that the proposed generalization BP decoding algorithm achieves better performance than that of the existing BP decoding, which suggests the effectiveness of the proposed decoding algorithm.
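    The PSO search step used to tune such factors can be sketched generically. This is a hedged illustration: the objective below is a stand-in (in the paper the objective is decoding performance as a function of the modifying factors), and all PSO constants are common textbook choices, not values from the paper.

```python
import random

# Hedged sketch of particle swarm optimization (PSO) minimizing a 1-D
# objective, standing in for the search over probability modifying factors.
def pso_minimize(f, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = list(x)
    pbest_val = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i], val
                if val < gbest_val:
                    gbest, gbest_val = x[i], val
    return gbest, gbest_val
```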

  15. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds
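    The offline classification step can be illustrated with a minimal nearest-centroid classifier labeling gait phases from binned firing rates. This is a hedged sketch: the firing-rate vectors are synthetic and the study's decoders and features are richer.

```python
# Hedged sketch: nearest-centroid classification of gait phase (stance vs.
# swing) from binned firing-rate vectors; all data below are synthetic.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))
```

    Training reduces to computing one centroid per phase from labeled bins; a new bin is assigned to the nearest centroid.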

  16. NEW ITERATIVE SUPER-TRELLIS DECODING WITH SOURCE A PRIORI INFORMATION FOR VLCS WITH TURBO CODES

    Institute of Scientific and Technical Information of China (English)

    Liu Jianjun; Tu Guofang; Wu Weiren

    2007-01-01

    A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes utilizing a new super-trellis decoding algorithm is presented in this letter. The basic idea of our decoding algorithm is that source a priori information, in the form of bit transition probabilities corresponding to the VLC tree, can be derived directly from sub-state transitions in the new composite-state represented super-trellis. A Maximum Likelihood (ML) decoding algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme can obtain an obvious coding gain, especially for Reversible Variable Length Codes (RVLCs), when compared with classical separated turbo decoding and previous joint decoding that does not consider source statistical characteristics.
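    The bit transition probabilities that serve as source a priori information can be read directly off the VLC code tree. A hedged Python sketch (the codebook and symbol probabilities are illustrative, not from the letter):

```python
# Hedged sketch: deriving bit transition probabilities from a VLC code tree.
# codebook maps symbol -> (bit string, probability); the bit strings must
# form a prefix-free set.
def bit_transition_probs(codebook):
    """Return {prefix: P(next bit = '1' | tree node for prefix reached)}."""
    mass = {}   # probability mass flowing through each internal tree node
    ones = {}   # portion of that mass whose next bit is '1'
    for bits, p in codebook.values():
        for i in range(len(bits)):
            prefix = bits[:i]
            mass[prefix] = mass.get(prefix, 0.0) + p
            if bits[i] == "1":
                ones[prefix] = ones.get(prefix, 0.0) + p
    return {pre: ones.get(pre, 0.0) / m for pre, m in mass.items()}
```

    Each tree node becomes a sub-state of the super-trellis, and these conditional probabilities weight its outgoing transitions.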

  17. Trellis-Based Check Node Processing for Low-Complexity Nonbinary LP Decoding

    CERN Document Server

    Punekar, Mayur

    2011-01-01

    Linear Programming (LP) decoding is emerging as an attractive alternative to decode Low-Density Parity-Check (LDPC) codes. However, the earliest LP decoders proposed for binary and nonbinary LDPC codes are not suitable for use at moderate and large code lengths. To overcome this problem, Vontobel et al. developed an iterative Low-Complexity LP (LCLP) decoding algorithm for binary LDPC codes. The variable and check node calculations of binary LCLP decoding algorithm are related to those of binary Belief Propagation (BP). The present authors generalized this work to derive an iterative LCLP decoding algorithm for nonbinary linear codes. Contrary to binary LCLP, the variable and check node calculations of this algorithm are in general different from that of nonbinary BP. The overall complexity of nonbinary LCLP decoding is linear in block length; however the complexity of its check node calculations is exponential in the check node degree. In this paper, we propose a modified BCJR algorithm for efficient check n...

  18. Iterative decoding of Generalized Parallel Concatenated Block codes using cyclic permutations

    Directory of Open Access Journals (Sweden)

    Hamid Allouch

    2012-09-01

    Full Text Available Iterative decoding techniques have gained popularity due to their performance and their application in most communications systems. In this paper, we present a new application of our iterative decoder to GPCB (Generalized Parallel Concatenated Block) codes, which use cyclic permutations. We introduce a new variant of the component decoder. After extensive simulation, the obtained results are very promising compared with several existing methods. We evaluate the effects of various parameters: component codes, interleaver size, block size, and the number of iterations. Three interesting results are obtained. The first is that the performance in terms of BER (Bit Error Rate) of the new constituent decoder is relatively similar to that of the original one. Secondly, our turbo decoding outperforms another turbo decoder for some linear block codes. Thirdly, the proposed iterative decoding of GPCB-BCH (75, 51) is about 2.1 dB from its Shannon limit.

  19. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

    Full Text Available The paper proposes a low-complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance as compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements regarding decoding throughput and BER performance compared with state-of-the-art approaches.
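    The two endpoints being approximated can be written out directly. A hedged sketch contrasting the exact BP check-node ("boxplus") update with the min-sum simplification; the paper's linear approximation sits between these two, and its coefficients are not reproduced here.

```python
import math

# Hedged sketch: exact BP check-node update vs. the min-sum simplification.
def check_node_exact(llrs):
    # boxplus of all inputs: 2 * atanh( prod_i tanh(L_i / 2) )
    prod = 1.0
    for L in llrs:
        prod *= math.tanh(L / 2.0)
    return 2.0 * math.atanh(prod)

def check_node_min_sum(llrs):
    # sign product times the minimum magnitude
    sign = 1.0
    for L in llrs:
        if L < 0:
            sign = -sign
    return sign * min(abs(L) for L in llrs)
```

    Min-sum keeps the correct sign but overestimates the magnitude, which is the gap a linear correction of the update function aims to close cheaply in hardware.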

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  1. NetDecoder: a network biology platform that decodes context-specific biological networks and gene activities.

    Science.gov (United States)

    da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D; Correia, Cristina; Li, Hu

    2016-06-02

    The sequential chain of interactions altering the binary state of a biomolecule represents the 'information flow' within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein-protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes: network routers, which influence many genes; key targets, which are influenced by many genes; and high-impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets.

  2. Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L = 2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered as state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows adjusting L to values in the range 1 < L <= 2, and decoding can be performed z >= 1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
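    The role of the tradeoff factor L can be sketched with a single predicate. This is a hedged illustration: one common way to state the condition is that a trial with e errors and tau erasures succeeds when L*e + tau < d, with L = 2 recovering the classical BMD condition 2e + tau < d; the paper's exact conditions may differ in boundary cases.

```python
# Hedged sketch: success condition for one error/erasure decoding trial of a
# code with minimum distance d, under tradeoff factor L (L = 2 for BMD).
def trial_succeeds(errors, erasures, d, L=2.0):
    return L * errors + erasures < d
```

    A multi-trial scheme runs this test over several threshold-selected erasure patterns and keeps the first trial that succeeds.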

  3. Decode-and-Forward Based Differential Modulation for Cooperative Communication System with Unitary and Non-Unitary Constellations

    CERN Document Server

    Bhatnagar, Manav R

    2012-01-01

    In this paper, we derive a maximum likelihood (ML) decoder of the differential data in a decode-and-forward (DF) based cooperative communication system utilizing uncoded transmissions. This decoder is applicable to complex-valued unitary and non-unitary constellations suitable for differential modulation. The ML decoder helps in improving the diversity of the DF based differential cooperative system using an erroneous relaying node. We also derive a piecewise linear (PL) decoder of the differential data transmitted in the DF based cooperative system. The proposed PL decoder significantly reduces the decoding complexity as compared to the proposed ML decoder without any significant degradation in the receiver performance. Existing ML and PL decoders of the differentially modulated uncoded data in the DF based cooperative communication system are only applicable to binary modulated signals like binary phase shift keying (BPSK) and binary frequency shift keying (BFSK), whereas, the proposed decoders are applicab...
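    The piecewise linear idea can be shown in miniature. This is a hedged sketch of the general PL combining principle for DF relaying, not the paper's derived decoder: the relay's LLR contribution is clipped at a threshold reflecting the relay's own error probability, then added to the direct-link LLR; the threshold and inputs below are illustrative.

```python
# Hedged sketch of piecewise linear (PL) LLR combining for decode-and-forward
# relaying: clip the relay's LLR at +/- T, then add the direct-link LLR.
def pl_combine(direct_llr, relay_llr, T):
    clipped = max(-T, min(T, relay_llr))   # piecewise linear in relay_llr
    return direct_llr + clipped
```

    Clipping caps how much an erroneously relaying node can bias the decision, which is what preserves diversity at far lower cost than full ML combining.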

  4. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    Science.gov (United States)

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.

  5. Coupled Receiver/Decoders for Low-Rate Turbo Codes

    Science.gov (United States)

    Hamkins, Jon; Divsalar, Dariush

    2005-01-01

    Coupled receiver/decoders have been proposed for receiving weak single-channel phase-modulated radio signals bearing low-rate-turbo-coded binary data. Originally intended for use in receiving telemetry signals from distant spacecraft, the proposed receiver/decoders may also provide enhanced reception in mobile radiotelephone systems. A radio signal of the type to which the proposal applies comprises a residual carrier signal and a phase-modulated data signal. The residual carrier signal is needed as a phase reference for demodulation as a prerequisite to decoding. Low-rate turbo codes afford high coding gains and thereby enable the extraction of data from arriving radio signals that might otherwise be too weak. In the case of a conventional receiver, if the signal-to-noise ratio (specifically, the symbol energy to one-sided noise power spectral density) of the arriving signal is below approximately 0 dB, then there may not be enough energy per symbol to enable the receiver to recover the carrier phase properly. One could solve the problem at the transmitter by diverting some power from the data signal to the residual carrier. A better solution, a coupled receiver/decoder according to the proposal, could reduce the needed amount of residual carrier power. In all that follows, it is to be understood that all processing would be digital and the incoming signals to be processed would be, more precisely, outputs of analog-to-digital converters that preprocess the residual carrier and data signals at a rate of multiple samples per symbol. The upper part of the figure depicts a conventional receiving system, in which the receiver and decoder are uncoupled, and which is also called a non-data-aided system because output data from the decoder are not used in the receiver to aid in recovering the carrier phase. The receiver tracks the carrier phase from the residual carrier signal and uses the carrier phase to wipe phase noise off the data signal. The receiver typically includes a phase-locked loop

  6. Performance-complexity tradeoff in sequential decoding for the unconstrained AWGN channel

    KAUST Repository

    Abediseid, Walid

    2013-06-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity. © 2013 IEEE.

  7. Memory bandwidth efficient two-layer reduced-resolution decoding of high-definition video

    Science.gov (United States)

    Comer, Mary L.

    2000-12-01

    This paper addresses the problem of efficiently decoding high-definition (HD) video for display at a reduced resolution. The decoder presented in this paper is intended for applications that are constrained not only in memory size, but also in peak memory bandwidth. This is the case, for example, during decoding of a high-definition television (HDTV) channel for picture-in-picture (PIP) display, if the reduced-resolution PIP-channel decoder is sharing memory with the full-resolution main-channel decoder. The most significant source of video quality degradation in a reduced-resolution decoder is prediction drift, which is caused by the mismatch between the full-resolution reference frames used by the encoder and the subsampled reference frames used by the decoder. To mitigate the visually annoying effects of prediction drift, the decoder described in this paper operates at two different resolutions: a lower resolution for B pictures, which do not contribute to prediction drift, and a higher resolution for I and P pictures. This means that the motion-compensation unit (MCU) essentially operates at the higher resolution, but the peak memory bandwidth is the same as that required to decode at the lower resolution. Storage of additional data, representing the higher resolution for I and P pictures, requires a relatively small amount of additional memory as compared to decoding at the lower resolution. Experimental results will demonstrate the improvement in video quality achieved by the addition of the higher-resolution data in forming predictions for P pictures.

  8. Word decoding development in incremental phonics instruction in a transparent orthography.

    Science.gov (United States)

    Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo

    2017-01-01

    The present longitudinal study aimed to investigate the development of word decoding skills during incremental phonics instruction in Dutch as a transparent orthography. A representative sample of 973 Dutch children in the first grade (Mage = 6;1, SD = 0;5) was exposed to incremental subsets of Dutch grapheme-phoneme correspondences during 6 consecutive blocks of 3 weeks of phonics instruction. Children's accuracy and efficiency of curriculum-embedded word decoding were assessed after each incremental block, followed by a standardized word decoding measurement. Precursor measures of rapid naming, short-term memory, vocabulary, phonological awareness, and letter knowledge were assessed by the end of kindergarten and subsequently related to word decoding efficiency in the first grade. The results showed that from the very beginning, children attained ceiling levels of decoding accuracy, whereas their efficiency scores increased despite the incremental character of the consecutive decoding assessments embedded in the curriculum. Structural equation modelling demonstrated high stability of the individual differences assessed by word decoding efficiency during phonics instruction across the first 5 months of the first grade. Curriculum-embedded word decoding was highly related to standardized word decoding after phonics instruction was completed. Finally, early literacy and lexical retrieval, and to a lesser extent verbal and visual short-term memory, predicted the first fundamental processes of mastering word decoding skills.

  9. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

    The boundaries between block and convolutional codes have become diffuse after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for convolutional codes have been applied to certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep-space communication to error control in storage systems. But the primary difficulty with applying the Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal, realizable soft-decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near-optimum decoding will be used as motivation for the investigation of different soft-decision decoding methods for linear block codes, which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft-decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard-decision decoding are demonstrated for these codes.
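    The stack search described above can be illustrated in miniature. The following is a minimal sketch, not the authors' implementation: a best-first search over the information-bit tree of a systematic (8, 4) extended Hamming code, where undecided positions receive an optimistic metric bound so that the first complete path popped is the maximum-likelihood codeword. The generator matrix and BPSK-style soft values are assumptions for illustration.

```python
import heapq
import numpy as np

# Systematic generator matrix of the (8, 4) extended Hamming code
# (assumed form, for illustration).
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])

def stack_decode(r):
    """Best-first (stack) search over the information-bit tree.

    r: soft received values, positive leaning towards bit 0.
    Metric: exact correlation on the decided systematic positions
    plus an optimistic bound |r_i| on the undecided ones, so the
    first complete path popped is the maximum-likelihood one."""
    k, n = G.shape
    heap = [(-np.abs(r).sum(), ())]   # (negated metric bound, info-bit prefix)
    while heap:
        _, prefix = heapq.heappop(heap)
        if len(prefix) == k:
            return np.array(prefix)   # best complete path reached the top
        for b in (0, 1):
            p = prefix + (b,)
            u = np.array(p)
            if len(p) == k:
                cw = u @ G % 2        # full codeword: exact path metric
                m = ((1 - 2 * cw) * r).sum()
            else:                     # partial path: exact part + optimistic tail
                m = ((1 - 2 * u) * r[:len(p)]).sum() + np.abs(r[len(p):]).sum()
            heapq.heappush(heap, (-m, p))
```

    Decoding the soft values of a transmitted codeword in which one low-reliability position has a flipped sign still recovers the information bits, which is the soft-decision gain the abstract refers to.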

  10. Decoding of Convolutional Codes over the Erasure Channel

    CERN Document Server

    Tomás, Virtudes; Smarandache, Roxana

    2010-01-01

    In this paper we study the decoding capabilities of convolutional codes over the erasure channel. Of special interest are maximum distance profile (MDP) convolutional codes. These are codes which have a maximum possible column distance increase. We show how this strong minimum distance condition of MDP convolutional codes helps us to solve error situations that maximum distance separable (MDS) block codes fail to solve. Towards this goal, we define two subclasses of MDP codes: reverse-MDP convolutional codes and complete-MDP convolutional codes. Reverse-MDP codes have the capability to recover a maximum number of erasures using an algorithm which runs backward in time. Complete-MDP convolutional codes are both MDP and reverse-MDP codes. They are capable of recovering the state of the decoder under the mildest condition. We show that complete-MDP convolutional codes perform, in a certain sense, better than MDS block codes of the same rate over the erasure channel.
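    For intuition about the erasure-channel setting above: for any linear block code, erasure recovery amounts to solving the parity-check equations for the erased coordinates. The sketch below is a generic GF(2) illustration using a (7, 4) Hamming code as a stand-in; it does not implement the paper's MDP convolutional decoding.

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code (stand-in code).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def recover_erasures(y, erased, H):
    """Fill erased coordinates of a codeword by solving H x = 0
    over GF(2); this succeeds when the erased columns of H are
    linearly independent."""
    known = [i for i in range(H.shape[1]) if i not in erased]
    s = H[:, known] @ y[known] % 2           # syndrome of the known part
    M = np.concatenate([H[:, erased], s.reshape(-1, 1)], axis=1)
    r = 0
    for c in range(len(erased)):             # Gauss-Jordan over GF(2)
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            raise ValueError("erasure pattern not recoverable")
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    x = y.copy()
    x[erased] = M[:len(erased), -1]          # unique GF(2) solution
    return x
```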

  11. Reaction Decoder Tool (RDT): extracting features from chemical reactions

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W.; Holliday, Gemma L.; Steinbeck, Christoph; Thornton, Janet M.

    2016-01-01

    Summary: Extracting chemical features like Atom–Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool, which performs this task with high accuracy. It supports standard chemical input/output exchange formats i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. Availability and implementation: This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder Contact: asad@ebi.ac.uk or s9asad@gmail.com PMID:27153692

  12. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    Science.gov (United States)

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool which performs this task with high accuracy. It supports standard chemical input/output exchange formats, i.e. RXN/SMILES, computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  13. Design And Analysis Of Low Power Hierarchical Decoder

    Directory of Open Access Journals (Sweden)

    Abhinav Singh

    2012-11-01

    Due to the high degree of miniaturization possible today in semiconductor technology, the size and complexity of designs that may be implemented in hardware have increased dramatically. Process scaling has been used in the miniaturization process to reduce the area needed for logic functions in an effort to lower product costs. Precharged Complementary Metal Oxide Semiconductor (CMOS) domino logic techniques may be applied to functional blocks to reduce power. Domino logic is an attractive design style for high-performance designs, since its low switching threshold and reduced transistor count lead to fast and area-efficient circuit implementations. In this paper all the necessary components required to form a 5-to-32-bit decoder using domino logic are designed, and analyses are performed at the 180 nm and 350 nm technology nodes. The decoder implemented with domino logic is compared to a static decoder.

  14. Codes on the Klein quartic, ideals, and decoding

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    1987-01-01

    A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF(2^{3}). In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9, and a code of length 21 and dimension 16 with minimal distance 3. The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic; this curve has the maximal number of rational points over GF(2^{3}) allowed by Serre's bound. The codes have descriptions as left ideals in the group algebra GF(2^{3})[G]. This description allows for easy decoding. For instance, in the case of the single-error-correcting code of length 21 and dimension 16 with minimal distance 3, decoding is obtained by multiplication with an idempotent in the group algebra.

  15. Soft decoding a self-dual (48, 24; 12) code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code that promises a 7.7-dB theoretical coding gain under maximum likelihood.

  16. Computational Complexity of Decoding Orthogonal Space-Time Block Codes

    CERN Document Server

    Ayanoglu, Ender; Karipidis, Eleftherios

    2009-01-01

    The computational complexity of optimum decoding is quantified for an orthogonal space-time block code G satisfying the orthogonality property that the Hermitian transpose of G multiplied by G equals a constant c times the sum of the squared symbols of the code times an identity matrix, where c is a positive integer. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1], [2], and unifies them with results from the literature. In addition, a number of results from the literature are extended to the case c > 1.

  17. Interactive decoding for the CCSDS recommendation for image data compression

    Science.gov (United States)

    García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier

    2007-10-01

    In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers in the encoded stream. Second we compare the compression performance of our technique to JPEG2000.

  18. HDL Implementation of Low Density Parity Check (LDPC Decoder

    Directory of Open Access Journals (Sweden)

    Pawandip Kaur

    2012-03-01

    Low-Density Parity-Check (LDPC) codes are among the most promising error-correcting codes approaching Shannon capacity and have been adopted in many applications. These codes offer huge advantages in terms of coding gain, throughput and power dissipation. Error correction algorithms are often implemented in hardware for fast processing to meet the real-time needs of communication systems. However, hardware implementation of LDPC decoders using the traditional Hardware Description Language (HDL) based approach is a complex and time-consuming task. In this paper an HDL implementation of a Low-Density Parity-Check decoder architecture is presented with different rates (1/2, 2/3, 3/4, 4/7, 8/9, 9/10), variable data lengths (8, 16, 32, 64, 128, 256 bits) and a configurable precision factor.

  19. Video watermarking with empirical PCA-based decoding.

    Science.gov (United States)

    Khalilian, Hanieh; Bajic, Ivan V

    2013-12-01

    A new method for video watermarking is presented in this paper. In the proposed method, data are embedded in the LL subband of wavelet coefficients, and decoding is performed based on the comparison among the elements of the first principal component resulting from empirical principal component analysis (PCA). The locations for data embedding are selected such that they offer the most robust PCA-based decoding. Data are inserted in the LL subband in an adaptive manner based on the energy of high-frequency subbands and visual saliency. Extensive testing was performed under various types of attacks, such as spatial attacks (uniform and Gaussian noise and median filtering), compression attacks (MPEG-2, H.263, and H.264), and temporal attacks (frame repetition, frame averaging, frame swapping, and frame rate conversion). The results show that the proposed method offers improved performance compared with several methods from the literature, especially under additive noise and compression attacks.

  20. Two-Bit Bit Flipping Decoding of LDPC Codes

    CERN Document Server

    Nguyen, Dung Viet; Marcellin, Michael W

    2011-01-01

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows a potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
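    The regular one-bit flipping baseline that these algorithms extend can be sketched as follows. The (7, 4) Hamming parity-check matrix is a small stand-in for an LDPC code, and the serial flip rule (flip the bit involved in the most unsatisfied checks) is a classical variant, not the proposed two-bit scheme.

```python
import numpy as np

# (7, 4) Hamming parity-check matrix, a small stand-in for an LDPC code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=50):
    """Serial bit flipping over the BSC: while some checks fail,
    flip the single bit involved in the most unsatisfied checks.
    Can fail or loop on unfavourable error patterns, which is the
    weakness that strength-augmented variants aim to reduce."""
    x = y.copy()
    for _ in range(max_iter):
        s = H @ x % 2
        if not s.any():
            return x                 # all parity checks satisfied
        unsat = s @ H                # unsatisfied-check count per bit
        x[np.argmax(unsat)] ^= 1     # flip the most suspicious bit
    return x
```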

  1. Design of LOG-MAP / MAX-LOG-MAP Decoder

    Directory of Open Access Journals (Sweden)

    Mihai TIMIS, PhD Candidate, Dipl.Eng.

    2007-01-01

    The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, which is followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that data bit. Upon reception of a corrupted code-bit sequence, the process of decision making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time.

  2. Coset decomposition method for storing and decoding fingerprint data

    OpenAIRE

    Mohamed Sayed

    2014-01-01

    Biometrics such as fingerprints, irises, faces, voice, gait and hands are often used for access control, authentication and encryption instead of PIN and passwords. In this paper a syndrome decoding technique is proposed to provide a secure means of storing and matching various biometrics data. We apply an algebraic coding technique called coset decomposition to the model of fingerprint biometrics. The algorithm which reveals the matching between registered and probe fingerprints is modeled a...
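    Syndrome decoding by coset decomposition, the algebraic technique applied above, can be sketched generically: precompute a table mapping each syndrome to a minimum-weight coset leader, then subtract the leader from the received word. The (7, 4) Hamming code below is an illustrative stand-in, not the paper's fingerprint model.

```python
import numpy as np
from itertools import product

# (7, 4) Hamming parity-check matrix (illustrative stand-in).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def build_leader_table(H):
    """Map every syndrome to a minimum-weight coset leader by
    scanning error patterns in order of increasing weight."""
    n = H.shape[1]
    table = {}
    for e in sorted(product([0, 1], repeat=n), key=sum):
        s = tuple(H @ np.array(e) % 2)
        table.setdefault(s, np.array(e))   # first hit is the lightest leader
    return table

def syndrome_decode(y, H, table):
    """Correct y by adding (over GF(2)) the coset leader of its syndrome."""
    s = tuple(H @ y % 2)
    return (y + table[s]) % 2
```

    The table has one entry per coset (2^(n-k) entries), which is what makes the scheme practical for short codes like those used in template protection.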

  3. Context adaptive binary arithmetic decoding on transport triggered architectures

    Science.gov (United States)

    Rouvinen, Joona; Jääskeläinen, Pekka; Rintaluoma, Tero; Silvén, Olli; Takala, Jarmo

    2008-02-01

    Video coding standards, such as MPEG-4, H.264, and VC1, define hybrid transform-based block motion compensated techniques that employ almost the same coding tools. This observation has been a foundation for defining the MPEG Reconfigurable Multimedia Coding framework, which aims to facilitate multi-format codec design. The idea is to send a description of the codec with the bit stream, and to reconfigure the coding tools accordingly on-the-fly. This kind of approach favors software solutions, and is a substantial challenge for the implementers of mobile multimedia devices that aim at high energy efficiency. In particular, as high-definition formats are about to be required from mobile multimedia devices, variable-length decoders are becoming a serious bottleneck. Even at current moderate mobile video bitrates, software-based variable-length decoders swallow a major portion of the resources of a mobile processor. In this paper we present a Transport Triggered Architecture (TTA) based programmable implementation of Context Adaptive Binary Arithmetic de-Coding (CABAC) as used, e.g., in the main profile of H.264 and in JPEG2000. The solution can be used even for other variable-length codes.

  4. Efficient Network for Non-Binary QC-LDPC Decoder

    CERN Document Server

    Zhang, Chuan

    2011-01-01

    This paper presents approaches to developing an efficient network for non-binary quasi-cyclic LDPC (QC-LDPC) decoders. By exploiting the intrinsic shifting and symmetry properties of the check matrices, significant reductions in memory size and routing complexity can be achieved. Two different efficient network architectures, for Class-I and Class-II non-binary QC-LDPC decoders respectively, have been proposed. Comparison results have shown that for the 64-ary (1260, 630) rate-0.5 Class-I code, the proposed scheme can save more than 70.6% of the shuffle-network hardware required by state-of-the-art designs. The proposed decoder example for the 32-ary (992, 496) rate-0.5 Class-II code can achieve a 93.8% shuffle-network reduction compared with conventional ones. Meanwhile, based on the similarity of Class-I and Class-II codes, a similar shuffle network is further developed to incorporate both classes of codes at a very low cost.

  5. LDPC Decoding for Signal Dependent Visible Light Communication Channels

    Institute of Scientific and Technical Information of China (English)

    YUAN Ming; SHA Xiaoshi; LIANG Xiao; JIANG Ming; WANG Jiaheng; ZHAO Chunming

    2016-01-01

    Avalanche photodiodes (APDs) are widely employed in visible light communication (VLC) systems. The general signal-dependent Gaussian channel is investigated. Experimental results reveal that symbols on different constellation points under official illuminance inevitably suffer from different levels of noise due to the multiplication process of APDs. In such a case, the conventional log-likelihood ratio (LLR) calculation for signal-independent channels may cause performance loss. The optimal LLR calculation for the decoder is then derived because of the existence of non-ignorable APD shot noise. To find the decoding thresholds of the optimal and suboptimal detection schemes, the extrinsic information transfer (EXIT) chart is further analyzed. Finally, a modified min-sum algorithm is suggested with reduced complexity and acceptable performance loss. Numerical simulations show that, with a regular (3, 6) low-density parity-check (LDPC) code of block length 20,000, a 0.7 dB gain is achieved with the proposed scheme over the LDPC decoder designed for signal-independent noise. It is also found that the coding performance improves for a larger modulation depth.
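    The optimal LLR for a signal-dependent Gaussian channel differs from the signal-independent one only in that each hypothesis carries its own noise variance. A minimal sketch follows; the symbol means and variances are assumed values, not the paper's APD model.

```python
import math

def llr_signal_dependent(y, m0, m1, v0, v1):
    """Exact LLR log p(y|bit=1)/p(y|bit=0) for a channel whose
    Gaussian noise variance depends on the transmitted level, as
    with APD shot noise. m0/m1 are the received means for bits
    0/1 and v0/v1 their variances (assumed example values)."""
    def log_gauss(y, m, v):
        # log of the Gaussian density N(m, v) evaluated at y
        return -0.5 * math.log(2 * math.pi * v) - (y - m) ** 2 / (2 * v)
    return log_gauss(y, m1, v1) - log_gauss(y, m0, v0)
```

    With equal variances this reduces to the familiar signal-independent LLR; with unequal variances the decision threshold shifts, which is the mismatch the abstract attributes to conventional LLR calculation.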

  6. The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?

    Science.gov (United States)

    Maloney, Ryan T

    2015-01-01

    Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether by orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies. Copyright © 2015 the American Physiological Society.

  7. Elimination of the Background Noise of the Decoded Image in Fresnel Zone Plate Scanning Holography

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method of digital high-pass filtering in the frequency domain is proposed to eliminate the background noise of the decoded image in Fresnel zone plate scanning holography. The high-pass filter is designed as a circular stop, which should be suited to suppressing the background noise significantly while retaining much of the low-frequency information of the object. The principle of high-pass filtering is that the Fourier transform of the decoded image is multiplied by the high-pass filter. Thus the frequency spectrum of the decoded image without the background noise is obtained. By inverse Fourier transform of this spectrum, the decoded image without the background noise is recovered. Both computer simulations and experimental results show that the contrast and the signal-to-noise ratio of the decoded image are significantly improved by digital filtering.
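    The described procedure (forward FFT, circular stop at the spectrum centre, inverse FFT) can be sketched directly; the stop radius is an assumed tuning parameter, not a value from the paper.

```python
import numpy as np

def highpass_filter(img, stop_radius):
    """Suppress the low-frequency background of a decoded image:
    2-D FFT, zero out a circular stop around the spectrum centre,
    then inverse FFT back to the image domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    stop = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= stop_radius ** 2
    F[stop] = 0                     # circular stop blocks low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

    A flat (pure background) input is driven to zero, while high-frequency detail passes through, mirroring the contrast improvement reported in the abstract.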

  8. Design and Implementation of Single Chip WCDMA High Speed Channel Decoder

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A memory- and clock-efficient design scheme to realize a WCDMA high-speed channel decoder on a single Xilinx XVC1000E FPGA chip is presented. Using a modified MAP algorithm, namely parallel Sliding Window logarithmic Maximum A Posteriori (PSW-log-MAP), the on-chip turbo decoder can decode an information bit with only an average of two clocks per iteration. On the other hand, a highly parallel pipelined Viterbi algorithm is adopted to realize the 256-state convolutional code decoding. The final decoder, driven by an 8x chip clock (30.72 MHz), can concurrently process a data rate up to 2.5 Mbps for turbo-coded sequences and a data rate over 400 kbps for convolutional codes. No external memory is needed. Test results show that the decoding performance loss is only 0.2-0.3 dB or less compared to floating-point simulation.

  9. Serial Min-max Decoding Algorithm Based on Variable Weighting for Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2013-09-01

    In this paper, we analyze the min-max decoding algorithm for nonbinary LDPC (low-density parity-check) codes and propose a serial min-max decoding algorithm. Combining this with weighted processing of the variable node messages, we finally propose a serial min-max decoding algorithm based on variable weighting for nonbinary LDPC codes. Simulations indicate that, at a bit error rate of 10^-3, the serial min-max decoding algorithm based on variable weighting offers additional coding gains of 0.2 dB, 0.8 dB and 1.4 dB over the serial min-max decoding algorithm, the traditional min-max decoding algorithm and the traditional min-sum algorithm, respectively, in an additive white Gaussian noise channel under binary phase shift keying modulation.

  10. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches

    OpenAIRE

    Cho, Kyunghyun; van Merrienboer, Bart; Bahdanau, Dzmitry; Bengio, Yoshua

    2014-01-01

    Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and...

  11. Coding/decoding two-dimensional images with orbital angular momentum of light.

    Science.gov (United States)

    Chu, Jiaqi; Li, Xuefeng; Smithwick, Quinn; Chu, Daping

    2016-04-01

    We investigate encoding and decoding of two-dimensional information using the orbital angular momentum (OAM) of light. Spiral phase plates and phase-only spatial light modulators are used in encoding and decoding of OAM states, respectively. We show that off-axis points and spatial variables encoded with a given OAM state can be recovered through decoding with the corresponding complementary OAM state.

  12. Complete Decoding and Reporting of Aviation Routine Weather Reports (METARs)

    Science.gov (United States)

    Lui, Man-Cheung Max

    2014-01-01

    Aviation Routine Weather Report (METAR) provides surface weather information at and around observation stations, including airport terminals. These weather observations are used by pilots for flight planning and by air traffic service providers for managing departure and arrival flights. The METARs are also an important source of weather data for Air Traffic Management (ATM) analysts and researchers at NASA and elsewhere. These researchers use METAR to correlate severe weather events with local or national air traffic actions that restrict air traffic, as one example. A METAR is made up of multiple groups of coded text, each with a specific standard coding format. These groups of coded text are located in two sections of a report: Body and Remarks. The coded text groups in a U.S. METAR are intended to follow the coding standards set by the National Oceanic and Atmospheric Administration (NOAA). However, manual data entry and edits made by a human report observer may result in coded text elements that do not follow the standards, especially in the Remarks section. And contrary to the standards, some significant weather observations are noted only in the Remarks section and not in the Body section of the reports. While human readers can infer the intended meaning of non-standard coding of weather conditions, doing so with a computer program is far more challenging. However, such programmatic pre-processing is necessary to enable efficient and faster database queries when researchers need to perform any significant historical weather analysis. Therefore, to support such analysis, a computer algorithm was developed to identify groups of coded text anywhere in a report and to perform subsequent decoding in software. The algorithm considers common deviations from the standards and data entry mistakes made by observers. The implemented software code was tested to decode 12 million reports and the decoding process was able to completely interpret 99.93% of the reports.
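    A toy version of such group-by-group decoding can be sketched with regular expressions. The groups handled here (observation time, wind, visibility, temperature/dew point) and the sample report are illustrative only; a production decoder would need the tolerance for observer deviations that the abstract describes.

```python
import re

# Illustrative METAR body (hypothetical sample report, not from the paper).
METAR = "KSFO 121756Z 28016KT 10SM FEW008 SCT200 17/11 A3004"

def decode_metar(text):
    """Decode a few standard coded-text groups from a METAR body."""
    out = {}
    m = re.search(r"\b(\d{2})(\d{4})Z\b", text)          # DDHHMMZ time group
    if m:
        out["day"], out["time_utc"] = int(m.group(1)), m.group(2)
    m = re.search(r"\b(\d{3})(\d{2,3})(?:G(\d{2,3}))?KT\b", text)  # wind group
    if m:
        out["wind_dir_deg"] = int(m.group(1))
        out["wind_speed_kt"] = int(m.group(2))
    m = re.search(r"\b(\d{1,2})SM\b", text)              # visibility group
    if m:
        out["visibility_sm"] = int(m.group(1))
    m = re.search(r"\b(M?\d{2})/(M?\d{2})\b", text)      # temp/dew point group
    if m:
        conv = lambda s: -int(s[1:]) if s.startswith("M") else int(s)
        out["temp_c"], out["dewpoint_c"] = conv(m.group(1)), conv(m.group(2))
    return out
```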

  13. VLSI design of turbo decoder for integrated communication system on a chip applications

    Science.gov (United States)

    Fang, Wai-Chi; Sethuram, Ashwin; Belevi, Kemal

    2003-01-01

    A high-throughput low-power turbo decoder core has been developed for integrated communication system applications such as satellite communications, wireless LAN, digital TV, cable modem, Digital Video Broadcast (DVB), and xDSL systems. The turbo decoder is based on convolutional constituent codes, which outperform all other Forward Error Correction techniques. This turbo decoder core is parameterizable and can be modified easily to fit any size for advanced communication system-on-chip products. The turbo decoder core provides Forward Error Correction of up to 15 Mbits/sec on a 0.13-micron CMOS FPGA prototyping chip at a power of 0.1 watts.

  14. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of the complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures, together with a proposed parameter set, lead to defining equations and a comparison between those architectures.

  15. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed-form approximation to the distribution of errors in each step of the decoding of finite-length codes.

  16. Decoding skills in children with language impairment: contributions of phonological processing and classroom experiences.

    Science.gov (United States)

    Tambyraja, Sherine R; Farquharson, Kelly; Logan, Jessica A R; Justice, Laura M

    2015-05-01

    Children with language impairment (LI) often demonstrate difficulties with word decoding. Research suggests that child-level (i.e., phonological processing) and environmental-level (i.e., classroom quality) factors both contribute to decoding skills in typically developing children. The present study examined the extent to which these same factors influence the decoding skills of children with LI, and the extent to which classroom quality moderates the relationship between phonological processing and decoding. Kindergarten and first-grade children with LI (n = 198) were assessed on measures of phonological processing and decoding twice throughout the academic year. Live classroom observations were conducted to assess classroom quality with respect to emotional support and instructional support. Hierarchical regression analyses revealed that of the 3 phonological processing variables included, only phonological awareness significantly predicted spring decoding outcomes when controlling for children's age and previous decoding ability. One aspect of classroom quality (emotional support) was also predictive of decoding, but there was no significant interaction between classroom quality and phonological processing. This study provides further evidence that phonological awareness is an important skill to assess in children with LI and that high-quality classroom environments can be positively associated with children's decoding outcomes.

  17. An Efficient Soft Decoder of Block Codes Based on Compact Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-09-01

    Soft-decision decoding is an NP-hard problem of great interest to developers of communication systems. We present an efficient soft-decision decoder of linear block codes based on the compact genetic algorithm (cGA) and compare its performance with various other decoding algorithms, including the Shakeel algorithm. The proposed algorithm uses the dual code, in contrast to the Shakeel algorithm, which uses the code itself. Hence, this new approach reduces the decoding complexity of high-rate codes. The complexity and an optimized version of this new algorithm are also presented and discussed.

  18. Sensitivity analysis of the channel estimation deviation to the MAP decoding algorithm

    Institute of Scientific and Technical Information of China (English)

    WAN Ke; FAN Ping-zhi

    2006-01-01

    As a necessary input parameter for the maximum a posteriori (MAP) decoding algorithm, the SNR is normally obtained from the channel estimation unit. Related research indicated that SNR estimation deviation degrades the performance of turbo decoding significantly. In this paper, the MAP decoding algorithm with SNR estimation deviation is investigated in detail, and the degradation mechanism of turbo decoding is explained analytically. The theoretical analysis and computer simulation disclose the specific reasons for the performance degradation when the SNR estimate is less than the actual value, and for the higher sensitivity of SNR estimation for long-frame turbo codes.

  19. On the Joint Error-and-Erasure Decoding for Irreducible Polynomial Remainder Codes

    CERN Document Server

    Yu, Jiun-Hung

    2012-01-01

A general class of polynomial remainder codes is considered. Such codes are very flexible in rate and length and include Reed-Solomon codes as a special case. As an extension of previous work, two joint error-and-erasure decoding approaches are proposed. In particular, both decoding approaches by means of a fixed transform are treated in a way compatible with error-only decoding. In the end, a collection of gcd-based decoding algorithms is obtained, some of which appear to be new even when specialized to Reed-Solomon codes.

  20. Analysis and design of raptor codes for joint decoding using Information Content evolution

    CERN Document Server

    Venkiah, Auguste; Declercq, David

    2007-01-01

In this paper, we present an analytical study of the convergence of raptor codes under joint decoding over the binary-input additive white Gaussian noise channel (BIAWGNC), and derive an optimization method. We use Information Content evolution under a Gaussian approximation, and focus on a new decoding scheme that proves to be more efficient: the joint decoding of the two code components of the raptor code. In our general model, the classical tandem decoding scheme appears as a subcase, and thus the design of LT codes is also possible.

  1. Continuous motion decoding from EMG using independent component analysis and adaptive model training.

    Science.gov (United States)

    Zhang, Qin; Xiong, Caihua; Chen, Wenbin

    2014-01-01

Surface electromyography (EMG) is widely used to decode human motion intention for robot movement control. Traditional motion decoding methods use pattern recognition to provide binary control commands, which can only move the robot in a limited set of predefined patterns. In this work, we propose a motion decoding method that accurately estimates 3-dimensional (3-D) continuous upper limb motion from multi-channel EMG signals alone. To protect the muscle activity estimates from motion artifacts and muscle crosstalk, which are especially prominent in upper limb motion, independent component analysis (ICA) was applied to extract the independent source EMG signals. The motion data were also reduced from a 4-manifold to a 2-manifold by principal component analysis (PCA). A hidden Markov model (HMM) was proposed to decode the motion from the EMG signals, after the model was trained by an adaptive model identification process. Experimental data were used to train the decoding model and validate the motion decoding performance. By comparing the decoded motion with the measured motion, the proposed strategy was found to be feasible for decoding 3-D continuous motion from EMG signals.

  2. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces

    Science.gov (United States)

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170

  4. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    Science.gov (United States)

    Lin, Shu

    1998-01-01

A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best-known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. The trellis representation of linear block codes, by contrast, remained largely unexplored for many years. There are two major reasons for this inactive period of research. First, most coding theorists at the time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, many coding theorists believed that algebraic decoding was the only way to decode these codes. These two beliefs seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their application to error control in digital communications, and led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and

  5. Central Decoding for Multiple Description Codes based on Domain Partitioning

    Directory of Open Access Journals (Sweden)

    M. Spiertz

    2006-01-01

Full Text Available Multiple Description Codes (MDC) can be used to trade redundancy against packet-loss resistance when transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared to Vaishampayan's quantizer-based MDC, domain-based MD coding is a simple approach for generating different descriptions by using a different quantizer for each description. Commonly, only the highest-rate quantizer is used for reconstruction. In this paper we investigate the benefit of also using the lower-rate quantizers to enhance the reconstruction quality at the decoder side. The comparison is done on artificial source data and on image data.

  6. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  7. Design of 4:16 decoder using reversible logic gates

    Directory of Open Access Journals (Sweden)

    Santhi Chebiyyam

    2016-04-01

Full Text Available Reversible logic has received great attention in recent years because of its reduction in power dissipation. It finds application in low-power digital design, quantum computing, nanotechnology, DNA computing, etc. A large body of research is currently ongoing on sequential and combinational circuits using reversible logic. Decoders are among the most important combinational logic circuits, and different approaches have been proposed for their design. In this article, we propose a novel design of a 4:16 decoder.
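Behaviourally, a 4:16 decoder maps four input bits to exactly one of sixteen one-hot outputs. A sketch of that input/output behaviour (a reversible-logic realisation would instead compose gates such as Toffoli or Fredkin; here only the truth-table behaviour is modelled):

```python
def decoder_4to16(a, b, c, d):
    """4-to-16 line decoder: the four input bits, a as MSB, select
    exactly one of sixteen outputs (one-hot encoding)."""
    index = (a << 3) | (b << 2) | (c << 1) | d
    return [1 if i == index else 0 for i in range(16)]

out = decoder_4to16(1, 0, 1, 1)  # inputs 1011 = 11
print(out.index(1))              # -> 11
```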

  8. Iterative Demodulation and Decoding Scheme with 16QAM

    Institute of Scientific and Technical Information of China (English)

    LIU Lin-nan; KUANG Jing-ming; LI Ming; FEI Ze-song

    2006-01-01

An iterative demodulation and decoding scheme is analyzed, and modulation labeling is considered to be one of the crucial factors in this scheme. By analyzing the existing mapping design criteria, four aspects are identified as key to choosing a label mapping. Based on this observation, a novel mapping design criterion is proposed and two label mappings are found according to it. Simulation results show that the performance of BICM-ID using the novel mappings is better than with the former ones. The extrinsic information transfer (EXIT) chart is introduced and used to evaluate the proposed mapping design criterion.

  9. Polynomial system solving for decoding linear codes and algebraic cryptanalysis

    OpenAIRE

    2009-01-01

    This thesis is devoted to applying symbolic methods to the problems of decoding linear codes and of algebraic cryptanalysis. The paradigm we employ here is as follows. We reformulate the initial problem in terms of systems of polynomial equations over a finite field. The solution(s) of such systems should yield a way to solve the initial problem. Our main tools for handling polynomials and polynomial systems in such a paradigm is the technique of Gröbner bases and normal form reductions. The ...

  10. A New Simple Ultrahigh Speed Decoding Algorithm for BCH Code

    Institute of Scientific and Technical Information of China (English)

    唐建军; 纪越峰

    2002-01-01

To meet the demands of forward error correction (FEC) technology in high-speed optical communication systems, a new simple decoding algorithm for triple-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed. Requiring no complicated matrix operations, division operations, or intricate iterative algorithms, the algorithm is highly efficient and fast because of its structural simplicity. Hardware emulation confirms that the algorithm is entirely feasible, and the introduction of a parallel structure greatly increases the decoding speed. The algorithm can be used in high-speed optical communication systems and other fields.
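The record gives no algorithmic details, but the flavour of division-free, table-style syndrome decoding can be sketched with the simplest member of the BCH family, the single-error-correcting Hamming(7,4) code (an illustrative stand-in, not the paper's triple-error-correcting design):

```python
# Parity-check matrix whose columns are the binary expansions of 1..7,
# so a nonzero syndrome directly names the (1-based) error position.
H = [[(col >> row) & 1 for col in range(1, 8)] for row in range(3)]

def decode_hamming74(received):
    """Correct up to one bit error in a 7-bit word: compute the
    3-bit syndrome and flip the position it points to."""
    syndrome = 0
    for row in range(3):
        bit = sum(H[row][i] * received[i] for i in range(7)) % 2
        syndrome |= bit << row
    corrected = list(received)
    if syndrome:
        corrected[syndrome - 1] ^= 1
    return corrected

# The all-zero word is a codeword; inject one error and recover it.
print(decode_hamming74([0, 0, 0, 0, 1, 0, 0]))  # -> [0, 0, 0, 0, 0, 0, 0]
```

Because the correction is a pure table lookup and bit flip, such decoders parallelise well in hardware, which is the property the abstract emphasises.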

  11. Joint Source-Channel Decoding Scheme for Image Transmission over Wireless Channel

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

We improve the iterative decoding algorithm by utilizing the "leaked" residual redundancy at the output of the source encoder, without changing the encoder structure, for the noisy channel. The experimental results show that using the residual redundancy of the compressed source in channel decoding is an effective way to improve error-correction performance.

  12. The Euclidean Algorithm for Generalized Minimum Distance Decoding of Reed-Solomon Codes

    CERN Document Server

    Kampf, Sabine

    2010-01-01

    This paper presents a method to merge Generalized Minimum Distance decoding of Reed-Solomon codes with the extended Euclidean algorithm. By merge, we mean that the steps taken to perform the Generalized Minimum Distance decoding are similar to those performed by the extended Euclidean algorithm. The resulting algorithm has a complexity of O(n^2).
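For readers unfamiliar with the tool being merged: the extended Euclidean algorithm returns gcd cofactors, and Reed-Solomon key-equation solvers run the same recursion on polynomials, stopping once the remainder degree falls below a threshold. An integer sketch (illustrative; the decoder proper works over polynomial rings):

```python
def extended_euclid(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_euclid(b, a % b)
    # Back-substitute: g = b*x + (a mod b)*y = a*y + b*(x - (a//b)*y)
    return g, y, x - (a // b) * y

g, x, y = extended_euclid(240, 46)
print(g, 240 * x + 46 * y)  # both equal gcd(240, 46) = 2
```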

  13. Decoding Ability in French as a Foreign Language and Language Learning Motivation

    Science.gov (United States)

    Erler, Lynn; Macaro, Ernesto

    2011-01-01

    This study examined the relationships between decoding ability (the ability to relate graphemes to phonemes) in French as a foreign language, self-reported use of such decoding, and dimensions of motivation, specifically self-efficacy and attribution, among young-beginner learners in England. It investigated whether these factors were related to a…

  14. Multi-Trial Guruswami–Sudan Decoding for Generalised Reed–Solomon Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Zeh, Alexander

    2013-01-01

    An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list...

  15. The Three Stages of Coding and Decoding in Listening Courses of College Japanese Specialty

    Science.gov (United States)

    Yang, Fang

    2008-01-01

    The main focus of research papers on listening teaching published in recent years is the theoretical meanings of decoding on the training of listening comprehension ability. Although in many research papers the bottom-up approach and top-down approach, information processing mode theory, are applied to illustrate decoding and to emphasize the…

  16. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  17. Iterative channel decoding of FEC-based multiple-description codes.

    Science.gov (United States)

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.
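The property the decoder exploits above is that erasure positions are known, unlike error positions. The degenerate case, a single XOR parity symbol recovering one erased symbol, shows the principle that RS parity generalises to many erasures (a toy illustration, not the paper's RS construction):

```python
def add_parity(symbols):
    """Append one XOR parity symbol: a toy rate-k/(k+1) erasure code."""
    parity = 0
    for s in symbols:
        parity ^= s
    return symbols + [parity]

def recover(codeword, erased_index):
    """Rebuild the single erased symbol at a *known* position by
    XOR-ing all surviving symbols -- erasure, not error, decoding."""
    value = 0
    for i, s in enumerate(codeword):
        if i != erased_index:
            value ^= s
    return value

cw = add_parity([5, 7, 3])   # -> [5, 7, 3, 1]
print(recover(cw, 1))        # -> 7, the erased symbol
```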

  19. Using convolutional decoding to improve time delay and phase estimation in digital communications

    Energy Technology Data Exchange (ETDEWEB)

    Ormesher, Richard C. (Albuquerque, NM); Mason, John J. (Albuquerque, NM)

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  20. Error Recovery Properties and Soft Decoding of Quasi-Arithmetic Codes

    Directory of Open Access Journals (Sweden)

    Christine Guillemot

    2007-08-01

Full Text Available This paper first introduces a new set of aggregated state models for soft-input decoding of quasi-arithmetic (QA) codes with a termination constraint. The decoding complexity with these models is linear in the sequence length. The aggregation parameter controls the tradeoff between decoding performance and complexity. It is shown that close-to-optimal decoding performance can be obtained with low values of the aggregation parameter, that is, with a complexity significantly reduced with respect to optimal QA bit/symbol models. The choice of the aggregation parameter depends on the synchronization recovery properties of the QA codes. This paper thus describes a method to estimate the probability mass function (PMF) of the gain/loss of symbols following a single bit error (i.e., of the difference between the number of encoded and decoded symbols). The entropy of the gain/loss turns out to be the average amount of information conveyed by a length constraint on both the optimal and aggregated state models. This quantity allows us to choose the value of the aggregation parameter that will lead to close-to-optimal decoding performance. It is shown that the optimum position for the length constraint is not the last time instant of the decoding process. This observation leads to the introduction of a new technique for robust decoding of QA codes with redundancy, which turns out to outperform techniques based on the concept of a forbidden symbol.

  1. A NEW DECODER OF SYNCHRONOUS OPTICAL CODE DIVISION MULTIPLE ACCESS SYSTEMS USING SEGMENTED CORRELATION

    Institute of Scientific and Technical Information of China (English)

    Li Ou; Wu Jiangxing; Lan Julong

    2001-01-01

A new segmented correlating decoder for synchronous optical CDMA using modified prime sequence codes is proposed. The performance of the proposed system is analyzed under the assumption of a Poisson shot-noise model for the receiver photodetector. The decoder technique is shown to be more effective at improving the bit error probability than the method using an optical hard-limiter.

  2. Row Reduction Applied to Decoding of Rank Metric and Subspace Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Nielsen, Johan Sebastian Rosenkilde; Li, Wenhui

    2017-01-01

    We show that decoding of ℓ-Interleaved Gabidulin codes, as well as list-ℓ decoding of Mahdavifar–Vardy (MV) codes can be performed by row reducing skew polynomial matrices. Inspired by row reduction of F[x] matrices, we develop a general and flexible approach of transforming matrices over skew...

  3. Word and Person Effects on Decoding Accuracy: A New Look at an Old Question

    Science.gov (United States)

    Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.

    2011-01-01

    The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was composed of 196 English-speaking Grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure.…

  4. Reconfigurable Forward Error Correction Decoder for Beyond 100 Gbps High Speed Optical Links

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Zibar, Darko;

    2015-01-01

    In this paper we propose a reconfigurable forward error correction decoder for beyond 100 Gbps high speed optical links. The decoders for product codes can be configured to support the applications at a rate of a multiple of 100 Gbps, which provides the flexibility and scalability....

  5. Block-Orthogonal Space-Time Code Structure and Its Impact on QRDM Decoding Complexity Reduction

    CERN Document Server

    Ren, Tian Peng; Yuen, Chau; Zhang, Er Yang

    2011-01-01

Full-rate space-time codes (STC) with rate equal to the number of transmit antennas have high multiplexing gain, but high decoding complexity even when decoded using reduced-complexity decoders such as sphere or QRDM decoders. In this paper, we introduce a new code property of STC called the block-orthogonal property, which can be exploited by QR-decomposition-based decoders to achieve significant decoding complexity reduction without performance loss. We show that this complexity reduction principle can benefit existing algebraic codes such as the Perfect and DjABBA codes due to their inherent (but previously undiscovered) block-orthogonal property. In addition, we construct and optimize new full-rate BOSTC (block-orthogonal STC) that further maximize the QRDM complexity reduction potential. Simulation results of bit error rate (BER) performance against decoding complexity show that the new BOSTC outperforms all previously known codes as long as the QRDM decoder operates in reduced-complexity mode, and the co...

  6. Early Word Decoding Ability as a Longitudinal Predictor of Academic Performance

    Science.gov (United States)

    Nordström, Thomas; Jacobson, Christer; Söderberg, Pernilla

    2016-01-01

    This study, using a longitudinal design with a Swedish cohort of young readers, investigates if children's early word decoding ability in second grade can predict later academic performance. In an effort to estimate the unique effect of early word decoding (grade 2) with academic performance (grade 9), gender and non-verbal cognitive ability were…

  7. Modelling the Implicit Learning of Phonological Decoding from Training on Whole-Word Spellings and Pronunciations

    Science.gov (United States)

    Pritchard, Stephen C.; Coltheart, Max; Marinus, Eva; Castles, Anne

    2016-01-01

    Phonological decoding is central to learning to read, and deficits in its acquisition have been linked to reading disorders such as dyslexia. Understanding how this skill is acquired is therefore important for characterising reading difficulties. Decoding can be taught explicitly, or implicitly learned during instruction on whole word spellings…

  8. Improved Max-Log-MAP Turbo Decoding by Maximization of Mutual Information Transfer

    Directory of Open Access Journals (Sweden)

    Karimi Hamid Reza

    2005-01-01

    Full Text Available The demand for low-cost and low-power decoder chips has resulted in renewed interest in low-complexity decoding algorithms. In this paper, a novel theoretical framework for improving the performance of turbo decoding schemes that use the max-log-MAP algorithm is proposed. This framework is based on the concept of maximizing the transfer of mutual information between the component decoders. The improvements in performance can be achieved by using optimized iteration-dependent correction weights to scale the a priori information at the input of each component decoder. A method for the offline computation of the correction weights is derived. It is shown that a performance which approaches that of a turbo decoder using the optimum MAP algorithm can be achieved, while maintaining the advantages of low complexity and insensitivity to input scaling inherent in the max-log-MAP algorithm. The resulting improvements in convergence of the turbo decoding process and the expedited transfer of mutual information between the component decoders are illustrated via extrinsic information transfer (EXIT charts.
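The gap that the correction weights compensate for is the dropped Jacobian term: full MAP computes max*(a, b) = ln(e^a + e^b), while max-log-MAP keeps only max(a, b). A sketch of the two operations (values are illustrative):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by full MAP: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-log-MAP approximation: the correction term is dropped,
    which iteration-dependent scaling weights later compensate for."""
    return max(a, b)

print(max_star(1.0, 1.0) - max_log(1.0, 1.0))  # ln 2: worst case, equal inputs
print(max_star(5.0, 0.0) - max_log(5.0, 0.0))  # tiny: far-apart inputs
```

The approximation error is bounded by ln 2 per pairwise operation, but it accumulates over the trellis recursions, which is why the decoder's extrinsic outputs become over-confident without scaling.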

  9. A Study on Decoding Models for the Reconstruction of Hand Trajectories from the Human Magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Hong Gi Yeom

    2014-01-01

    Full Text Available Decoding neural signals into control outputs has been a key to the development of brain-computer interfaces (BCIs. While many studies have identified neural correlates of kinematics or applied advanced machine learning algorithms to improve decoding performance, relatively less attention has been paid to optimal design of decoding models. For generating continuous movements from neural activity, design of decoding models should address how to incorporate movement dynamics into models and how to select a model given specific BCI objectives. Considering nonlinear and independent speed characteristics, we propose a hybrid Kalman filter to decode the hand direction and speed independently. We also investigate changes in performance of different decoding models (the linear and Kalman filters when they predict reaching movements only or predict both reach and rest. Our offline study on human magnetoencephalography (MEG during point-to-point arm movements shows that the performance of the linear filter or the Kalman filter is affected by including resting states for training and predicting movements. However, the hybrid Kalman filter consistently outperforms others regardless of movement states. The results demonstrate that better design of decoding models is achieved by incorporating movement dynamics into modeling or selecting a model according to decoding objectives.
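The scalar skeleton underlying any such Kalman decoder is a predict/update loop on a state estimate and its variance. A minimal random-walk sketch (the state here is a single kinematic value with made-up noise parameters; real decoders use multivariate states driven by neural features):

```python
def kalman_scalar(observations, q=0.01, r=1.0):
    """Scalar Kalman filter for the random-walk model
    x_k = x_{k-1} + w (var q),  z_k = x_k + v (var r)."""
    x, p = observations[0], 1.0   # initialise on the first sample
    estimates = [x]
    for z in observations[1:]:
        p += q                    # predict: process noise grows variance
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the innovation
        p *= 1.0 - k
        estimates.append(x)
    return estimates

est = kalman_scalar([1.2, 0.8, 1.1, 0.9, 1.0])
print(est[-1])  # smoothed estimate, pulled between the noisy samples
```

The study's hybrid design runs separate filters of this kind for direction and speed, whose dynamics differ; the sketch shows only the shared recursion.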

  10. High Hardware Utilization and Low Memory Block Requirement Decoding of QC-LDPC Codes

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ling; LIU Rongke; HOU Yi; ZHANG Xiaolin

    2012-01-01

This paper presents a simple yet effective decoding scheme for general quasi-cyclic low-density parity-check (QC-LDPC) codes, which not only achieves high hardware utilization efficiency (HUE), but also brings about a great reduction in memory blocks without any performance degradation. The main idea is to split the check matrix into several row blocks, then to perform the improved message-passing computations sequentially, block by block. As the decoding algorithm improves, the sequential tie between the two-phase computations is broken, so that the two phases can be overlapped, which brings in high HUE. Two overlapping schemes are also presented, each of which suits a different situation. In addition, an efficient memory arrangement scheme is proposed to reduce the large memory block requirement of the LDPC decoder. As an example, for the rate-0.4 LDPC code selected from Chinese Digital TV Terrestrial Broadcasting (DTTB), our decoding saves over 80% of memory blocks compared with conventional decoding, and the decoder achieves 0.97 HUE. Finally, the rate-0.4 LDPC decoder is implemented on an FPGA device EP2S30 (speed grade -5). Using 8 row processing units, the decoder achieves a maximum net throughput of 28.5 Mbps at 20 iterations.

  11. A graphical model framework for decoding in the visual ERP-based BCI speller

    NARCIS (Netherlands)

    Martens, S.M.M.; Mooij, J.M.; Hill, N.J.; Farquhar, J.D.R.; Schölkopf, B.

    2011-01-01

    We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on

  12. Does Knowing What a Word Means Influence How Easily Its Decoding Is Learned?

    Science.gov (United States)

    Michaud, Mélissa; Dion, Eric; Barrette, Anne; Dupéré, Véronique; Toste, Jessica

    2017-01-01

    Theoretical models of word recognition suggest that knowing what a word means makes it easier to learn how to decode it. We tested this hypothesis with at-risk young students, a group that often responds poorly to conventional decoding instruction in which word meaning is not addressed systematically. A total of 53 first graders received explicit…

  13. Foreign Language Performance Requirements for Decoding by Native Speakers: A Study of Intelligibility of Foreigners' English.

    Science.gov (United States)

    Wigdorsky-Vogelsang, Leopoldo

    This work is intended to find replies to practical questions, such as how well native speakers of Spanish are decoded by native speakers of English, which errors interfere with decoding by the listener, and what the implications of the study might be for teaching. Fifteen Chileans were asked to tell stories in English, and several panels of native…

  14. 47 CFR 15.119 - Closed caption decoder requirements for analog television receivers.

    Science.gov (United States)

    2010-10-01

    ... FREQUENCY DEVICES Unintentional Radiators § 15.119 Closed caption decoder requirements for analog television... color assignment. Any color Mid-Row Code will turn off italics. If the least significant bit of a... may cause problems. Caption decoding circuitry must function properly when receiving signals from...

  15. Why introverts can't always tell who likes them: multitasking and nonverbal decoding.

    Science.gov (United States)

    Lieberman, M D; Rosenthal, R

    2001-02-01

    Despite personality theories suggesting that extraversion correlates with social skill, most studies have not found a positive correlation between extraversion and nonverbal decoding. The authors propose that introverts are less able to multitask and thus are poorer at nonverbal decoding, but only when it is a secondary task. Prior research has uniformly extracted the nonverbal decoding from its multitasking context and, consequently, never tested this hypothesis. In Studies 1-3, introverts exhibited a nonverbal decoding deficit, relative to extraverts, but only when decoding was a secondary rather than a primary task within a multitasking context. In Study 4, extraversion was found to correlate with central executive efficiency (r = .42) but not with storage capacity (r = .04). These results are discussed in terms of arousal theories of extraversion and the role of catecholamines (dopamine and norepinephrine) in prefrontal function.

  16. Performance Analysis of Algebraic Soft-Decision Decoding of Reed-Solomon Codes

    CERN Document Server

    Duggan, Andrew

    2007-01-01

    We investigate the decoding region for Algebraic Soft-Decision Decoding (ASD) of Reed-Solomon codes in a discrete, memoryless, additive-noise channel. An expression is derived for the error correction radius within which the soft-decision decoder produces a list that contains the transmitted codeword. The error radius for ASD is shown to be larger than that of Guruswami-Sudan hard-decision decoding for a subset of low-rate codes. These results are also extended to multivariable interpolation in the sense of Parvaresh and Vardy. An upper bound is then presented for ASD's probability of error, where an error is defined as the event that the decoder selects an erroneous codeword from its list. This new definition gives a more accurate bound on the probability of error of ASD than the results available in the literature.

  17. Row Reduction Applied to Decoding of Rank Metric and Subspace Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Nielsen, Johan Sebastian Rosenkilde; Li, Wenhui;

    2017-01-01

We show that decoding of ℓ-Interleaved Gabidulin codes, as well as list-ℓ decoding of Mahdavifar–Vardy (MV) codes, can be performed by row reducing skew polynomial matrices. Inspired by row reduction of F[x] matrices, we develop a general and flexible approach for transforming matrices over skew polynomial rings into a certain reduced form. We apply this to solve generalised shift register problems over skew polynomial rings which occur in decoding ℓ-Interleaved Gabidulin codes. We obtain an algorithm with complexity O(ℓμ^2), where μ measures the size of the input problem and is proportional to the code length n in the case of decoding. Further, we show how to perform the interpolation step of list-ℓ decoding of MV codes in complexity O(ℓn^2), where n is the number of interpolation constraints.

  18. Illustrating Color Evolution and Color Blindness by the Decoding Model of Color Vision

    CERN Document Server

    Lu, Chenguang

    2011-01-01

    A symmetrical model of color vision, the decoding model as a new version of zone model, was introduced. The model adopts new continuous-valued logic and works in a way very similar to the way a 3-8 decoder in a numerical circuit works. By the decoding model, Young and Helmholtz's tri-pigment theory and Hering's opponent theory are unified more naturally; opponent process, color evolution, and color blindness are illustrated more concisely. According to the decoding model, we can obtain a transform from RGB system to HSV system, which is formally identical to the popular transform for computer graphics provided by Smith (1978). Advantages, problems, and physiological tests of the decoding model are also discussed.
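The RGB-to-HSV transform mentioned at the end of this record, Smith's (1978) hexcone transform, is short enough to state directly. The sketch below follows the standard formulation (hue as a fraction of a full turn), rather than anything specific to the decoding model.

```python
def rgb_to_hsv(r, g, b):
    """Smith-style hexcone RGB -> HSV; inputs in [0,1], hue in [0,1)."""
    v = max(r, g, b)                 # value = dominant primary
    c = v - min(r, g, b)             # chroma
    s = 0.0 if v == 0 else c / v     # saturation
    if c == 0:
        h = 0.0                      # achromatic: hue undefined, use 0
    elif v == r:
        h = ((g - b) / c) % 6        # between yellow and magenta
    elif v == g:
        h = (b - r) / c + 2          # between cyan and yellow
    else:
        h = (r - g) / c + 4          # between magenta and cyan
    return h / 6, s, v
```

The result agrees with the Python standard library's `colorsys.rgb_to_hsv`, which implements the same hexcone transform.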

  19. Design and Implementation of a Novel Algorithm for Iterative Decoding of Product Codes

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A novel product code iterative decoding algorithm and its high speed implementation scheme are proposed in this paper. Based on partial combination of selected columns of check matrix, the reduced-complexity syndrome decoding method is proposed to decode sub-codes of product code and deliver soft output information. So iterative decoding of product codes is possible. The fast sorting algorithm and a look-up method are proposed for high speed implementation of this algorithm. Compared to the conventional weighing iterative algorithm, the proposed algorithm has lower complexity while offering better performance, which is demonstrated by simulations and implementation analysis. The implementation scheme and verilog HDL simulation show that it is feasible to achieve high speed decoding with the proposed algorithm.

  20. Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity

    CERN Document Server

    Singh, Arun; Jalden, Joakim

    2011-01-01

The work identifies the first lattice decoding solution that achieves, in the general outage-limited MIMO setting and in the high-rate and high-SNR limit, both a vanishing gap to the error performance of the (DMT-optimal) exact solution of preprocessed lattice decoding, as well as a computational complexity that is subexponential in the number of codeword bits. The proposed solution employs lattice reduction (LR)-aided regularized (lattice) sphere decoding and proper timeout policies. These performance and complexity guarantees hold for most MIMO scenarios, all reasonable fading statistics, all channel dimensions and all full-rate lattice codes. In sharp contrast to the above manageable complexity, the complexity of other standard preprocessed lattice decoding solutions is shown here to be extremely high. Specifically, the work is the first to quantify the complexity of these lattice (sphere) decoding solutions and to prove the surprising result that the complexity required to achieve a certain rate-reliability pe...

  1. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself; this new approach therefore reduces the complexity of decoding high-rate codes. We simulated our algorithm over various transmission channels. The performance of this algorithm is investigated and compared with competing decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.
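The general shape of such genetic decoders (evolve candidate messages, score their codewords against the received soft vector) can be sketched on a toy (7,4) Hamming code. This is a generic illustration, not the dual-domain algorithm of the record; the population is seeded with the hard-decision message so the toy converges reliably.

```python
import numpy as np

# Toy GA soft-decision decoder for a systematic (7,4) Hamming code.
# Candidate 4-bit messages are evolved; fitness is the correlation of the
# candidate codeword (BPSK: 0 -> +1, 1 -> -1) with the received vector r.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def ga_decode(r, pop=8, gens=20, pmut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, 4))
    P[0] = (np.asarray(r[:4]) < 0).astype(int)      # seed with hard decisions
    def fitness(pop_arr):
        cw = pop_arr @ G % 2
        return ((1 - 2 * cw) * r).sum(axis=1)        # correlation with r
    for _ in range(gens):
        P = P[np.argsort(-fitness(P))]               # elitist: fittest first
        children = []
        for _ in range(pop // 2):
            a, b = P[rng.integers(0, pop // 2, 2)]   # parents from fitter half
            cut = rng.integers(1, 4)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(4) < pmut            # bit-flip mutation
            children.append(child)
        P = np.vstack([P[:pop // 2], children])
    return P[np.argmax(fitness(P))]
```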

  2. Reduced-Complexity Decoder of Long Reed-Solomon Codes Based on Composite Cyclotomic Fourier Transforms

    CERN Document Server

    Wu, Xuebin

    2011-01-01

    Long Reed-Solomon (RS) codes are desirable for digital communication and storage systems due to their improved error performance, but the high computational complexity of their decoders is a key obstacle to their adoption in practice. As discrete Fourier transforms (DFTs) can evaluate a polynomial at multiple points, efficient DFT algorithms are promising in reducing the computational complexities of syndrome based decoders for long RS codes. In this paper, we first propose partial composite cyclotomic Fourier transforms (CCFTs) and then devise syndrome based decoders for long RS codes over large finite fields based on partial CCFTs. The new decoders based on partial CCFTs achieve a significant saving of computational complexities for long RS codes. Since partial CCFTs have modular and regular structures, the new decoders are suitable for hardware implementations. To further verify and demonstrate the advantages of partial CCFTs, we implement in hardware the syndrome computation block for a $(2720, 2550)$ sho...

  3. An Interpolation Procedure for List Decoding Reed--Solomon codes Based on Generalized Key Equations

    CERN Document Server

    Zeh, Alexander; Augot, Daniel

    2011-01-01

The key step of syndrome-based decoding of Reed-Solomon codes up to half the minimum distance is to solve the so-called Key Equation. List decoding algorithms, capable of decoding beyond half the minimum distance, are based on interpolation and factorization of multivariate polynomials. This article provides a link between syndrome-based decoding approaches based on Key Equations and the interpolation-based list decoding algorithms of Guruswami and Sudan for Reed-Solomon codes. The original interpolation conditions of Guruswami and Sudan for Reed-Solomon codes are reformulated in terms of a set of Key Equations. These equations provide a structured homogeneous linear system of equations of Block-Hankel form, which can be solved by an adaptation of the Fundamental Iterative Algorithm. For an $(n,k)$ Reed-Solomon code, a multiplicity $s$ and a list size $\ell$, our algorithm has time complexity $O(\ell s^4 n^2)$.
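For orientation, the classical Key Equation that this record generalises relates the error-locator polynomial $\Lambda(x)$ and the syndrome polynomial $S(x)$; in the standard formulation for a code correcting $t$ errors it reads

```latex
\Lambda(x)\, S(x) \equiv \Omega(x) \pmod{x^{2t}},
\qquad \deg \Omega(x) < \deg \Lambda(x) \le t ,
```

where $\Omega(x)$ is the error-evaluator polynomial. The reformulation in the record replaces this single congruence by a set of such equations whose coefficient matrix has Block-Hankel structure.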

  4. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    Science.gov (United States)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in designing a real system, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and delay in operations. Recently, a new coeff_token variable-length decoding method has been suggested to achieve memory access reduction. However, it still requires a large portion of the total memory access in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in the variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used for reconstructing each coeff_token element in the proposed method.

  5. A method for decoding the neurophysiological spike-response transform.

    Science.gov (United States)

    Stern, Estee; García-Crescioni, Keyla; Miller, Mark W; Peskin, Charles S; Brezina, Vladimir

    2009-11-15

Many physiological responses elicited by neuronal spikes (intracellular calcium transients, synaptic potentials, muscle contractions) are built up of discrete, elementary responses to each spike. However, the spikes occur in trains of arbitrary temporal complexity, and each elementary response not only sums with previous ones, but can itself be modified by the previous history of the activity. A basic goal in system identification is to characterize the spike-response transform in terms of a small number of functions (the elementary response kernel and additional kernels or functions that describe the dependence on previous history) that will predict the response to any arbitrary spike train. Here we do this by developing further and generalizing the "synaptic decoding" approach of Sen et al. (1996). Given the spike times in a train and the observed overall response, we use least-squares minimization to construct the best estimated response and, at the same time, best estimates of the elementary response kernel and the other functions that characterize the spike-response transform. We avoid the need for any specific initial assumptions about these functions by using techniques of mathematical analysis and linear algebra that allow us to solve simultaneously for all of the numerical function values treated as independent parameters. The functions are such that they may be interpreted mechanistically. We examine the performance of the method as applied to synthetic data. We then use the method to decode real synaptic and muscle contraction transforms.
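The core estimation step described here, least-squares recovery of the elementary response kernel from a spike train and the summed response, can be sketched with a linear model in which the observed response is the superposition of one kernel per spike. This deliberately ignores the history-dependence functions and is only a minimal illustration of the idea.

```python
import numpy as np

def estimate_kernel(spike_idx, response, kernel_len):
    """Least-squares estimate of the elementary response kernel, assuming the
    response is a pure superposition of one kernel per spike (no history terms).
    Each kernel sample is treated as an independent parameter, as in the record."""
    T = len(response)
    X = np.zeros((T, kernel_len))      # design matrix: one column per kernel lag
    for s in spike_idx:
        for k in range(kernel_len):
            if s + k < T:
                X[s + k, k] += 1       # spike at s contributes kernel[k] at s+k
    kernel, *_ = np.linalg.lstsq(X, response, rcond=None)
    return kernel
```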

  6. Frame Decoder for Consultative Committee for Space Data Systems (CCSDS)

    Science.gov (United States)

    Reyes, Miguel A. De Jesus

    2014-01-01

GNU Radio is a free and open source development toolkit that provides signal processing to implement software radios. It can be used with low-cost external RF hardware to create software-defined radios, or without hardware in a simulation-like environment. GNU Radio applications are primarily written in Python and C++. The Universal Software Radio Peripheral (USRP) is a computer-hosted software radio designed by Ettus Research. The USRP connects to a host computer via high-speed Gigabit Ethernet. Using the open source Universal Hardware Driver (UHD), we can run GNU Radio applications using the USRP. An SDR is a "radio in which some or all physical layer functions are software defined" (IEEE definition). A radio is any kind of device that wirelessly transmits or receives radio frequency (RF) signals. An SDR is a radio communication system where components that have typically been implemented in hardware are implemented in software. GNU Radio has a generic packet decoder block that is not optimized for CCSDS frames. Using this generic packet decoder will add bytes to the CCSDS frames and will not permit bit error correction using Reed-Solomon. The CCSDS frames consist of 256 bytes, including a 32-bit sync marker (0x1ACFFC1D). These frames are generated by the Space Data Processor, and GNU Radio performs the modulation and framing operations, including frame synchronization.

  7. Decoding covert shifts of attention induced by ambiguous visuospatial cues

    Directory of Open Access Journals (Sweden)

    Romain eTrachel

    2015-06-01

Simple and unambiguous visual cues (e.g. an arrow) can be used to trigger covert shifts of visual attention away from the center of gaze. The processing of visual stimuli is enhanced at the attended location. Covert shifts of attention modulate the power of cerebral oscillations in the alpha band over parietal and occipital regions. These modulations are sufficiently robust to be decoded on a single-trial basis from electroencephalography (EEG) signals. It is often assumed that covert attention shifts are under voluntary control and also occur in more natural and complex environments, but there is no direct evidence to support this assumption. We address this important issue by using random-dot stimuli to cue one of two opposite locations, where a visual target is presented. We contrast two conditions in which the random-dot motion is either predictive of the target location or contains ambiguous information. Behavioral results show attention shifts in anticipation of the visual target in both conditions. In addition, these attention shifts involve similar neural sources, and the EEG can be decoded on a single-trial basis. These results shed new light on the behavioral and neural correlates of visuospatial attention, with implications for Brain-Computer Interfaces (BCIs) based on covert attention shifts.
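The alpha-band power modulations that make such decoding possible are straightforward to compute. A minimal single-channel bandpower feature via the FFT might look as follows; the sampling rate and band edges are generic choices, not taken from the record.

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Mean power of signal x in the band [f_lo, f_hi] Hz, via the rFFT."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # periodogram (unnormalized units)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()
```

A left-versus-right attention classifier would compare such alpha (8-12 Hz) features over parieto-occipital channels on each trial.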

  8. Decoding quantum information via the Petz recovery map

    Science.gov (United States)

    Beigi, Salman; Datta, Nilanjana; Leditzky, Felix

    2016-08-01

We obtain a lower bound on the maximum number of qubits, $Q_{n,\varepsilon}(\mathcal{N})$, which can be transmitted over $n$ uses of a quantum channel $\mathcal{N}$, for a given non-zero error threshold $\varepsilon$. To obtain our result, we first derive a bound on the one-shot entanglement transmission capacity of the channel, and then compute its asymptotic expansion up to the second order. In our method to prove this achievability bound, the decoding map, used by the receiver on the output of the channel, is chosen to be the Petz recovery map (also known as the transpose channel). Our result, in particular, shows that this choice of the decoder can be used to establish the coherent information as an achievable rate for quantum information transmission. Applying our achievability bound to the 50-50 erasure channel (which has zero quantum capacity), we find that there is a sharp error threshold above which $Q_{n,\varepsilon}(\mathcal{N})$ scales as $\sqrt{n}$.
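The Petz recovery map has a closed form, $R_{\sigma,\mathcal{N}}(X) = \sigma^{1/2}\, \mathcal{N}^{\dagger}\!\big(\mathcal{N}(\sigma)^{-1/2}\, X\, \mathcal{N}(\sigma)^{-1/2}\big)\, \sigma^{1/2}$, which is easy to realize numerically for a channel given in Kraus form. The amplitude-damping example in the test is an illustration chosen here, not taken from the record.

```python
import numpy as np

def herm_pow(M, p):
    """M**p for a Hermitian positive-definite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w ** p) @ V.conj().T

def channel(X, kraus):
    """Apply the channel N(X) = sum_i K_i X K_i^dag."""
    return sum(K @ X @ K.conj().T for K in kraus)

def adjoint_channel(Y, kraus):
    """Apply the adjoint map N^dag(Y) = sum_i K_i^dag Y K_i."""
    return sum(K.conj().T @ Y @ K for K in kraus)

def petz_recovery(X, sigma, kraus):
    """Petz map w.r.t. reference state sigma and channel N in Kraus form."""
    Ns_inv_sqrt = herm_pow(channel(sigma, kraus), -0.5)
    inner = adjoint_channel(Ns_inv_sqrt @ X @ Ns_inv_sqrt, kraus)
    return herm_pow(sigma, 0.5) @ inner @ herm_pow(sigma, 0.5)
```

Its defining property, used in the test below, is exact recovery of the reference state: R(N(σ)) = σ for any trace-preserving N with N(σ) invertible.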

  9. Artificial spatiotemporal touch inputs reveal complementary decoding in neocortical neurons.

    Science.gov (United States)

    Oddo, Calogero M; Mazzoni, Alberto; Spanne, Anton; Enander, Jonas M D; Mogensen, Hannes; Bengtsson, Fredrik; Camboni, Domenico; Micera, Silvestro; Jörntell, Henrik

    2017-04-04

Investigations of the mechanisms of touch perception and decoding have been hampered by difficulties in achieving invariant patterns of skin sensor activation. To obtain reproducible spatiotemporal patterns of activation of sensory afferents, we used an artificial fingertip equipped with an array of neuromorphic sensors. The artificial fingertip was used to transduce real-world haptic stimuli into spatiotemporal patterns of spikes. These spike patterns were delivered to the skin afferents of the second digit of rats via an array of stimulation electrodes. Combined with low-noise intra- and extracellular recordings from neocortical neurons in vivo, this approach provided a previously inaccessible high-resolution analysis of the representation of tactile information in the neocortical neuronal circuitry. The results indicate high information content in individual neurons and reveal multiple novel neuronal tactile coding features, such as heterogeneous and complementary spatiotemporal input selectivity even between neighboring neurons. Such neuronal heterogeneity and complementarity can potentially support a very high decoding capacity in a limited population of neurons. Our results also indicate a potential neuroprosthetic approach to communicate with the brain at a very high resolution and provide a potential novel solution for evaluating the degree or state of neurological disease in animal models.

  10. A New Low Computational Complexity Sphere Decoding Algorithm

    CERN Document Server

    Li, Boyu

    2009-01-01

The complexity of sphere decoding (SD) has been widely studied because the algorithm is vital in providing optimal maximum-likelihood performance with low complexity. In this paper, we propose a proper tree search technique that reduces overall SD computational complexity without sacrificing performance. We build a check-table to pre-calculate and store some terms, temporarily store some mid-stage terms, and take advantage of a new lattice representation from our previous work. This method allows a significant reduction in the number of operations required to decode the transmitted symbols. We consider 2x2 and 4x4 systems employing 4-QAM and 64-QAM, and show that this approach achieves large gains in the average number of real multiplications and real additions, which range from 70% to 90% and 40% to 75% respectively, depending on the number of antennas and the constellation size of the modulation schemes. We also show that these complexity gains become greater when the system dimension and the modulation levels bec...

  11. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    Science.gov (United States)

    Ifrim, Sandra

    2015-12-01

The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode the signals emitted by casino customers. The model output consists of a historical path of mental states and the cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals performed by casino clientele facing gambling problems, as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e. the judgment is extremely faulty, might the cumulated social costs be distorted.
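The signal-decoding core of such a model is the Viterbi algorithm over hidden mental states. A compact log-domain version is sketched below; the two-state transition and emission parameters are made up purely for illustration, not taken from the record.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete HMM.
    pi: initial probs, A[i, j]: transition i -> j, B[i, k]: emission prob
    of observation symbol k in state i. Returns the state sequence."""
    T, n = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)        # scores[i, j]: reach j via i
        back[t] = np.argmax(scores, axis=0)       # best predecessor of each j
        logd = scores[back[t], np.arange(n)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(back[t, path[-1]])
    return path[::-1]
```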

  12. Decoding the Locus of Covert Visuospatial Attention from EEG Signals.

    Science.gov (United States)

    Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre

    2016-01-01

Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether we may obtain a more precise spatial localization of the focus of attention based on the EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant valid prediction of the location of the focus of visuospatial attention (DA = 57%). Implications for visuospatial attention decoding and future paths for research are discussed.

  13. Implementation of a MAP decoder for use in UMTS receivers

    Science.gov (United States)

    Habermann, Joachim

    2001-10-01

Inter-user interference in mobile radio systems based on CDMA can be reduced with the aid of multi-user detection. As a consequence of the reduced interference, the system capacity is increased for a given quality of service. The forthcoming UMTS system based on CDMA and FDD will apply long scrambling codes; thus, from a receiver implementation point of view, random spreading codes are used. Random spreading codes, however, increase implementation complexity dramatically in a multi-user detection receiver. A receiver which shows both an acceptable implementation complexity and a bit error rate performance close to the single-user bound is the coded parallel interference cancellation (PIC) receiver. In principle, the coded PIC algorithm performs the following operations: after despreading and deinterleaving, the receiver calculates soft values of the coded data with the aid of the maximum a posteriori (MAP) algorithm. The soft values are then respread and rechanneled. If all users are considered in the receiver, the multiple access interference can, ideally, be cancelled out. Through further stages in the receiver, i.e., doing the same procedure again, the performance of the single-user receiver can approximately be obtained, as simulations have shown. In order to implement the coded PIC algorithm, its most complex component, which is the MAP decoder, has to be investigated in more detail. MAP decoding requires the calculation of probabilities and their storage over the entire data block because of a forward and backward recursion. These probabilities are obtained through multiplications and additions. To simplify the MAP algorithm on a large scale, conventionally the LOGMAP algorithm is used, which substitutes multiplications by additions with the aid of the logarithmic function and an additional correction function. To come up with a VHDL implementation of the LOGMAP algorithm, firstly a study into the bit resolution of a fixed-point implementation is…

  14. Turbo Interference Cancellation/Decoding for Convolutionally Coded Multi-Carrier DS-CDMA

    Institute of Scientific and Technical Information of China (English)

GAN Liangcai; XU Guoxing; HUANG Tianxi

    2005-01-01

In this paper, by applying the Turbo principle to convolutionally coded multi-carrier Direct-sequence code-division multiple access (DS-CDMA), we propose Turbo parallel interference cancellation integrating frequency diversity combining (FDC-PIC), termed Turbo FDC-PIC/decoding, and by analysis several modifications are given. We analyze and simulate Turbo FDC-PIC/decoding and its modifications by computer. The simulation results show that Turbo FDC-PIC/decoding acquires better performance with a posteriori log-likelihood ratio (LLR) feedback than with a priori LLR feedback. With either hard or soft decisions, Turbo FDC-PIC/decoding with a posteriori LLR feedback acquires better performance, while Turbo FDC-PIC/decoding with a priori LLR feedback acquires good performance only with soft decisions. Besides, Turbo FDC-PIC/decoding with a posteriori LLR feedback has a lower complexity than that with a priori LLR feedback, which indicates that Turbo FDC-PIC/decoding with a posteriori LLR feedback is more favorable for future application because of its better performance and relatively lower implementation complexity.

  15. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these types of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
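The role of the zero ("do not care") symbol at the decoding step can be seen in a minimal ternary ECOC decoder. The distance below simply skips zero entries and normalizes by each class's number of active classifiers, which is one simple way (among the strategies the paper taxonomizes) to avoid the bias the zero symbol introduces; the coding matrix is an illustrative choice.

```python
import numpy as np

def ecoc_decode(pred, M):
    """pred: binary classifier outputs in {-1, +1}; M: ternary coding matrix
    (classes x classifiers) with entries in {-1, 0, +1}. Zero entries are
    'do not care' and are excluded from the per-class normalized distance."""
    active = M != 0
    dist = ((M != pred) & active).sum(axis=1) / active.sum(axis=1)
    return int(np.argmin(dist))

M = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])   # illustrative 3-class ternary code matrix
```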

  16. On the Computational Complexity of Sphere Decoder for Lattice Space-Time Coded MIMO Channel

    CERN Document Server

    Abediseid, Walid

    2011-01-01

The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the multi-input multi-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Specifically, we derive the asymptotic tail distribution of the decoder's computational complexity in the high signal-to-noise ratio (SNR) regime. For the uncoded $M\times N$ MIMO channel (e.g., V-BLAST), the analysis in [6] revealed that the tail distribution of such a decoder is of Pareto type with a tail exponent equivalent to $N-M+1$. In our analysis, we show that the tail exponent of the sphere decoder's complexity distribution is equivalent to the diversity-multiplexing tradeoff achieved by LAST coding and lattice decoding schemes. This leads to extending the channel's tradeoff to include the decoding complexity. Moreover, we show analytically how minimum mean-square-error decisio...

  17. Decoding the Semantic Content of Natural Movies from Human Brain Activity

    Science.gov (United States)

    Huth, Alexander G.; Lee, Tyler; Nishimoto, Shinji; Bilenko, Natalia Y.; Vu, An T.; Gallant, Jack L.

    2016-01-01

    One crucial test for any quantitative model of the brain is to show that the model can be used to accurately decode information from evoked brain activity. Several recent neuroimaging studies have decoded the structure or semantic content of static visual images from human brain activity. Here we present a decoding algorithm that makes it possible to decode detailed information about the object and action categories present in natural movies from human brain activity signals measured by functional MRI. Decoding is accomplished using a hierarchical logistic regression (HLR) model that is based on labels that were manually assigned from the WordNet semantic taxonomy. This model makes it possible to simultaneously decode information about both specific and general categories, while respecting the relationships between them. Our results show that we can decode the presence of many object and action categories from averaged blood-oxygen level-dependent (BOLD) responses with a high degree of accuracy (area under the ROC curve > 0.9). Furthermore, we used this framework to test whether semantic relationships defined in the WordNet taxonomy are represented the same way in the human brain. This analysis showed that hierarchical relationships between general categories and atypical examples, such as organism and plant, did not seem to be reflected in representations measured by BOLD fMRI. PMID:27781035
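The headline metric in this record, area under the ROC curve, reduces for tie-free scores to the Mann-Whitney rank statistic; a minimal version:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic.
    Assumes binary labels in {0, 1} and no tied scores (ties would need
    midranks); not specific to the record's HLR decoder."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # AUC = P(score of a random positive > score of a random negative)
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```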

  18. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    Science.gov (United States)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to reduce the power consumption of the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with such a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best trade-off among rate, distortion and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of its QTCs and associated MVs, which is verified experimentally. The model provides good estimates for a variety of bit streams. Practical applications of this complexity model are also discussed.
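    The abstract states only that MB decoding complexity is modeled as a function of QTC and MV counts. A hedged sketch of how an encoder might use such a model in rate-distortion-complexity mode decision follows; the linear form, all coefficients, and the candidate-mode numbers are hypothetical placeholders, not values from the paper:

    ```python
    def mb_decoding_complexity(n_qtc, n_mv, coeff_qtc=1.0, coeff_mv=2.5, overhead=10.0):
        """Estimate CABAC decoding complexity (arbitrary cycle units) of one
        macroblock as a linear function of its quantized transformed
        coefficients (QTCs) and motion vectors (MVs).  The coefficients here
        are invented; in practice they would be fit to decoder measurements."""
        return overhead + coeff_qtc * n_qtc + coeff_mv * n_mv

    def pick_mode(candidates, lam=0.1, mu=0.01):
        """Mode decision minimizing D + lambda*R + mu*C, where C comes from
        the complexity model above (lambda, mu: hypothetical multipliers)."""
        def cost(c):
            return c["D"] + lam * c["R"] + mu * mb_decoding_complexity(c["n_qtc"], c["n_mv"])
        return min(candidates, key=cost)

    # Invented per-mode statistics for one macroblock:
    modes = [
        {"name": "intra", "D": 4.0, "R": 120, "n_qtc": 60, "n_mv": 0},
        {"name": "inter", "D": 5.0, "R":  80, "n_qtc": 25, "n_mv": 2},
        {"name": "skip",  "D": 9.0, "R":   5, "n_qtc":  0, "n_mv": 0},
    ]
    chosen = pick_mode(modes)
    print(chosen["name"])  # skip
    ```

    With these toy numbers the low-rate, zero-complexity skip mode wins; raising the distortion weight would push the decision toward intra or inter coding.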

  19. Inference and Decoding of Motor Cortex Low-Dimensional Dynamics via Latent State-Space Models.

    Science.gov (United States)

    Aghagolzadeh, Mehdi; Truccolo, Wilson

    2016-02-01

    Motor cortex neuronal ensemble spiking activity exhibits strong low-dimensional collective dynamics (i.e., coordinated modes of activity) during behavior. Here, we demonstrate that these low-dimensional dynamics, revealed by unsupervised latent state-space models, can provide reconstruction of movement kinematics that is as accurate as, or better than, direct decoding from the entire recorded ensemble. Ensembles of single neurons were recorded with triple microelectrode arrays (MEAs) implanted in ventral and dorsal premotor (PMv, PMd) and primary motor (M1) cortices while nonhuman primates performed 3-D reach-to-grasp actions. Low-dimensional dynamics were estimated via various types of latent state-space models including, for example, Poisson linear dynamic system (PLDS) models. Decoding from low-dimensional dynamics was implemented via point process and Kalman filters coupled in series. We also examined decoding based on a predictive subsampling of the recorded population. In this case, a supervised greedy procedure selected neuronal subsets that optimized decoding performance. When comparing decoding based on predictive subsampling and latent state-space models, the size of the neuronal subset was set to the same number of latent state dimensions. Overall, our findings suggest that information about naturalistic reach kinematics present in the recorded population is preserved in the inferred low-dimensional motor cortex dynamics. Furthermore, decoding based on unsupervised PLDS models may also outperform previous approaches based on direct decoding from the recorded population or on predictive subsampling.
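    The Kalman-filter stage of such decoders can be sketched in its simplest scalar form. This is a toy stand-in, assuming a one-dimensional linear-Gaussian state; the paper's decoders are multivariate and operate on spike counts through PLDS models:

    ```python
    def kalman_1d(observations, a=1.0, c=1.0, q=0.01, r=1.0, x0=0.0, p0=1.0):
        """Minimal scalar Kalman filter for the latent linear-Gaussian model
            x_t = a*x_{t-1} + w,  w ~ N(0, q)   (latent dynamics)
            y_t = c*x_t     + v,  v ~ N(0, r)   (noisy observation)
        Returns the filtered state estimates.  All parameter values here are
        hypothetical defaults chosen for illustration."""
        x, p = x0, p0
        estimates = []
        for y in observations:
            # Predict step
            x_pred = a * x
            p_pred = a * p * a + q
            # Update step with Kalman gain
            k = p_pred * c / (c * p_pred * c + r)
            x = x_pred + k * (y - c * x_pred)
            p = (1 - k * c) * p_pred
            estimates.append(x)
        return estimates

    # Constant observations: the estimate converges toward 1 from x0 = 0.
    est = kalman_1d([1.0] * 20)
    print(round(est[-1], 3))
    ```

    Each update blends the model's prediction with the new observation in proportion to the Kalman gain, which shrinks as the filter becomes confident.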

  20. Cellular automaton decoders of topological quantum memories in the fault tolerant setting

    Science.gov (United States)

    Herold, Michael; Kastoryano, Michael J.; Campbell, Earl T.; Eisert, Jens

    2017-06-01

    Active error decoding and correction of topological quantum codes—in particular the toric code—remains one of the most viable routes to large scale quantum information processing. In contrast, passive error correction relies on the natural physical dynamics of a system to protect encoded quantum information. However, the search is ongoing for a completely satisfactory passive scheme applicable to locally interacting two-dimensional systems. Here, we investigate dynamical decoders that provide passive error correction by embedding the decoding process into local dynamics. We propose a specific discrete time cellular-automaton decoder in the fault tolerant setting and provide numerical evidence showing that the logical qubit has a survival time extended by several orders of magnitude over that of a bare unencoded qubit. We stress that (asynchronous) dynamical decoding gives rise to a Markovian dissipative process. We hence equate cellular-automaton decoding to a fully dissipative topological quantum memory, which removes errors continuously. In this sense, uncontrolled and unwanted local noise can be corrected for by a controlled local dissipative process. We analyze the required resources, commenting on additional polylogarithmic factors beyond those incurred by an ideal constant resource dynamical decoder.
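    The idea of embedding error correction into local dynamics can be illustrated with a much simpler system than the toric code: a 1-D majority-vote cellular automaton acting on a repetition-coded bit. This toy (my construction, not the paper's decoder) shows how purely local update rules remove sparse errors without any global syndrome processing:

    ```python
    def majority_step(cells):
        """One synchronous update of a 1-D majority-vote cellular automaton
        with periodic boundary: each cell takes the majority value of itself
        and its two neighbours.  Isolated bit-flip errors are erased by these
        purely local dynamics."""
        n = len(cells)
        return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
                for i in range(n)]

    memory = [0] * 16            # logical "0" stored as the all-zeros configuration
    memory[3] = memory[11] = 1   # two isolated physical bit-flip errors
    for _ in range(2):
        memory = majority_step(memory)
    print(memory)  # errors removed: back to all zeros
    ```

    A logical flip would require a macroscopic error cluster, which is exponentially unlikely for low noise; the 2-D toric-code setting of the paper needs a more elaborate local rule, but the dissipative principle is the same.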

  1. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Chen Min

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are challenging because of the limited capacity of batteries. The complex functionality of video decoding imposes high resource requirements, so power-efficient control has become a critical design issue for devices that integrate complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of the subfunctions of the decoding process, and seldom considers the characteristics of the compressed video data. This paper develops an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD dynamically adapts to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  2. A Rate-Distortion Perspective on Multiple Decoding Attempts for Reed-Solomon Codes

    CERN Document Server

    Nguyen, Phong S; Narayanan, Krishna R

    2009-01-01

    Recently, a number of authors have proposed decoding schemes for Reed-Solomon (RS) codes based on multiple trials of a simple RS decoding algorithm. In this paper, we present a rate-distortion (R-D) approach to analyze these multiple-decoding algorithms for RS codes. This approach is first used to understand the asymptotic performance-versus-complexity trade-off of multiple error-and-erasure decoding of RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the condition for a single error-and-erasure decoding to succeed reduces to a form where the distortion is compared to a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Next, this approach is extended to analyze multiple algebraic soft-decision (ASD) decoding of RS codes. Both analytical and numerical computations of the R-D functions for the corresponding distortion m...

  3. Decoding hand movement velocity from electroencephalogram signals during a drawing task

    Directory of Open Access Journals (Sweden)

    Gu Zhenghui

    2010-10-01

    Full Text Available Abstract Background Decoding neural activities associated with limb movements is the key of motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single hand in non-invasive ways such as magnetoencephalogram (MEG and electroencephalogram (EEG. Regarding these EEG studies, center-out reaching tasks have been employed. Yet whether hand velocity can be decoded using EEG recorded during a self-routed drawing task is unclear. Methods Here we collected whole-scalp EEG data of five subjects during a sequential 4-directional drawing task, and employed spatial filtering algorithms to extract the amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity by Kalman filtering and a smoothing algorithm. Results The average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels on motor, posterior parietal and occipital areas are most involved for the decoding of hand velocity. By comparing the decoding performance of the features from different frequency bands, we found that not only slow potentials in 0.1-4 Hz band but also oscillatory rhythms in 24-28 Hz band may carry the information of hand velocity. Conclusions These results provide another support to neural control of motor prosthesis based on EEG signals and proper decoding methods.
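    The study's features are EEG amplitude and power in multiple frequency bands (slow potentials at 0.1-4 Hz, oscillatory rhythms at 24-28 Hz). A minimal sketch of a band-power feature via a plain DFT follows; the sampling rate and test signal are hypothetical, and real pipelines would use windowing and an FFT:

    ```python
    import math, cmath

    def band_power(signal, fs, f_lo, f_hi):
        """Power of `signal` in the band [f_lo, f_hi] Hz, computed bin by bin
        with a direct DFT.  A stand-in for the band-power features (e.g.
        0.1-4 Hz slow potentials, 24-28 Hz rhythms) fed to the decoder."""
        n = len(signal)
        power = 0.0
        for k in range(1, n // 2 + 1):
            f = k * fs / n
            if f_lo <= f <= f_hi:
                x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
                power += abs(x) ** 2
        return power / n ** 2

    fs = 128                                   # hypothetical sampling rate (Hz)
    # One second of a pure 26 Hz rhythm as a toy "EEG" trace:
    sig = [math.cos(2 * math.pi * 26 * i / fs) for i in range(fs)]
    print(band_power(sig, fs, 24, 28) > band_power(sig, fs, 0.1, 4))  # True
    ```

    The 26 Hz tone lands in the 24-28 Hz band and contributes essentially nothing to the 0.1-4 Hz band, mirroring how the two feature sets separate slow and oscillatory activity.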

  4. Ensemble Fractional Sensitivity: A Quantitative Approach to Neuron Selection for Decoding Motor Tasks

    Directory of Open Access Journals (Sweden)

    Girish Singhal

    2010-01-01

    Full Text Available A robust method is developed to help identify the population of neurons used for decoding motor tasks. We use sensitivity analysis to develop a new metric for quantifying the relative contribution of a neuron towards the decoded output, called “fractional sensitivity.” Previous model-based approaches for neuron ranking have been shown to depend largely on the collection of training data. We suggest the use of an ensemble of models trained on random subsets of trials to rank neurons. For this work, we tested a decoding algorithm on neuronal data recorded from two male rhesus monkeys while they performed a reach to grasp a bar at one of three orientations (45°, 90°, or 135°). The ensemble approach led to a statistically significant increase of 5% in decoding accuracy and a 25% increase in the identification accuracy of simulated noisy neurons, compared to a single model. Furthermore, ranking neurons based on ensemble fractional sensitivities resulted in decoding accuracies 10%-20% greater than when randomly selecting neurons or ranking based on firing rates alone. By systematically reducing the size of the input space, we determine the optimal number of neurons needed for decoding the motor output. This selection approach has practical benefits for other BMI applications where a limited number of electrodes and training datasets are available but high decoding accuracies are desirable.

  5. High-performance VLSI architectures for turbo decoders with QPP interleaver

    Science.gov (United States)

    Verma, Shivani; Kumar, S.

    2015-04-01

    This paper analyses different VLSI architectures for 3GPP LTE/LTE-Advanced turbo decoders in terms of the trade-off between throughput and area requirements. Data flow graphs for the standard SISO MAP (maximum a posteriori) turbo decoder, the SW SISO MAP turbo decoder, and the PW SISO MAP turbo decoder are presented and their performance analysed. Two variants of the quadratic permutation polynomial (QPP) interleaver are proposed which simplify the implementation of the 'mod' operator and provide the best compromise between area, delay and power dissipation. Implementation of a decoder using one variant of the QPP interleaver is also discussed. A novel approach to area optimisation is proposed to reduce the number of interleavers required for a parallel-window turbo decoder. Multi-port memory has also been used for the parallel turbo decoder. To increase throughput without any effective increase in area complexity, circuit-level pipelining and retiming have been used. The proposed architectures have been synthesised with Synopsys Design Compiler using 45-nm CMOS technology.
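    The QPP interleaver itself is a one-line map: pi(i) = (f1*i + f2*i^2) mod K, with (f1, f2) taken from the LTE parameter table for each block length K. A minimal sketch, using what I believe is the table entry for K = 40 (the permutation property is checked regardless):

    ```python
    def qpp_interleave(k_len, f1, f2):
        """Quadratic permutation polynomial (QPP) interleaver used by LTE
        turbo codes: pi(i) = (f1*i + f2*i^2) mod K.  For valid (f1, f2)
        pairs this is a permutation of 0..K-1 and is contention-free for
        parallel decoding windows."""
        return [(f1 * i + f2 * i * i) % k_len for i in range(k_len)]

    # K = 40 with (f1, f2) = (3, 10) -- believed to be an LTE table entry.
    pi = qpp_interleave(40, 3, 10)
    print(sorted(pi) == list(range(40)))  # True: a genuine permutation
    ```

    Hardware variants like those proposed in the paper avoid the explicit 'mod' by computing pi recursively, since pi(i+1) - pi(i) changes by a constant 2*f2 modulo K.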

  6. Using a Three-Step Decoding Strategy with Constant Time Delay to Teach Word Reading to Students with Mild and Moderate Mental Retardation

    Science.gov (United States)

    Tucker Cohen, Elisabeth; Heller, Kathryn Wolff; Alberto, Paul; Fredrick, Laura D.

    2008-01-01

    The use of a three-step decoding strategy with constant time delay for teaching decoding and word reading to students with mild and moderate mental retardation was investigated in this study. A multiple probe design was used to examine the percentage of words correctly decoded and read as well as the percentage of sounds correctly decoded. The…

  7. Decoding continuous limb movements from high-density epidural electrode arrays using custom spatial filters

    Science.gov (United States)

    Marathe, A. R.; Taylor, D. M.

    2013-06-01

    Objective. Our goal was to identify spatial filtering methods that would improve decoding of continuous arm movements from epidural field potentials as well as demonstrate the use of the epidural signals in a closed-loop brain-machine interface (BMI) system in monkeys. Approach. Eleven spatial filtering options were compared offline using field potentials collected from 64-channel high-density epidural arrays in monkeys. Arrays were placed over arm/hand motor cortex in which intracortical microelectrodes had previously been implanted and removed leaving focal cortical damage but no lasting motor deficits. Spatial filters tested included: no filtering, common average referencing (CAR), principal component analysis, and eight novel modifications of the common spatial pattern (CSP) algorithm. The spatial filtering method and decoder combination that performed the best offline was then used online where monkeys controlled cursor velocity using continuous wrist position decoded from epidural field potentials in real time. Main results. Optimized CSP methods improved continuous wrist position decoding accuracy by 69% over CAR and by 80% compared to no filtering. Kalman decoders performed better than linear regression decoders and benefitted from including more spatially-filtered signals but not from pre-smoothing the calculated power spectra. Conversely, linear regression decoders required fewer spatially-filtered signals and were improved by pre-smoothing the power values. The ‘position-to-velocity’ transformation used during online control enabled the animals to generate smooth closed-loop movement trajectories using the somewhat limited position information available in the epidural signals. The monkeys’ online performance significantly improved across days of closed-loop training. Significance. Most published BMI studies that use electrocorticographic signals to decode continuous limb movements either use no spatial filtering or CAR. This study suggests a
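    Of the filters compared, common average referencing (CAR) is the simplest baseline: subtract the instantaneous across-channel mean from every channel. A minimal sketch (the channel values below are invented):

    ```python
    def common_average_reference(samples):
        """Common average referencing (CAR): subtract the instantaneous mean
        across all channels from every channel, removing activity common to
        the whole array (e.g. reference drift) before decoding."""
        mean = sum(samples) / len(samples)
        return [s - mean for s in samples]

    # One time-slice across a hypothetical 4-channel epidural array (microvolts):
    ref = common_average_reference([12.0, 8.0, 10.0, 14.0])
    print(ref)  # [1.0, -3.0, -1.0, 3.0]
    ```

    CSP-style filters instead learn weighted channel combinations from data, which is what gave the 69% improvement over this unweighted baseline in the study.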

  8. Decoding continuous three-dimensional hand trajectories from epidural electrocorticographic signals in Japanese macaques

    Science.gov (United States)

    Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka

    2012-06-01

    Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices, with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding the inherent information in brain signals to interpret the user's intention is one of the main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive and thus theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information about continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ~10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for the sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based than in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With its ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.

  9. CHANNEL ESTIMATION FOR ITERATIVE DECODING OVER FADING CHANNELS

    Institute of Scientific and Technical Information of China (English)

    K. H. Sayhood; Wu Lenan

    2002-01-01

    A method of coherent detection and channel estimation for punctured convolutional coded binary Quadrature Amplitude Modulation (QAM) signals transmitted over a frequency-flat Rayleigh fading channel, as used in digital radio broadcasting, is presented. Some known symbols are inserted into the encoded data stream to enhance the channel estimation process. The pilot symbols replace existing parity symbols, so no bandwidth expansion is required. An iterative algorithm that uses decoding information as well as the information contained in the known symbols improves the channel parameter estimate. The scheme's complexity grows exponentially with the channel estimation filter length. The performance of the system is compared, for a normalized fading rate, with both perfect coherent detection (corresponding to perfect knowledge of the fading process and noise variance) and differential detection of Differential Amplitude Phase Shift Keying (DAPSK). The trade-off between simplicity of implementation and bit-error-rate performance of the different techniques is also compared.

  10. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
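    The separation-by-pattern principle can be illustrated with orthogonal spreading codes: correlating the composite signal with one user's distinctive chip pattern recovers that user's bit. This sketch uses length-4 Walsh codes for clarity; the paper's algorithm works on frequency-hopped patterns instead, so treat this as an analogy, not the paper's method:

    ```python
    def despread(composite, code):
        """Recover one user's bit from the composite multi-user signal by
        correlating with that user's spreading pattern and taking the sign."""
        corr = sum(x * c for x, c in zip(composite, code))
        return 1 if corr > 0 else 0

    # Length-4 Walsh codes (mutually orthogonal) for two users:
    code_a = [1, 1, 1, 1]
    code_b = [1, -1, 1, -1]
    bit_a, bit_b = 1, 0                      # the bits each user transmits
    # Composite channel signal: sum of both users' BPSK-spread chips.
    tx = [(1 if bit_a else -1) * ca + (1 if bit_b else -1) * cb
          for ca, cb in zip(code_a, code_b)]
    bits = (despread(tx, code_a), despread(tx, code_b))
    print(bits)  # (1, 0)
    ```

    Orthogonality makes each user's correlation blind to the other user's contribution, which is the same role the distinctive hop pattern plays in the paper's decoder.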

  11. Outage analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2010-09-01

    In this paper, we investigate a dual-hop opportunistic decode-and-forward relaying scheme where the source may or may not be able to communicate directly with the destination. We first derive statistics based on the exact probability density function (PDF) of each hop. The PDFs are then used to determine a closed-form outage probability expression for a transmission rate R. Furthermore, we evaluate the asymptotic outage performance and deduce the diversity order. Unlike existing works, where the analysis focused on the high signal-to-noise ratio (SNR) regime, such results are important to enable designers to make decisions regarding practical systems that operate in the low SNR regime. We show that performance simulation results coincide with our analytical results under the practical assumption of unbalanced hops. © 2010 IEEE.
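    An outage occurs when the instantaneous channel cannot support rate R, i.e. when log2(1 + SNR*|h|^2) < R. A Monte-Carlo sanity check of this definition for a single Rayleigh-fading hop follows (a toy companion to the paper's closed forms, which additionally cover the dual-hop relayed case):

    ```python
    import math, random

    def outage_probability(snr_db, rate, trials=200_000, seed=1):
        """Monte-Carlo outage probability of one Rayleigh-fading hop:
        P[ log2(1 + SNR*|h|^2) < R ].  For Rayleigh h, |h|^2 ~ Exp(1)."""
        rng = random.Random(seed)
        snr = 10 ** (snr_db / 10)
        outages = 0
        for _ in range(trials):
            gain = rng.expovariate(1.0)
            if math.log2(1 + snr * gain) < rate:
                outages += 1
        return outages / trials

    # Closed form for this single hop: 1 - exp(-(2^R - 1)/SNR).
    snr_db, rate = 10.0, 1.0
    exact = 1 - math.exp(-(2 ** rate - 1) / 10 ** (snr_db / 10))
    mc = outage_probability(snr_db, rate)
    print(abs(mc - exact) < 0.01)  # True
    ```

    At 10 dB SNR and R = 1 bit/s/Hz the single-hop outage is about 9.5%; the diversity order shows up as the slope of this probability versus SNR on a log-log plot.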

  12. Generalized instantly decodable network coding for relay-assisted networks

    KAUST Repository

    Elmahdy, Adel M.

    2013-09-01

    In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.

  13. Implants and Decoding for Intracortical Brain Computer Interfaces

    Science.gov (United States)

    Homer, Mark L.; Nurmikko, Arto V.; Donoghue, John P.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) are being developed to enable a person to drive an output device, such as a computer cursor, directly from their neural activity. One goal of the technology is to help people with severe paralysis or limb loss. Key elements of an iBCI are the implanted sensor that records the neural signals and the software which decodes the user’s intended movement from those signals. Here, we focus on recent advances in these two areas, with special attention being placed on contributions that are or may soon be adopted by the iBCI research community. We discuss how these innovations increase the technology’s capability, accuracy, and longevity, all important steps that are expanding the range of possible future clinical applications. PMID:23862678

  14. Sensors and decoding for intracortical brain computer interfaces.

    Science.gov (United States)

    Homer, Mark L; Nurmikko, Arto V; Donoghue, John P; Hochberg, Leigh R

    2013-01-01

    Intracortical brain computer interfaces (iBCIs) are being developed to enable people to drive an output device, such as a computer cursor, directly from their neural activity. One goal of the technology is to help people with severe paralysis or limb loss. Key elements of an iBCI are the implanted sensor that records the neural signals and the software that decodes the user's intended movement from those signals. Here, we focus on recent advances in these two areas, placing special attention on contributions that are or may soon be adopted by the iBCI research community. We discuss how these innovations increase the technology's capability, accuracy, and longevity, all important steps that are expanding the range of possible future clinical applications.

  15. On the Convergence Speed of Turbo Demodulation with Turbo Decoding

    CERN Document Server

    Haddad, Salim; Jezequel, Michel

    2012-01-01

    Iterative processing is widely adopted in modern wireless receivers for advanced channel codes such as turbo and LDPC codes. Extending this principle with an additional iterative feedback loop to the demapping function has been shown to provide substantial error-performance gains. However, the adoption of iterative demodulation with turbo decoding is constrained by the implied additional implementation complexity, which heavily impacts latency and power consumption. In this paper, we analyze the convergence speed of these two combined iterative processes in order to determine the exact number of iterations required at each level. Extrinsic information transfer (EXIT) charts are used for a thorough analysis at different modulation orders and code rates. An original iteration scheduling is proposed that removes two demapping iterations at a performance loss of less than 0.15 dB. Analyzing and normalizing the computational and memory access complexity, which directly impact latency and power consumption, ...

  16. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1992-01-01

    Improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 - m^2/8 + m/4 - 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding the error locator is O(n^(7/3)), where n is the length of the code. For codes from Hermitian curves the complexity of finding the error values, given the error locator, is O(n^2), and the same complexity can be obtained in the general case if only d*/2 - m^2/2 errors are corrected...

  17. Exact performance analysis of decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-06-01

    In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the source may or may not be able to communicate directly with the destination. In our study, we consider a regenerative relaying scheme in which the decision to cooperate takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We derive an exact closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact statistics of each hop. Unlike existing works where the analysis focused on high signal-to-noise ratio (SNR) regime, such results are important to enable the designers to take decisions regarding practical systems that operate at low SNR regime. We show that performance simulation results coincide with our analytical results.

  18. Modulated laser radar decoding by inter symbol interference

    Science.gov (United States)

    Mao, Xuesong; Inoue, Daisuke; Matsubara, Hiroyuki; Kagami, Manabu

    2011-03-01

    Pseudo Random Noise (PN) coded laser radar can improve target detection without requiring a high-power laser. However, the reflected echoes are generally so weak that they are buried in the thermal noise of the receiver, which raises the problem of choosing an optimal threshold for correctly decoding them, since the power of the echoes varies over time and the voltage of the electrical signal generated by the photodiode (PD) is always positive. In this work, we first state the problem. Then, a novel method based on Inter Symbol Interference (ISI) is proposed to solve it. Next, numerical simulations and experiments are performed to validate the method. Finally, we discuss the obtained results theoretically.
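    The standard PN-radar baseline against which such methods are compared is sliding correlation: the echo delay (and hence the range) is the lag maximizing the correlation with the transmitted code. A minimal sketch with an invented 7-chip code and a weak noiseless echo (the paper's contribution, ISI-based decoding, is not reproduced here):

    ```python
    def correlate(received, pn):
        """Sliding correlation of received samples with the transmitted PN
        code; the echo delay appears as the lag with the largest correlation."""
        n, m = len(received), len(pn)
        best_lag, best_val = 0, float("-inf")
        for lag in range(n - m + 1):
            # Map the unipolar {0,1} code to bipolar {-1,+1} so that code-off
            # samples (never negative at the photodiode) still discriminate.
            val = sum(received[lag + i] * (2 * c - 1) for i, c in enumerate(pn))
            if val > best_val:
                best_lag, best_val = lag, val
        return best_lag

    pn = [1, 0, 1, 1, 0, 0, 1]                           # toy PN sequence
    echo = [0.0] * 5 + [0.3 * c for c in pn] + [0.0] * 5 # weak echo, delay 5
    lag = correlate(echo, pn)
    print(lag)  # 5
    ```

    The difficulty the abstract points at arises when thermal noise is added: a fixed decision threshold on this correlation fails as the echo power varies, motivating the proposed ISI-based decoding.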

  19. GENETIC ALGORITHM FOR DECODING LINEAR CODES OVER AWGN AND FADING CHANNELS

    Directory of Open Access Journals (Sweden)

    H. BERBIA

    2011-08-01

    Full Text Available This paper introduces a decoder for binary linear codes based on a Genetic Algorithm (GA) over Gaussian and Rayleigh flat fading channels. The performance and computational complexity of our decoder applied to BCH and convolutional codes compare favourably with the Chase-2 and Viterbi algorithms, respectively. We show that our algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the population size and the number of generations

  20. Multiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results...

  2. A New Linguistic Decoding Method for Online Handwritten Chinese Character Recognition

    Institute of Scientific and Technical Information of China (English)

    徐志明; 王晓龙

    2000-01-01

    This paper presents a new linguistic decoding method for online handwritten Chinese character recognition. The method employs a hybrid language model that combines N-grams and linguistic rules via a rule quantification technique. The linguistic decoding algorithm consists of three stages: word lattice construction, optimal sentence hypothesis search, and a self-adaptive learning mechanism. The technique has been applied to online handwritten Chinese character recognition on palmtop computers. Samples containing millions of characters were used to test the linguistic decoder. In the open experiment, an accuracy rate of up to 92% is achieved, and the error rate is reduced by 68%.

  3. The Highest Expected Reward Decoding for HMMs with Application to Recombination Detection

    CERN Document Server

    Nánási, Michal; Brejová, Broňa

    2010-01-01

    Hidden Markov models are traditionally decoded by the Viterbi algorithm, which finds the highest-probability state path in the model. In recent years, several limitations of Viterbi decoding have been demonstrated, and new algorithms have been developed to address them (Kall 2005; Brejova 2007; Gross 2007; Brown 2010). In this paper, we propose a new efficient highest expected reward decoding algorithm (HERD) that allows for uncertainty in the boundaries of individual sequence features. We demonstrate the usefulness of our approach on jumping HMMs for recombination detection in viral genomes.
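    The Viterbi baseline the paper starts from can be stated compactly. Below is the textbook dynamic program on the classic two-state Healthy/Fever example (toy numbers, not from the paper); HERD replaces this maximum-probability objective with a highest-expected-reward one:

    ```python
    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Textbook Viterbi decoding: the single most probable state path
        given the observation sequence."""
        # best[s] = (probability of the best path ending in state s, that path)
        best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
        for o in obs[1:]:
            best = {
                s: max(
                    ((p * trans_p[r][s] * emit_p[s][o], path + [s])
                     for r, (p, path) in best.items()),
                    key=lambda t: t[0])
                for s in states
            }
        return max(best.values(), key=lambda t: t[0])[1]

    states = ("Healthy", "Fever")
    start = {"Healthy": 0.6, "Fever": 0.4}
    trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
             "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
    emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
            "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
    best_path = viterbi(["normal", "cold", "dizzy"], states, start, trans, emit)
    print(best_path)  # ['Healthy', 'Healthy', 'Fever']
    ```

    Because Viterbi commits to one exact path, a state boundary shifted by a single position scores zero credit; that rigidity is precisely the limitation HERD's boundary-tolerant reward addresses.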

  4. Quantum List Decoding from Quantumly Corrupted Codewords for Classical Block Codes of Polynomially Small Rate

    CERN Document Server

    Yamakami, T

    2006-01-01

    Our task of quantum list decoding for a classical block code is to recover from a given quantumly corrupted codeword a short list containing all messages whose codewords have high "presence" in this quantumly corrupted codeword. All known families of efficiently quantum list decodable codes, nonetheless, have exponentially-small message rate. We show that certain generalized Reed-Solomon codes concatenated with Hadamard codes of polynomially-small rate and constant codeword alphabet size have efficient quantum list decoding algorithms, provided that target codewords should have relatively high presence in a given quantumly corrupted codeword.

  5. Using Social Scientific Criteria to Evaluate Cultural Theories: Encoding/Decoding Evaluated

    Directory of Open Access Journals (Sweden)

    Evan L. Kropp

    2015-12-01

    Full Text Available This article transcends the issue of conflicting theoretical schools of thought to formulate a method of social scientific style theory evaluation for cultural studies. It is suggested that positivist social scientific models of theory critique can be used to assess cultural models of communication to determine whether they should be classified as theories. A set of evaluation criteria is formulated as a guide and applied to Stuart Hall’s Encoding/Decoding to determine if it is a theory. Conclusions find that sharing criteria between schools of thought is judicious, that Encoding/Decoding fits the established criteria, and that Encoding/Decoding should be referred to as a theory.

  6. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC) codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented.

  7. A Single Core Hardware Approach of MPEG Audio Decoder for Real-Time Transmission

    Directory of Open Access Journals (Sweden)

    M.B.I. Reaz

    2012-04-01

    Full Text Available The decoding of the voice audio bit stream is an issue in terms of real-time transmission of high quality voice audio over the Internet. A stand-alone chip performing the decoding is a better solution than a software approach. MPEG audio compression provides high compression with minimal loss. This study describes a VHDL model of an MPEG audio layer 1 decoder that performs concurrent processing, receiving a voice-quality audio input bit stream at a constant bit rate while simultaneously producing a stream of 8-bit mono PCM samples at a constant sampling frequency in real time.

  8. A symbol-by-symbol decoding algorithm of 3GPP MBMS Raptor

    Science.gov (United States)

    Shi, Dongxin; Sun, Xiangran; Yang, Zhanxin; Niu, Lipi

    2013-03-01

    This paper presents a symbol-by-symbol decoding algorithm for 3GPP MBMS Raptor codes. We redefine the initial matrix of 3GPP MBMS Raptor, and add some ancillary information to help make up for the destruction of linear relationships in the matrix caused by the advanced Gauss elimination in 3GPP MBMS Raptor. We can therefore realize correct decoding symbol by symbol, which 3GPP cannot. The proposed algorithm is suited to an erasure channel with large symbols, low code rate, large time delay, or high error probability, and it can greatly improve decoding efficiency.
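
    The Gauss elimination the abstract refers to amounts to solving a linear system over GF(2), where every addition is an XOR. The helper below is an illustrative sketch of that core step, with each matrix row packed into a bitmask integer; it does not model the 3GPP matrix layout or the paper's ancillary information:

```python
def solve_gf2(rows, b, n):
    """Solve A·x = b over GF(2); rows are n-bit integers (bit j = column j).
    Returns a 0/1 solution list, or None if the system is inconsistent."""
    rows = list(rows)
    b = list(b)
    pivot_of = {}                       # column -> row index holding its pivot
    r = 0
    for col in range(n):
        # find a row at or below r with a 1 in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue                    # free column, no pivot here
        rows[r], rows[pivot] = rows[pivot], rows[r]
        b[r], b[pivot] = b[pivot], b[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]      # XOR-eliminate this column elsewhere
                b[i] ^= b[r]
        pivot_of[col] = r
        r += 1
    if any(rows[i] == 0 and b[i] for i in range(len(rows))):
        return None                     # a zero row with nonzero RHS: no solution
    x = [0] * n                         # free variables default to 0
    for col, i in pivot_of.items():
        x[col] = b[i]
    return x
```

    Packing rows as integers keeps the elimination cache-friendly in software, since a whole row is XORed in one machine operation per word.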

  9. Decoding individual natural scene representations during perception and imagery

    Directory of Open Access Journals (Sweden)

    Matthew Robert Johnson

    2014-02-01

    Full Text Available We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA using a support vector machine (SVM classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA, parahippocampal place area (PPA, retrosplenial cortex (RSC, and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS. Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex. Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant component of one’s mental representation of visual scenes.

  10. Mapping of H.264 decoding on a multiprocessor architecture

    Science.gov (United States)

    van der Tol, Erik B.; Jaspers, Egbert G.; Gelderblom, Rob H.

    2003-05-01

    Due to the increasing significance of development costs in the competitive domain of high-volume consumer electronics, generic solutions are required to enable reuse of the design effort and to increase the potential market volume. As a result, Systems-on-Chip (SoCs) contain a growing amount of fully programmable media processing devices as opposed to application-specific systems, which offered the most attractive solutions due to a high performance density. The following motivates this trend. First, SoCs are increasingly dominated by their communication infrastructure and embedded memory, thereby making the cost of the functional units less significant. Moreover, the continuously growing design costs require generic solutions that can be applied over a broad product range. Hence, powerful programmable SoCs are becoming increasingly attractive. However, to enable power-efficient designs that are also scalable over the advancing VLSI technology, parallelism should be fully exploited. Both task-level and instruction-level parallelism can be provided by means of e.g. a VLIW multiprocessor architecture. To provide the above-mentioned scalability, we propose to partition the data over the processors, instead of the traditional functional partitioning. An advantage of this approach is the inherent locality of data, which is extremely important for communication-efficient software implementations. Consequently, a software implementation is discussed, enabling e.g. SD resolution H.264 decoding with a two-processor architecture, whereas High-Definition (HD) decoding can be achieved with an eight-processor system executing the same software. Experimental results show that data communication is reduced by up to 65%, directly improving the overall performance. Apart from the considerable improvement in memory bandwidth, this novel concept of partitioning offers a natural approach for optimally balancing the load of all processors, thereby further improving the

  11. Decoding the Locus of Covert Visuospatial Attention from EEG Signals

    Science.gov (United States)

    Thiery, Thomas; Lajnef, Tarek; Jerbi, Karim; Arguin, Martin; Aubin, Mercedes; Jolicoeur, Pierre

    2016-01-01

    Visuospatial attention can be deployed to different locations in space independently of ocular fixation, and studies have shown that event-related potential (ERP) components can effectively index whether such covert visuospatial attention is deployed to the left or right visual field. However, it is not clear whether we may obtain a more precise spatial localization of the focus of attention based on the EEG signals during central fixation. In this study, we used a modified Posner cueing task with an endogenous cue to determine the degree to which information in the EEG signal can be used to track visual spatial attention in presentation sequences lasting 200 ms. We used a machine learning classification method to evaluate how well EEG signals discriminate between four different locations of the focus of attention. We then used a multi-class support vector machine (SVM) and a leave-one-out cross-validation framework to evaluate the decoding accuracy (DA). We found that ERP-based features from occipital and parietal regions showed a statistically significant valid prediction of the location of the focus of visuospatial attention (DA = 57%, p < .001, chance-level 25%). The mean distance between the predicted and the true focus of attention was 0.62 letter positions, which represented a mean error of 0.55 degrees of visual angle. In addition, ERP responses also successfully predicted whether spatial attention was allocated or not to a given location with an accuracy of 79% (p < .001). These findings are discussed in terms of their implications for visuospatial attention decoding and future paths for research are proposed. PMID:27529476
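
    Whether a decoding accuracy like the 57% above beats the 25% chance level can be checked with an exact one-sided binomial test, sketched below. The trial counts in the assertions are hypothetical round numbers, not figures from the study:

```python
from math import comb

def binom_p_above_chance(correct, trials, chance):
    """One-sided exact binomial test: P(X >= correct) when the true
    accuracy equals the chance level (the null hypothesis)."""
    return sum(comb(trials, k) * chance ** k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))
```

    For example, 57 correct out of a hypothetical 100 four-alternative trials yields a p-value far below .001, consistent with the significance the abstract reports; in practice classification studies often use permutation tests instead, which make fewer independence assumptions.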

  12. A Dynamic Assignment of Extrinsic Information Distribution by a Frequency Means for Iterative Turbo Decoder

    Institute of Scientific and Technical Information of China (English)

    YANG Fengfan

    2003-01-01

    In this paper, a new strategy for iterative turbo decoding is proposed, where a Generalized Gaussian distribution (GGD) is applied to model the statistical distribution of the extrinsic information generated by the component decoders. A Euclidean distance criterion is also introduced to choose the most likely candidate for the extrinsic information distribution before each constituent decoding, by a novel frequency method. The simulation results show that the performance of an iterative decoder with this new technique surpasses the conventional Gaussian solution for the extrinsic information under the same channel conditions. Meanwhile, the evolution of the extrinsic information density function can be tracked from iteration to iteration in the sense of the proposed criterion.

  13. Integrating induced probability into decoding for large vocabulary continuous speech recognition

    Institute of Scientific and Technical Information of China (English)

    YANG Zhanlei; LIU Wenju; CHAO Hao

    2012-01-01

    This paper integrates location information of frames into the conventional acoustic model (AM) and language model (LM) likelihoods, in order to distinguish potential path candidates more precisely at the decoding stage. The paper proposes an induced probability, which represents the location information of frames within the whole acoustic space. By integrating the induced probability, the decoder is directed to search within the most promising regions of the acoustic space: promising paths are enhanced and unlikely paths are weakened. Experiments conducted on Chinese Putonghua show that the character error rate is reduced by 10.95% relative without significantly increasing decoding complexity. Finally, a pruning analysis shows that integrating location information of frames into the traditional decoding framework is helpful for improving system performance.

  14. Multiple-Symbol Decision-Feedback Space-Time Differential Decoding in Fading Channels

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2002-03-01

    Full Text Available Space-time differential coding (STDC) is an effective technique for exploiting transmitter diversity without requiring channel state information at the receiver. However, like conventional differential modulation schemes, it exhibits an error floor in fading channels. In this paper, we develop an STDC decoding technique based on multiple-symbol detection and decision feedback, which makes use of the second-order statistics of the fading processes and has a very low computational complexity. This decoding method can significantly lower the error floor of the conventional STDC decoding algorithm, especially in fast fading channels. The application of the proposed multiple-symbol decision-feedback STDC decoding technique in an orthogonal frequency-division multiplexing (OFDM) system is also discussed.
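
    The conventional differential detection that STDC generalizes can be illustrated for single-antenna DQPSK: the information rides on phase *changes*, so the receiver compares each received sample with the previous one and never needs a channel estimate. A minimal sketch (function names and the toy flat channel are illustrative, not from the paper):

```python
QPSK = [1j ** k for k in range(4)]     # constellation phasors 1, j, -1, -j

def diff_encode(symbols):
    """Differentially encode QPSK symbol indices into transmitted phasors."""
    tx = [1 + 0j]                      # known reference symbol
    for s in symbols:
        tx.append(tx[-1] * QPSK[s])    # information is the phase change
    return tx

def diff_decode(rx):
    """Conventional (two-symbol) differential detection: the conjugate
    product of consecutive samples cancels any constant channel phase."""
    out = []
    for prev, cur in zip(rx, rx[1:]):
        d = cur * prev.conjugate()     # estimate of the phase difference
        out.append(min(range(4), key=lambda k: abs(d - abs(d) * QPSK[k])))
    return out
```

    Because each decision uses only two noisy samples, the effective noise doubles and an error floor appears in fading; the multiple-symbol decision-feedback scheme of the paper extends the observation window to mitigate exactly this.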

  15. Coding/Decoding and Reversibility of Droplet Trains in Microfluidic Networks

    National Research Council Canada - National Science Library

    Michael J. Fuerstman; Piotr Garstecki; George M. Whitesides

    2007-01-01

    .... The encoding/decoding device is a functional microfluidic system that requires droplets to navigate a network in a precise manner without the use of valves, switches, or other means of external control.

  16. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    Science.gov (United States)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  17. Enhancing the Error Correction of Finite Alphabet Iterative Decoders via Adaptive Decimation

    CERN Document Server

    Planjery, Shiva Kumar; Declercq, David

    2012-01-01

    Finite alphabet iterative decoders (FAIDs) for LDPC codes were recently shown to be capable of surpassing the Belief Propagation (BP) decoder in the error floor region on the Binary Symmetric channel (BSC). More recently, the technique of decimation which involves fixing the values of certain bits during decoding, was proposed for FAIDs in order to make them more amenable to analysis while maintaining their good performance. In this paper, we show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The new adaptive decimation scheme proposed has marginally added complexity but can significantly improve the slope of the error floor performance of a particular FAID. We describe the adaptive decimation scheme particularly for 7-level FAIDs which propagate only 3-bit messages and provide numerical results for column-weight three codes. Analysis suggests that the failures of the new decoders are linked to stopping sets ...

  18. A New Decoding Scheme for Errorless Codes for Overloaded CDMA with Active User Detection

    CERN Document Server

    Mousavi, Ali; Marvasti, Farokh

    2010-01-01

    Recently, a new class of binary codes for overloaded CDMA systems has been proposed that not only has the ability of errorless communication but is also suitable for detecting active users. These codes are called COWDA [1]. In [1], a Maximum Likelihood (ML) decoder is proposed for this class of codes. Although the proposed coding/decoding scheme shows impressive performance, the decoder can be improved. In this paper, by assuming more practical conditions for the traffic in the system, we suggest an algorithm that improves the performance of the decoder by several orders of magnitude (the Bit-Error-Rate (BER) is divided by a factor of 400 at some Eb/N0's). The algorithm assumes a Poisson distribution for the times of activation/deactivation of the users.

  19. Modified Maximum A Posteriori Algorithm For Iterative Decoding of Turbo codes

    Directory of Open Access Journals (Sweden)

    Prof M. Srinivasa Rao

    2011-08-01

    Full Text Available Turbo codes are one of the most powerful error correcting codes. What makes these codes so powerful is the use of so-called iterative decoding or turbo decoding. An iterative decoding process is an iterative learning process for a complex system where the objective is to provide a good suboptimal estimate of a desired signal. Iterative decoding is used when the true optimal estimation is impossible due to prohibitive computational complexity. This paper extends the mathematical derivation of the original MAP algorithm and shows that the log-likelihood values can be computed differently. The proposed algorithm results in savings in the required memory size and leads to a power-efficient implementation of the MAP algorithm in channel coding.

  20. A Novel High-Speed Configurable Viterbi Decoder for Broadband Access

    Directory of Open Access Journals (Sweden)

    Benaissa Mohammed

    2003-01-01

    Full Text Available A novel design and implementation of an online reconfigurable Viterbi decoder is proposed, based on an area-efficient add-compare-select (ACS) architecture, in which the constraint length and traceback depth can be dynamically reconfigured. A design-space exploration to trade off decoding capability, area, and decoding speed has been performed, from which the maximum level of pipelining against the number of ACS units to be used has been determined while maintaining an in-place path metric updating. An example design with constraint lengths from 7 to 10 and 5-level ACS pipelining has been successfully implemented on a Xilinx Virtex FPGA device. FPGA implementation results, in terms of decoding speed, resource usage, and BER, have been obtained using a tailored testbench. These confirmed the functionality and the expected higher speeds and lower resources.

  1. VARIABLE NON-UNIFORM QUANTIZED BELIEF PROPAGATION ALGORITHM FOR LDPC DECODING

    Institute of Scientific and Technical Information of China (English)

    Liu Binbin; Bai Dong; Mei Shunliang

    2008-01-01

    Non-uniform quantization for messages in Low-Density Parity-Check (LDPC) decoding can reduce implementation complexity and mitigate performance loss, but the distribution of messages varies during iterative decoding. This letter proposes a variable non-uniform quantized Belief Propagation (BP) algorithm. The BP decoding is analyzed by density evolution with Gaussian approximation. Since the probability density of the messages can be well approximated by a Gaussian distribution, the distribution of messages can be tracked during the iterations through an unbiased estimate of the variance. Thus the non-uniform quantization scheme can be optimized to minimize the distortion. Simulation results show that the variable non-uniform quantization scheme can achieve better error rate performance and faster decoding convergence than the conventional non-uniform quantization and uniform quantization schemes.
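
    Optimizing a non-uniform quantizer to minimize distortion for (approximately) Gaussian-distributed messages is classically done with the Lloyd-Max iteration, which alternates between midpoint thresholds and conditional-mean reconstruction levels. The sketch below targets a standard Gaussian source; it illustrates the classical design the letter's scheme builds on, not the authors' variable per-iteration variant:

```python
import math

def phi(x):
    """Standard normal pdf (returns 0.0 at +/-infinity)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def lloyd_max(n_levels, iters=200):
    """Lloyd-Max quantizer design for a zero-mean unit-variance Gaussian."""
    levels = [-2 + 4 * i / (n_levels - 1) for i in range(n_levels)]  # uniform init
    for _ in range(iters):
        # optimal thresholds: midpoints between adjacent reconstruction levels
        t = [-math.inf] + [(a + b) / 2 for a, b in zip(levels, levels[1:])] \
            + [math.inf]
        # optimal levels: centroid (conditional mean) of the Gaussian in each cell
        levels = [(phi(t[i]) - phi(t[i + 1])) / (Phi(t[i + 1]) - Phi(t[i]))
                  for i in range(n_levels)]
    return levels
```

    For two levels this converges to the textbook values ±√(2/π) ≈ ±0.798; tracking the message variance, as the letter does, amounts to rescaling such a design each iteration.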

  2. Decoding development in the XXI century: subjectivity, complexity, synapsis, synergy, recursivity, leadership and territorial dependency

    National Research Council Canada - National Science Library

    Sergio Boisier

    2010-01-01

      BOISIER, Sergio. Decoding development in the XXI century: subjectivity, complexity, synapsis, synergy, recursivity, leadership and territorial dependency. Semest. Econ. [online]. 2010, vol.13, n.27, pp. 11-37. ISSN 0120-6346...

  3. Low-Power Maximum a Posteriori (MAP) Algorithm for WiMAX Convolutional Turbo Decoder

    Directory of Open Access Journals (Sweden)

    Chitralekha Ngangbam

    2013-05-01

    Full Text Available We propose the design of a low-power, memory-reduced, traceback MAP iterative decoder for convolutional turbo codes (CTC), which involve large data access and large memory consumption, and verify its functionality using a simulation tool. The traceback maximum a posteriori (MAP) decoding provides the best performance in terms of bit error rate (BER) and reduces the power consumption of the state metric cache (SMC) without losing correction performance. The computation and accessing of different metrics reduce the size of the SMC and require no complicated reversion checker, path selection, or reversion flag cache. Radix-2×2 and radix-4 traceback structures provide a tradeoff between power consumption and operating frequency for double-binary (DB) MAP decoding. These two traceback structures achieve around a 25% power reduction of the SMC, and around a 12% power reduction of the DB MAP decoders, for the WiMAX standard.

  4. VHDL Implementation of different Turbo Encoder using Log-MAP Decoder

    CERN Document Server

    Gupta, Akash Kumar

    2010-01-01

    Turbo codes are a great achievement in the field of communication systems. A turbo code can be created by connecting a turbo encoder and a decoder serially. A turbo encoder is built with a parallel concatenation of two simple convolutional codes. By varying the number of memory elements (encoder configuration), the code rate (1/2 or 1/3), the block size of the data, and the number of iterations, we can achieve better BER performance. A turbo code also contains an interleaver unit, and its BER performance also depends on the interleaver size. A turbo decoder can be implemented using different algorithms, but the Log-MAP decoding algorithm is less computationally complex than the MAP (maximum a posteriori) algorithm, without compromising its BER performance, which is near the Shannon limit. A register transfer level (RTL) turbo encoder is designed and simulated using VHDL (Very high speed integrated circuit Hardware Description Language). In this paper, VHDL models of different turbo encoders are implemented using a Log-MAP decoder, and their performance is compared and verified.
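
    The complexity gap between MAP and Log-MAP comes down to the Jacobian logarithm, max*(a, b) = log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|): Log-MAP keeps the cheap correction term (often via a small lookup table in hardware), while Max-Log-MAP drops it entirely. A sketch:

```python
import math

def max_star(a, b):
    """Jacobian logarithm: exact log(e^a + e^b) in a numerically stable form.
    This is the core recursion operator of Log-MAP decoding."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drop the correction term."""
    return max(a, b)
```

    The correction term is bounded by log 2 ≈ 0.693 and vanishes as |a - b| grows, which is why Max-Log-MAP loses only a fraction of a dB while avoiding the exp/log entirely.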

  5. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    Science.gov (United States)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the data size for decoding a single view increases, thus impacting decoder performance. In this paper, we propose an MVC scheme that requires less inter-view prediction than the MVC standard scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown using the proposed scheme in terms of the average data size required either to decode a single view, or to access any frame (i.e., random access), with comparable rate-distortion. It is compared to the MVC standard scheme and to other improved techniques from the literature.

  6. A low-power VLSI implementation for variable length decoder in MPEG-1 Layer III

    Science.gov (United States)

    Tsai, Tsung-Han; Liu, Chun-Nan; Chen, Wen-Cheng

    2004-04-01

    The MPEG Layer III (MP3) audio coding algorithm is a widely used audio coding standard. It involves several complex coding techniques, making an efficient architecture design difficult. Variable length decoding (VLD), e.g. Huffman decoding, is an important part of MP3, which needs a great amount of search and memory access operations. In this paper a data-driven variable length decoding algorithm is presented, which exploits the signal statistics of variable length codes to reduce power, and a two-level table lookup method is presented. The decoder was designed for simplicity, low cost, and low power consumption while retaining the high-efficiency requirements. The total power saving is about 67%.

  7. A pipelined Reed-Solomon decoder based on a modified step-by-step algorithm

    Institute of Scientific and Technical Information of China (English)

    Xing-ru PENG; Wei ZHANG; Yan-yan LIU

    2016-01-01

    We propose a pipelined Reed-Solomon (RS) decoder for an ultra-wideband system using a modified step-by-step algorithm. To reduce the complexity, the modified step-by-step algorithm merges two cases of the original algorithm. The pipelined structure allows the decoder to work at high rates with minimum delay. Consequently, for RS(23,17) codes, the proposed architecture requires 42.5% and 24.4% less area compared with a modified Euclidean architecture and a pipelined degree-computationless modified Euclidean architecture, respectively. The area of the proposed decoder is 11.3% less than that of the previous step-by-step decoder, with a lower critical path delay.

  8. High Speed Versatile Reed-Solomon Decoder for Correcting Errors and Erasures

    Institute of Scientific and Technical Information of China (English)

    WANG Hua; FAN Guang-rong; WANG Ping-qin; KUANG Jing-ming

    2008-01-01

    A new Chien search method for shortened Reed-Solomon (RS) codes is proposed; based on this, a versatile RS decoder for correcting both errors and erasures is designed. Compared with the traditional RS decoder, the weighted coefficients of the Chien search method are calculated sequentially through the three pipelined stages of the decoder, and therefore the computation of the errata locator polynomial and errata evaluator polynomial needs to be modified. The versatile RS decoder with minimum distance 21 has been synthesized on the Xilinx Virtex-II series field programmable gate array (FPGA) xc2v1000-5 and is used in a concatenated coding system for satellite communication. Results show that the maximum data processing rate can be up to 1.3 Gbit/s.
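
    A Chien search simply evaluates the error-locator polynomial at successive inverse powers of the field generator and records the positions where it vanishes. The software sketch below works over GF(2^8) with a commonly used primitive polynomial (0x11d); it illustrates the basic search only and does not model the shortened-code weighting scheme of the paper:

```python
# Build exp/log tables for GF(2^8), primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d)
EXP, LOG = [0] * 512, [0] * 256
_x = 1
for _i in range(255):
    EXP[_i] = _x
    LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11d
for _i in range(255, 512):
    EXP[_i] = EXP[_i - 255]            # wraparound avoids a mod in gf_mul

def gf_mul(a, b):
    """Multiply in GF(2^8) via log/antilog tables."""
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def chien_search(locator, n):
    """Return the error positions i in [0, n) at which the error-locator
    polynomial (coefficients in ascending powers) vanishes at x = alpha^(-i)."""
    positions = []
    for i in range(n):
        xi = EXP[(255 - i) % 255]      # alpha^(-i)
        acc, xp = 0, 1
        for coef in locator:           # evaluate sum_k coef_k * xi^k
            acc ^= gf_mul(coef, xp)
            xp = gf_mul(xp, xi)
        if acc == 0:
            positions.append(i)
    return positions
```

    For a shortened code the loop only needs to cover the shortened length n, which is one of the things a hardware Chien search can exploit.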

  9. Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes

    CERN Document Server

    Chen, Ning

    2008-01-01

    For the majority of the applications of Reed-Solomon (RS) codes, hard decision decoding is based on syndromes. Recently, there has been renewed interest in decoding RS codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding for RS codes, and compare it to that of syndrome-based decoding. Aiming to provide guidelines to practical applications, our complexity analysis differs in several aspects from existing asymptotic complexity analysis, which is typically based on multiplicative fast Fourier transform (FFT) techniques and is usually in big O notation. First, we focus on RS codes over characteristic-2 fields, over which some multiplicative FFT techniques are not applicable. Secondly, due to moderate block lengths of RS codes in practice, our analysis is complete since all terms in the complexities are accounted for. Finally, in addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes...

  10. An improved sphere-decoding algorithm for V-BLAST system

    Science.gov (United States)

    Huang, Gengsheng; Zheng, Hui; Shen, Jing; Xue, Yongfei

    2016-10-01

    The sphere-decoding (SD) algorithm is one of the important detection schemes for data detection in multiple-input multiple-output (MIMO) communication systems based on a layered space-time structure, but it has a higher complexity in its pre-processing than the maximum likelihood (ML) algorithm. For good performance and low complexity, an improved sphere-decoding algorithm for MIMO systems is proposed, based on choosing an effective search radius. It can reduce the complexity while keeping grid points within the sphere. The simulation results show that the proposed data detection algorithm for MIMO systems performs very close to the traditional sphere-decoding algorithm in bit error rate (BER), but with lower complexity.
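
    The radius-based pruning at the heart of sphere decoding can be sketched as a depth-first search over candidate symbols, assuming the channel matrix has already been reduced to upper-triangular form R by a QR factorization (so the partial distance at each level depends only on symbols already chosen). This is an illustrative real-valued sketch, not the authors' improved algorithm:

```python
def sphere_decode(R, y, alphabet, radius):
    """Depth-first sphere decoder for y = R·s + n with R upper-triangular.
    Returns the best symbol vector inside the search radius, or None."""
    n = len(y)
    best = [None, radius ** 2]         # [best vector, current squared radius]

    def search(level, s, dist2):
        if level < 0:
            if dist2 < best[1]:
                best[0], best[1] = list(s), dist2   # shrink the sphere
            return
        for sym in alphabet:
            s[level] = sym
            # residual at this level depends only on s[level..n-1]
            resid = y[level] - sum(R[level][j] * s[j] for j in range(level, n))
            d2 = dist2 + resid * resid
            if d2 <= best[1]:          # prune branches that leave the sphere
                search(level - 1, s, d2)

    search(n - 1, [0] * n, 0.0)
    return best[0]
```

    If no lattice point lies within the initial radius the function returns None, so the radius must be chosen large enough (or the search restarted with a larger one); choosing that radius well is exactly the tuning knob the abstract targets.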

  11. Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory

    National Research Council Canada - National Science Library

    Ng, Tse Nga; Schwartz, David E; Lavery, Leah L; Whiting, Gregory L; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer

    2012-01-01

    .... The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices...

  12. Streptomycin binds to the decoding center of 16 S ribosomal RNA.

    Science.gov (United States)

    Spickler, C; Brunelle, M N; Brakier-Gingras, L

    1997-10-31

    Streptomycin, an error-inducing aminoglycoside antibiotic, binds to a single site on the small ribosomal subunit of bacteria, but this site has not yet been defined precisely. Here, we demonstrate that streptomycin binds to E. coli 16 S rRNA in the absence of ribosomal proteins, and protects a set of bases in the decoding region against dimethyl sulfate attack. The binding studies were performed in a high ionic strength buffer containing 20 mM Mg2+. The pattern of protection in the decoding region was similar to that observed when streptomycin binds to the 30 S subunit. However, streptomycin also protects the 915 region of 16 S rRNA within the 30 S subunit, whereas it did not protect the 915 region of the naked 16 S rRNA. The interaction of streptomycin with 16 S rRNA was further defined by using two fragments that correspond to the 3' minor domain of 16 S rRNA and to the decoding analog, a portion of this domain encompassing the decoding center. In the presence of streptomycin, the pattern of protection against dimethyl sulfate attack for the two fragments was similar to that seen with the full-length 16 S rRNA. This indicates that the 3' minor domain as well as the decoding analog contain the recognition signals for the binding of streptomycin. However, streptomycin could not bind to the decoding analog in the absence of Mg2+. This contrasts with neomycin, another error-inducing aminoglycoside antibiotic, that binds to the decoding analog in the absence of Mg2+, but not at 20 mM Mg2+. Our results suggest that both neomycin and streptomycin interact with the decoding center, but recognize alternative conformations of this region.

  13. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    Science.gov (United States)

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can

  14. Decoding individual finger movements from one hand using human EEG signals.

    Directory of Open Access Journals (Sweden)

    Ke Liao

    Full Text Available Brain computer interface (BCI) is an assistive technology, which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited control dimensions from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded in electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also have sufficient information to decode the same type of movements. Phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG in the present study, when EEG power spectra were decomposed by a principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in a previous ECoG study, as well as the results from ECoG data in the present study. The average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand using movement-related spectral changes as features to be decoded using a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with similarly obtained features and the same classifier. Both decoding accuracies of EEG and ECoG are significantly higher than the empirical guessing level (51.26%) in all subjects (p < 0.05). The present study suggests similar movement-related spectral changes in EEG as in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising to facilitate the development of BCIs with rich control signals using noninvasive technologies.

  15. Decoding individual finger movements from one hand using human EEG signals.

    Science.gov (United States)

    Liao, Ke; Xiao, Ran; Gonzalez, Jania; Ding, Lei

    2014-01-01

    Brain computer interface (BCI) is an assistive technology which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited control dimensions from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded from electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also carry sufficient information to decode the same type of movements. In the present study, phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG when EEG power spectra were decomposed by principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations from a previous ECoG study, as well as with the results from ECoG data in the present study. An average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand, using movement-related spectral changes as features in a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28%, with similarly obtained features and the same classifier. Both EEG and ECoG decoding accuracies are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests that movement-related spectral changes in EEG are similar to those in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising for the development of BCIs with rich control signals using noninvasive technologies.

  16. JOINT SOURCE-CHANNEL DECODING OF HUFFMAN CODES WITH LDPC CODES

    Institute of Scientific and Technical Information of China (English)

    Mei Zhonghui; Wu Lenan

    2006-01-01

    In this paper, we present a Joint Source-Channel Decoding algorithm (JSCD) for Low-Density Parity Check (LDPC) codes by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy, which results from the neighbouring Huffman coded bits. Simulations demonstrate that in the presence of source redundancy, the proposed algorithm gives better performance than the Separate Source and Channel Decoding algorithm (SSCD).
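    The core idea of exploiting residual source redundancy at the decoder can be shown without the full sum-product machinery: a source prior enters as an additive term on a bit's log-likelihood ratio (LLR), exactly as it would be added to the channel message at a variable node in a modified SPA. The numbers below are illustrative, not from the paper:

```python
import math

def combined_llr(channel_llr, p0):
    """Fuse a channel LLR, log(P(y|b=0)/P(y|b=1)), with a source prior
    P(b=0) = p0. In a sum-product LDPC decoder this prior term is simply
    added to the channel LLR feeding the corresponding variable node."""
    prior_llr = math.log(p0 / (1.0 - p0))
    return channel_llr + prior_llr

# A weak channel observation slightly favouring bit = 1 ...
channel_llr = -0.4
hard_decision = 0 if channel_llr > 0 else 1     # channel alone decides 1
# ... is overturned by strong source redundancy, P(b=0) = 0.8.
joint = combined_llr(channel_llr, 0.8)          # -0.4 + log(4) > 0
joint_decision = 0 if joint > 0 else 1          # joint decision is 0
```

    This is why source redundancy from neighbouring Huffman-coded bits can flip otherwise-wrong channel decisions and outperform separate source and channel decoding.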

  17. Euclidean Geometry Codes, minimum weight words and decodable error-patterns using bit-flipping

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn; Jonsson, Bergtor

    2005-01-01

    We determine the number of minimum weight words in a class of Euclidean Geometry codes and link the performance of the bit-flipping decoding algorithm to the geometry of the error patterns.
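    The bit-flipping decoder whose behaviour the record links to error-pattern geometry is simple to state: while some parity checks fail, flip the bits involved in the most unsatisfied checks. A minimal sketch on the (7,4) Hamming code (a stand-in for illustration; the paper's codes are Euclidean Geometry codes):

```python
def bit_flip_decode(H, r, max_iters=20):
    """Gallager-style bit flipping: while some parity checks fail,
    flip every bit that participates in the maximum number of
    unsatisfied checks."""
    r = list(r)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
        if not any(syndrome):
            return r                      # all checks satisfied
        # count the unsatisfied checks touching each bit
        counts = [sum(row[i] for row, s in zip(H, syndrome) if s)
                  for i in range(len(r))]
        worst = max(counts)
        r = [b ^ (c == worst) for b, c in zip(r, counts)]
    return r                              # decoding failure: best effort

# Parity-check matrix of the (7,4) Hamming code.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
received = [0, 0, 1, 0, 0, 0, 0]          # all-zero codeword, bit 2 flipped
decoded = bit_flip_decode(H, received)
```

    Whether such a decoder succeeds depends on how the flipped bits of the error pattern intersect the checks, which is exactly the geometric question the paper studies.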

  18. A Fully Parallel VLSI-implementation of the Viterbi Decoding Algorithm

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1989-01-01

    In this paper we describe the implementation of a K = 7, R = 1/2 single-chip Viterbi decoder intended to operate at 10-20 Mbit/sec. We propose a general, regular and area efficient floor-plan that is also suitable for implementation of decoders for codes with different generator polynomials or di...... above 26 MHz under worst-case conditions (VDD = 4.75 V and TA = 70 °C)....
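    The underlying Viterbi algorithm that such a chip parallelizes can be sketched in software; the survivor paths below are kept per state, in the spirit of register exchange. Shown for a small K = 3, R = 1/2 code (generators 7 and 5 octal) rather than the paper's K = 7, to keep the trellis to four states:

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder (generators 7 and 5 octal),
    with two zero flush bits to terminate the trellis in state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]      # taps 111 and 101
        state = (b << 1) | s1
    return out

def viterbi_decode(rx, nbits):
    """Hard-decision Viterbi: add-compare-select over the 4-state
    trellis, keeping one survivor path per state."""
    INF = float('inf')
    metrics = [0, INF, INF, INF]          # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(0, len(rx), 2):
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1
            for b in (0, 1):
                expect = [b ^ s1 ^ s0, b ^ s0]
                ns = (b << 1) | s1
                m = metrics[s] + (expect[0] != rx[t]) + (expect[1] != rx[t + 1])
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[0][:nbits]               # flush bits force final state 0

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
noisy = list(coded)
noisy[5] ^= 1                             # a single channel bit error
decoded = viterbi_decode(noisy, len(msg))
```

    With free distance 5, this code corrects any two channel bit errors, so the single flipped bit above is recovered. The hardware challenge the paper addresses is doing the add-compare-select for all states in parallel at tens of Mbit/s.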

  19. Partition-Based Hybrid Decoding (PHD): A Class of ML Decoding Schemes for MIMO Signals Based on Tree Partitioning and Combined Depth- and Breadth-First Search

    Directory of Open Access Journals (Sweden)

    J. I. Park

    2013-03-01

    Full Text Available In this paper, we propose a hybrid maximum likelihood (ML) decoding scheme for multiple-input multiple-output (MIMO) systems. After partitioning the search tree into several stages, the proposed scheme adopts a combination of depth- and breadth-first search methods in an organized way. Taking the number of stages, the size of the signal constellation, and the number of antennas as the parameters of the scheme, we provide extensive simulation results for various MIMO communication conditions. Numerical results indicate that, when the depth- and breadth-first search methods are employed appropriately, the proposed scheme exhibits substantially lower computational complexity than conventional ML decoders while maintaining the ML bit error performance.
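    The tree structure such search schemes exploit comes from writing the ML metric layer by layer; once the channel matrix is triangularized, a plain depth-first search with partial-metric pruning already captures the idea (the paper's scheme additionally mixes in breadth-first stages). A minimal sketch with BPSK symbols and an assumed, already lower-triangular channel matrix; all values below are illustrative:

```python
def ml_tree_search(H, y, alphabet=(-1, 1)):
    """Depth-first ML detection: extend one symbol (tree level) at a
    time, accumulating |y_i - sum_j H[i][j]*x_j|^2, and prune any
    branch whose partial metric already exceeds the best full path."""
    n = len(H)
    best = {'metric': float('inf'), 'x': None}

    def descend(level, partial, metric):
        if metric >= best['metric']:
            return                        # prune this whole subtree
        if level == n:
            best['metric'], best['x'] = metric, list(partial)
            return
        for s in alphabet:
            x = partial + [s]
            resid = y[level] - sum(H[level][j] * x[j] for j in range(level + 1))
            descend(level + 1, x, metric + resid * resid)

    descend(0, [], 0.0)
    return best['x'], best['metric']

H = [[2.0, 0.0],                          # assumed already lower-triangular
     [1.0, 1.5]]
x_true = [1, -1]
y = [2.0 + 0.05, -0.5 - 0.03]             # H @ x_true plus small noise
x_hat, metric = ml_tree_search(H, y)
```

    Exhaustive search visits all |alphabet|^n leaves; pruning (and the hybrid depth/breadth scheduling the paper proposes) cuts this down while still returning the exact ML solution.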

  20. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  1. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Luca Fanucci

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or “hazards” between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of “idle” cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  2. A Low Power VITERBI Decoder Design With Minimum Transition Hybrid Register Exchange Processing For Wireless Applications

    Directory of Open Access Journals (Sweden)

    S. L. Haridas

    2010-12-01

    Full Text Available This work proposes a low power implementation of a Viterbi decoder. Most Viterbi decoder designs in the past use the simple register exchange or traceback method to achieve very high speed or low power decoding, respectively, but they suffer from complex routing and high switching activity. Here, the survivor memory unit is simplified by storing only m-1 bits to identify the previous state in the survivor path, and by assigning m-1 registers to decision vectors. This approach eliminates unnecessary shift operations. Also, storing the decoded data requires only half the memory of the register exchange method. In this paper a hybrid approach that combines both the traceback and register exchange schemes is applied to the Viterbi decoder design. Using the distance properties of the encoder, this is further modified into a minimum transition hybrid register exchange method, which leads to lower dynamic power consumption because of lower switching activity. Dynamic power estimation obtained through gate level simulation indicates that the proposed design reduces the power dissipation of a conventional Viterbi decoder design by 30%.

  3. A Low Power VITERBI Decoder Design With Minimum Transition Hybrid Register Exchange Processing For Wireless Applications

    Directory of Open Access Journals (Sweden)

    S. L. Haridas

    2010-12-01

    Full Text Available This work proposes a low power implementation of a Viterbi decoder. Most Viterbi decoder designs in the past use the simple register exchange or traceback method to achieve very high speed or low power decoding, respectively, but they suffer from complex routing and high switching activity. Here, the survivor memory unit is simplified by storing only m-1 bits to identify the previous state in the survivor path, and by assigning m-1 registers to decision vectors. This approach eliminates unnecessary shift operations. Also, storing the decoded data requires only half the memory of the register exchange method. In this paper a hybrid approach that combines both the traceback and register exchange schemes is applied to the Viterbi decoder design. Using the distance properties of the encoder, this is further modified into a minimum transition hybrid register exchange method, which leads to lower dynamic power consumption because of lower switching activity. Dynamic power estimation obtained through gate level simulation indicates that the proposed design reduces the power dissipation of a conventional Viterbi decoder design by 30%.

  4. Complexity Analysis of Reed-Solomon Decoding over GF(2^m) without Using Syndromes

    Directory of Open Access Journals (Sweden)

    Zhiyuan Yan

    2008-06-01

    Full Text Available There has been renewed interest recently in decoding Reed-Solomon (RS) codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis focuses on RS codes over characteristic-2 fields, for which some multiplicative FFT techniques are not applicable. Due to the moderate block lengths of RS codes in practice, our analysis is complete, without big O notation. In addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. For high-rate RS codes, when compared to syndrome-based decoding algorithms, syndromeless decoding algorithms not only require more field operations regardless of implementation, but decoder architectures based on their direct implementations also have higher hardware costs and lower throughput. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.

  5. Estimation-theoretic approach to delayed decoding of predictively encoded video sequences.

    Science.gov (United States)

    Han, Jingning; Melkote, Vinay; Rose, Kenneth

    2013-03-01

    Current video coders employ predictive coding with motion compensation to exploit temporal redundancies in the signal. In particular, blocks along a motion trajectory are modeled as an auto-regressive (AR) process, and it is generally assumed that the prediction errors are temporally independent and approximate the innovations of this process. Thus, zero-delay encoding and decoding is considered efficient. This paper is premised on the largely ignored fact that these prediction errors are, in fact, temporally dependent due to quantization effects in the prediction loop. It presents an estimation-theoretic delayed decoding scheme, which exploits information from future frames to improve the reconstruction quality of the current frame. In contrast to the standard decoder that reproduces every block instantaneously once the corresponding quantization indices of residues are available, the proposed delayed decoder efficiently combines all accessible (including any future) information in an appropriately derived probability density function, to obtain the optimal delayed reconstruction per transform coefficient. Experiments demonstrate significant gains over the standard decoder. Requisite information about the source AR model is estimated in a spatio-temporally adaptive manner from a bit-stream conforming to the H.264/AVC standard, i.e., no side information needs to be sent to the decoder in order to employ the proposed approach, thereby retaining compatibility with the standard syntax and existing encoders.

  6. A lossy graph model for delay reduction in generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-06-01

    The problem of minimizing the decoding delay in Generalized instantly decodable network coding (G-IDNC) for both perfect and lossy feedback scenarios is formulated as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only doubtless combinable packets, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability. © 2012 IEEE.
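    The maximum weight clique formulation is concrete enough to sketch: vertices are candidate coding opportunities, weights reflect the expected delay reduction, and edges join packets that can be XORed together. On the tiny instances of a sketch, exhaustive search suffices (the letter's real graphs need heuristics, since the problem is NP-hard); the graph and weights below are illustrative only:

```python
from itertools import combinations

def max_weight_clique(n, weights, edges):
    """Brute-force maximum weight clique: try every vertex subset and
    keep the heaviest one whose vertices are pairwise adjacent.
    Exponential in n, so only viable for tiny illustrative graphs."""
    adj = {frozenset(e) for e in edges}
    best_w, best_c = 0, set()
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if all(frozenset(p) in adj for p in combinations(subset, 2)):
                w = sum(weights[v] for v in subset)
                if w > best_w:
                    best_w, best_c = w, set(subset)
    return best_c, best_w

# 5 candidate packet combinations; weight = expected delay reduction.
weights = [3, 2, 4, 2, 1]
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
clique, weight = max_weight_clique(5, weights, edges)
```

    The chosen clique corresponds to the set of packets XORed into the next transmission; the LG-IDNC refinement changes the vertices and weights, not the clique search itself.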

  7. Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2013-10-01

    In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers, and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression of the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared, through extensive simulations, to the two blind graph update approaches proposed in [2]. Results show that our algorithm outperforms the blind approaches in all situations and suffers only a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.

  8. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats.

    Science.gov (United States)

    De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro

    2017-01-01

    Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved by using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to a 22% increase in the amount of information extracted from neural activity, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  9. How Major Depressive Disorder affects the ability to decode multimodal dynamic emotional stimuli

    Directory of Open Access Journals (Sweden)

    FILOMENA SCIBELLI

    2016-09-01

    Full Text Available Most studies investigating the processing of emotions in depressed patients have reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) which do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDD) using dynamic audio/video stimuli. RMDD patients' performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (AD) and healthy controls (HCs). The experiments involve 27 RMDD patients (16 with acute depression, RMDD-A, and 11 in a compensation phase, RMDD-C), 16 AD patients and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio) and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-A patients, with acute depression, are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-C patients in the compensation phase. The different communication channels and the types of emotion play a significant role in limiting the decoding accuracy.

  10. Divide & Concur and Difference-Map BP Decoders for LDPC Codes

    CERN Document Server

    Yedidia, Jonathan S; Draper, Stark C

    2010-01-01

    The "Divide and Concur" (DC) algorithm, recently introduced by Gravel and Elser, can be considered a competitor to the belief propagation (BP) algorithm, in that both algorithms can be applied to a wide variety of constraint satisfaction, optimization, and probabilistic inference problems. We show that DC can be interpreted as a message-passing algorithm on a constraint graph, which helps make the comparison with BP clearer. The "difference-map" dynamics of the DC algorithm enables it to avoid "traps", which may be related to the "trapping sets" or "pseudo-codewords" that plague BP decoders of low-density parity-check (LDPC) codes in the error-floor regime. We investigate two decoders for LDPC codes based on these ideas. The first decoder is based directly on DC, while the second borrows the important "difference-map" concept from the DC algorithm and translates it into a BP-like decoder. We show that this "difference-map belief propagation" (DMBP) decoder has drama...

  11. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  12. Extracting duration information in a picture category decoding task using hidden Markov Models

    Science.gov (United States)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
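    The Viterbi path on which the duration decoding rests is the standard dynamic-programming most-likely hidden state sequence. A minimal two-state sketch (the probabilities are toy values, not the study's trained models):

```python
import math

def viterbi_path(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence: dynamic programming in
    log-space with one back-pointer per state and time step."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, lp = max(
                ((p, V[-1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda t: t[1])
            col[s] = lp + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

states = ('A', 'B')
start_p = {'A': 0.6, 'B': 0.4}
trans_p = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit_p = {'A': {0: 0.9, 1: 0.1}, 'B': {0: 0.2, 1: 0.8}}
path = viterbi_path([0, 0, 1, 1], states, start_p, trans_p, emit_p)
```

    The duration information the study extracts comes from how long such a path dwells in a given state, which is available for free once the path is computed.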

  13. The development of working memory from kindergarten to first grade in children with different decoding skills.

    Science.gov (United States)

    Nevo, Einat; Breznitz, Zvia

    2013-02-01

    This study investigated the development of working memory ability (measured by tasks assessing all four working memory components) from the end of kindergarten to the end of first grade (the first year in which reading is taught in school) and the relationship between working memory abilities in kindergarten and first grade and reading skills in first grade. A sample of 97 children who participated in Nevo and Breznitz's earlier study [Journal of Experimental Child Psychology, 109 (2011) 73-90] was divided into two groups according to their decoding skills, resulting in 24 poor decoders and 73 typical decoders. The entire cohort improved significantly on all of the working memory measures from kindergarten to first grade, with phonological complex memory at both time points showing the highest correlations with reading skills in first grade. However, differences were found between the two decoding groups: poor decoders exhibited lower abilities on most working memory measures, performed significantly worse on tests of all three reading skills (decoding, reading comprehension, and reading speed), and showed higher correlation coefficients between reading skills. Findings suggest that even before formal teaching of reading begins, it is important to reinforce working memory abilities in order to maximize future reading achievements.

  14. Utilizing Cross-Layer Information to Improve Performance in JPEG2000 Decoding

    Directory of Open Access Journals (Sweden)

    Hannes Persson

    2007-01-01

    Full Text Available We focus on wireless multimedia communication and investigate how cross-layer information can be used to improve performance at the application layer, using JPEG2000 as an example. The cross-layer information is in the form of soft information from the physical layer. The soft information, which is supplied by a soft decision demodulator, yields reliability measures for the received bits and is fed into two soft input iterative JPEG2000 image decoders. When errors are detected with the error detecting mechanisms in JPEG2000, the decoders utilize the soft information to point out likely transmission errors. Hence, the decoders can correct errors and increase the image quality without making time-consuming retransmissions. We believe that the proposed decoding method utilizing soft information is suitable for a general IP-based network and that it keeps the principles of a layered structure of the protocol stack intact. Further, experimental results with images transmitted over a simulated wireless channel show that a simple decoding algorithm that utilizes soft information can give high gains in image quality compared to the standard hard-decision decoding.

  15. LDPC Decoder with an Adaptive Wordwidth Datapath for Energy and BER Co-Optimization

    Directory of Open Access Journals (Sweden)

    Tinoosh Mohsenin

    2013-01-01

    A low-density parity-check (LDPC) decoder using an adaptive wordwidth datapath is presented. The decoder switches between a Normal Mode and a reduced-wordwidth Low Power Mode. Signal toggling is reduced as variable node processing inputs change in fewer bits. The duration of time that the decoder stays in a given mode is optimized for power and BER requirements and the received SNR. The paper explores different Low Power Mode algorithms to reduce the wordwidth and their implementations. Analysis of the BER performance and power consumption from fixed-point numerical and post-layout power simulations, respectively, is presented for a fully parallel 10GBASE-T LDPC decoder in 65 nm CMOS. A 5.10 mm2 low power decoder implementation achieves 85.7 Gbps while operating at 185 MHz and dissipates 16.4 pJ/bit at 1.3 V with early termination. At 0.6 V the decoder throughput is 9.3 Gbps (greater than the 6.4 Gbps required for 10GBASE-T) while dissipating an average power of 31 mW. This is 4.6x lower than the state-of-the-art reported power, with an SNR loss of 0.35 dB.

  16. TCP Traffic Control Evaluation and Reduction over Wireless Networks Using Parallel Sequential Decoding Mechanism

    Directory of Open Access Journals (Sweden)

    Ramazan Aygün

    2007-11-01

    Full Text Available The assumption of TCP-based protocols that packet error (loss or damage) is due to network congestion does not hold for wireless networks. For wireless networks, it is important to reduce the number of retransmissions to improve the effectiveness of TCP-based protocols. In this paper, we consider improvement at the data link layer for systems that use stop-and-wait ARQ, as in the IEEE 802.11 standard. We show that increasing the buffer size will not solve the actual problem; moreover, it is likely to degrade the quality of delivery (QoD). We first study a wireless router system model with a sequential convolutional decoder for error detection and correction, in order to investigate the QoD of flow and error control. To overcome the problems associated with high packet error rates, we propose a wireless router system with parallel sequential decoders. We simulate our systems and report performance in terms of average buffer occupancy, blocking probability, probability of decoding failure, system throughput, and channel throughput. We have studied these performance metrics for different channel conditions, packet arrival rates, decoding time-out limits, system capacities, and numbers of sequential decoders. Our results show that parallel sequential decoders have a great impact on system performance and increase QoD significantly.

  17. TCP Traffic Control Evaluation and Reduction over Wireless Networks Using Parallel Sequential Decoding Mechanism

    Directory of Open Access Journals (Sweden)

    Aygün Ramazan

    2007-01-01

    Full Text Available The assumption of TCP-based protocols that packet error (loss or damage) is due to network congestion does not hold for wireless networks. For wireless networks, it is important to reduce the number of retransmissions to improve the effectiveness of TCP-based protocols. In this paper, we consider improvement at the data link layer for systems that use stop-and-wait ARQ, as in the IEEE 802.11 standard. We show that increasing the buffer size will not solve the actual problem; moreover, it is likely to degrade the quality of delivery (QoD). We first study a wireless router system model with a sequential convolutional decoder for error detection and correction, in order to investigate the QoD of flow and error control. To overcome the problems associated with high packet error rates, we propose a wireless router system with parallel sequential decoders. We simulate our systems and report performance in terms of average buffer occupancy, blocking probability, probability of decoding failure, system throughput, and channel throughput. We have studied these performance metrics for different channel conditions, packet arrival rates, decoding time-out limits, system capacities, and numbers of sequential decoders. Our results show that parallel sequential decoders have a great impact on system performance and increase QoD significantly.

  18. Partial Decode-Forward Binning Schemes for the Causal Cognitive Relay Channels

    CERN Document Server

    Wu, Zhuohua

    2011-01-01

    The causal cognitive relay channel (CRC) has two sender-receiver pairs, in which the second sender obtains information from the first sender causally and assists the transmission of both senders. In this paper, we study both the full- and half-duplex modes. In each mode, we propose two new coding schemes built successively upon one another to illustrate the impact of different coding techniques. The first scheme, called partial decode-forward binning (PDF-binning), combines the ideas of partial decode-forward relaying and Gelfand-Pinsker binning. The second scheme, called Han-Kobayashi partial decode-forward binning (HK-PDF-binning), combines PDF-binning with Han-Kobayashi coding by further splitting rates and applying superposition coding, conditional binning and relaxed joint decoding. In both schemes, the second sender decodes a part of the message from the first sender, then uses the Gelfand-Pinsker binning technique to bin against the decoded message, but in such a way that allows both state nullif...

  19. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    Science.gov (United States)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision and reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. The approximation of the source distribution introduces redundancy that can be exploited for robust decoding in the presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and, at the same time, the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and [InlineEquation not available: see fulltext.]-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performance compared with Huffman and optimal arithmetic codes.

  20. Efficient decoding with steady-state Kalman filter in neural interface systems.

    Science.gov (United States)

    Malik, Wasim Q; Truccolo, Wilson; Brown, Emery N; Hochberg, Leigh R

    2011-02-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5±0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25±3 single units by a factor of 7.0±0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems.
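    The steady-state approximation can be sketched for a scalar model (illustrative only; the study applies the full multivariate filter to neural data, and the parameter values below are ours). Iterating the Riccati recursion shows the per-step gain converging to a fixed value, after which it can be frozen and reused:

    ```python
    def kalman_gains(a, h, q, r, p0, n):
        """Per-step Kalman gain for the scalar model
        x[k+1] = a*x[k] + w   (process noise variance q)
        y[k]   = h*x[k] + v   (measurement noise variance r),
        starting from prior variance p0."""
        gains, p = [], p0
        for _ in range(n):
            p_pred = a * a * p + q                   # predict
            k = p_pred * h / (h * h * p_pred + r)    # gain
            p = (1.0 - k * h) * p_pred               # update
            gains.append(k)
        return gains

    gains = kalman_gains(a=1.0, h=1.0, q=0.01, r=1.0, p0=1.0, n=100)
    k_ss = gains[-1]                                 # steady-state gain
    # first step whose gain is within 5% of the steady-state value
    settle = next(i for i, k in enumerate(gains) if abs(k - k_ss) <= 0.05 * k_ss)
    ```

    Once the gain has settled (after `settle` steps here), the decoder can skip the covariance prediction and update entirely and apply the frozen gain, which is where the reported runtime saving comes from.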

  1. Word Decoding Development during Phonics Instruction in Children at Risk for Dyslexia.

    Science.gov (United States)

    Schaars, Moniek M H; Segers, Eliane; Verhoeven, Ludo

    2017-05-01

    In the present study, we examined the early word decoding development of 73 children at genetic risk of dyslexia and 73 matched controls. We conducted monthly curriculum-embedded word decoding measures during the first 5 months of phonics-based reading instruction followed by standardized word decoding measures halfway and by the end of first grade. In kindergarten, vocabulary, phonological awareness, lexical retrieval, and verbal and visual short-term memory were assessed. The results showed that the children at risk were less skilled in phonemic awareness in kindergarten. During the first 5 months of reading instruction, children at risk were less efficient in word decoding and the discrepancy increased over the months. In subsequent months, the discrepancy prevailed for simple words but increased for more complex words. Phonemic awareness and lexical retrieval predicted the reading development in children at risk and controls to the same extent. It is concluded that children at risk are behind their typical peers in word decoding development starting from the very beginning. Furthermore, it is concluded that the disadvantage increased during phonics instruction and that the same predictors underlie the development of word decoding in the two groups of children. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Implementation Of Decoders for LDPC Block Codes and LDPC Convolutional Codes Based on GPUs

    CERN Document Server

    Zhao, Yue

    2012-01-01

    With the use of the belief propagation (BP) decoding algorithm, low-density parity-check (LDPC) codes can achieve near-Shannon-limit performance. LDPC codes can achieve bit error rates (BERs) as low as $10^{-15}$ even at a small bit-energy-to-noise-power-spectral-density ratio ($E_{b}/N_{0}$). In order to evaluate the error performance of LDPC codes, simulators running on central processing units (CPUs) are commonly used. However, the time taken to evaluate LDPC codes with very good error performance is excessive. For example, assuming 30 iterations are used in the decoder, our simulation results have shown that it takes a modern CPU more than 7 days to arrive at a BER of $10^{-6}$ for a code with length 18360. In this paper, efficient LDPC block-code decoders/simulators which run on graphics processing units (GPUs) are proposed. Both the standard BP decoding algorithm and the layered decoding algorithm are used. We also implement the decoder for LDPC convolutional codes (LDPCCC). The LDPCCC is derived from a pre-de...
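    As a minimal illustration of the belief-propagation decoding such simulators evaluate, here is a min-sum decoder over channel log-likelihood ratios. The parity-check matrix is a toy (7,4) Hamming code rather than a real low-density code, and the flooding schedule and all names are ours:

    ```python
    # Toy parity-check matrix (a (7,4) Hamming code, not actually low-density).
    H = [[1, 0, 1, 0, 1, 0, 1],
         [0, 1, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]

    def min_sum_decode(H, llr, iters=30):
        """Min-sum BP over channel LLRs (llr[i] > 0 favours bit 0).
        Flooding schedule; returns the hard decision."""
        m, n = len(H), len(llr)
        msg = {(j, i): 0.0 for j in range(m) for i in range(n) if H[j][i]}
        hard = [0 if v >= 0 else 1 for v in llr]
        for _ in range(iters):
            # variable-to-check: channel LLR plus the other check messages
            v2c = {(j, i): llr[i] + sum(msg[(k, i)] for k in range(m)
                                        if H[k][i] and k != j)
                   for (j, i) in msg}
            # check-to-variable: product of signs times minimum magnitude
            for (j, i) in msg:
                others = [v2c[(j, k)] for k in range(n) if H[j][k] and k != i]
                sign = 1.0
                for o in others:
                    sign = -sign if o < 0 else sign
                msg[(j, i)] = sign * min(abs(o) for o in others)
            total = [llr[i] + sum(msg[(j, i)] for j in range(m) if H[j][i])
                     for i in range(n)]
            hard = [0 if t >= 0 else 1 for t in total]
            if all(sum(H[j][i] * hard[i] for i in range(n)) % 2 == 0
                   for j in range(m)):
                break                  # all parity checks satisfied: stop early
        return hard
    ```

    Decoding the all-zero codeword with one unreliable, flipped position recovers it in a single iteration; a GPU implementation parallelizes exactly these per-edge message updates across many codewords at once.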

  3. Online decoding of object-based attention using real-time fMRI.

    Science.gov (United States)

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards those of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Complexity Analysis of Reed-Solomon Decoding over GF without Using Syndromes

    Directory of Open Access Journals (Sweden)

    Chen Ning

    2008-01-01

    Full Text Available Abstract There has been renewed interest recently in decoding Reed-Solomon (RS) codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis focuses on RS codes over characteristic-2 fields, for which some multiplicative FFT techniques are not applicable. Due to the moderate block lengths of RS codes in practice, our analysis is complete, without big-O notation. In addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. For high-rate RS codes, when compared to syndrome-based decoding algorithms, syndromeless decoding algorithms not only require more field operations regardless of implementation, but decoder architectures based on their direct implementations also have higher hardware costs and lower throughput. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.

  5. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    Full Text Available Abstract The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.
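    The wrap-around idea can be sketched for a toy rate-1/2, memory-2 tail-biting code (generators 7 and 5 octal, hard-decision Hamming metric; the parameters and names are ours, not the WiMAX configuration). Each decoding pass reuses the previous pass's final path metrics as initial metrics, approximating the unknown tail-biting start state:

    ```python
    G = ((1, 1, 1), (1, 0, 1))   # generator polynomials 7 and 5 (octal), memory 2

    def conv_encode_tb(bits):
        """Tail-biting encoding: the shift register starts in the state
        it will end in, so no terminating tail bits are needed."""
        state = (bits[-1], bits[-2])
        out = []
        for b in bits:
            window = (b,) + state
            out += [sum(x * y for x, y in zip(g, window)) % 2 for g in G]
            state = (b, state[0])
        return out

    def wava_decode(rx, n_bits, passes=3):
        """Wrap-around Viterbi: each pass starts from the previous pass's
        final path metrics, approximating the unknown tail-biting state."""
        states = [(a, b) for a in (0, 1) for b in (0, 1)]
        metric = {s: 0.0 for s in states}
        best_path = None
        for _ in range(passes):
            paths = {s: [] for s in states}
            cur = dict(metric)
            for t in range(n_bits):
                r = rx[2 * t: 2 * t + 2]
                new, newp = {}, {}
                for s in states:
                    for b in (0, 1):
                        window = (b,) + s
                        o = [sum(x * y for x, y in zip(g, window)) % 2 for g in G]
                        d = (o[0] != r[0]) + (o[1] != r[1])  # Hamming branch metric
                        ns = (b, s[0])
                        if ns not in new or cur[s] + d < new[ns]:
                            new[ns] = cur[s] + d
                            newp[ns] = paths[s] + [b]
                cur, paths = new, newp
            metric = cur                             # wrap metrics into next pass
            best_path = paths[min(states, key=lambda s: cur[s])]
        return best_path
    ```

    A longer decoding trellis corresponds to more wrap-around passes; as the abstract notes, a small number already comes close to maximum-likelihood decoding of the tail-biting code.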

  6. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically-sensitive design

    Science.gov (United States)

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert

    2013-01-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294

  7. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-12-21

    The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.
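    A minimal depth-first sphere decoder makes the complexity discussion concrete (a generic sketch for a real-valued 2x2 channel with a BPSK alphabet, not the paper's LAST-coded setting; all names are ours). The random quantity being bounded in such analyses is the number of visited tree nodes, which depends on how aggressively the shrinking radius prunes the search:

    ```python
    import math

    def qr(Hm):
        """Thin QR via classical Gram-Schmidt (square, full-rank assumed)."""
        n = len(Hm)
        cols = [[Hm[i][j] for i in range(n)] for j in range(n)]
        Q, R = [], [[0.0] * n for _ in range(n)]
        for j, v in enumerate(cols):
            w = v[:]
            for i, q in enumerate(Q):
                R[i][j] = sum(a * b for a, b in zip(q, v))
                w = [a - R[i][j] * b for a, b in zip(w, q)]
            R[j][j] = math.sqrt(sum(a * a for a in w))
            Q.append([a / R[j][j] for a in w])
        return Q, R

    def sphere_decode(Hm, y, radius):
        """Depth-first sphere decoder for y = H x + n with x in {-1,+1}^n.
        Returns the best symbol vector and the number of visited tree nodes."""
        n = len(Hm)
        Q, R = qr(Hm)
        z = [sum(Q[i][k] * y[k] for k in range(n)) for i in range(n)]  # Q^T y
        best, best_d, visited = None, radius ** 2, 0

        def search(level, x, partial):
            nonlocal best, best_d, visited
            if level < 0:
                if partial < best_d:
                    best, best_d = x[:], partial
                return
            for s in (-1.0, 1.0):
                visited += 1
                x[level] = s
                resid = z[level] - sum(R[level][k] * x[k] for k in range(level, n))
                d = partial + resid * resid
                if d <= best_d:          # prune subtrees outside the sphere
                    search(level - 1, x, d)

        search(n - 1, [0.0] * n, 0.0)
        return best, visited
    ```

    On a noiseless channel the decoder finds the transmitted vector while visiting fewer nodes than exhaustive enumeration; at low SNR or high rate the pruning weakens, which is exactly the tail behaviour the abstract characterizes.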

  8. CHANNEL ESTIMATION FOR ITERATIVE DECODING OVER FADING CHANNELS

    Institute of Scientific and Technical Information of China (English)

    K.H.Sayhood; WuLenan

    2002-01-01

    A method of coherent detection and channel estimation for punctured convolutional coded binary Quadrature Amplitude Modulation (QAM) signals transmitted over a frequency-flat Rayleigh fading channel, as used for digital radio broadcasting transmission, is presented. Some known symbols are inserted in the encoded data stream to enhance the channel estimation process. The pilot symbols are used to replace the existing parity symbols, so no bandwidth expansion is required. An iterative algorithm that uses decoding information as well as the information contained in the known symbols is used to improve the channel parameter estimate. The scheme complexity grows exponentially with the channel estimation filter length. The performance of the system is compared, for a normalized fading rate, with both perfect coherent detection (corresponding to perfect knowledge of the fading process and noise variance) and differential detection of Differential Amplitude Phase Shift Keying (DAPSK). The tradeoff between simplicity of implementation and bit-error-rate performance of the different techniques is also compared.

  9. A Generalized Ideal Observer Model for Decoding Sensory Neural Responses

    Directory of Open Access Journals (Sweden)

    Gopathy ePurushothaman

    2013-09-01

    Full Text Available We show that many ideal observer models used to decode neural activity can be generalized to a conceptually and analytically simple form. This enables us to study the statistical properties of this class of ideal observer models in a unified manner. We consider in detail the problem of estimating the performance of this class of models. We formulate the problem de novo by deriving two equivalent expressions for the performance and introducing the corresponding estimators. We obtain a lower bound on the number of observations (N) required for the estimate of the model performance to lie within a specified confidence interval at a specified confidence level. We show that these estimators are unbiased and consistent, with variance approaching zero at the rate of 1/N. We find that the maximum likelihood estimator for the model performance is not guaranteed to be the minimum variance estimator even for some simple parametric forms (e.g., exponential) of the underlying probability distributions. We discuss the application of these results for designing and interpreting neurophysiological experiments that employ specific instances of this ideal observer model.
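    The 1/N variance behaviour of such performance estimators can be checked with a small Monte Carlo sketch (a hypothetical two-alternative forced-choice Gaussian observer, not the paper's general model; all names are ours): the estimate of proportion correct is unbiased, and quadrupling the number of observations cuts its variance by roughly four:

    ```python
    import random
    import statistics

    def estimate_pc(n_obs, seed, d_prime=1.0):
        """Monte Carlo estimate of an ideal observer's proportion correct in a
        hypothetical 2AFC task: pick the larger of a signal and a noise sample."""
        rng = random.Random(seed)
        hits = sum(rng.gauss(d_prime, 1.0) > rng.gauss(0.0, 1.0)
                   for _ in range(n_obs))
        return hits / n_obs

    # Repeat the experiment over many seeds at N = 100 and N = 400 observations.
    small = [estimate_pc(100, seed) for seed in range(200)]
    big = [estimate_pc(400, seed + 1000) for seed in range(200)]
    ```

    The mean of `small` sits near the true proportion correct (about 0.76 for d' = 1), and the variance across repetitions at N = 400 is about a quarter of that at N = 100, matching the 1/N rate.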

  10. Decoding Reed-Muller Codes beyond Half the Minimum Distance

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen; Jakobsen, Thomas

    1999-01-01

    Inspired by Sudan's recent algorithm for Reed-Solomon codes, we propose an efficient method for decoding $r$-th order Reed-Muller codes of length $2^m$ which can correct errors beyond half the minimum distance. This procedure involves interpolating $Q\\in\\ff_2[x_1,\\ldots,x_m,y]$, a polynomial... a novel, yet simple polynomial-time factorization algorithm for multivariate boolean polynomials that produces generators for the coset of factors. Let $p=2^{-\\lambda}$ be the probability of algorithm failure and assume that the weights of a Reed-Muller code are approximately binomially distributed... This assumption is supported by known weight distributions for some short Reed-Muller codes. Then with probability at least $1-p$, the algorithm corrects \\begin{equation}\\label{eq_cond}\\tau\\leq\\max_{0\\leq\\rho\\leq m}\\,\\min\\left\\{2^m-\\sum_{i=0}^{r+\\rho} \\binom{m}{i}-\\lambda,\\sum_{i=0}^\\rho\\binom{m}{i}-1\\right...

  11. Toward a stereoscopic encoder/decoder for digital cinema

    Science.gov (United States)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2008-02-01

    The digital cinema is very challenging because it represents tomorrow's way of capturing, post-producing and projecting movies. Specifications for this medium are provided by DCI (Digital Cinema Initiatives), founded by the Hollywood Majors. Among the specifications are requirements on resolution, bitrate and JPEG2000 compression. Moreover, the market assumes that 3D could raise the turnover of the cinema industry. The problem is the availability of two streams (left and right), which doubles the amount of data and requires adapted devices to decode and project movies. The cinema industry, represented by the stereoscopic group in SMPTE, has expressed the need for a unique master that combines the two streams in one. This paper focuses on the generation of a master from one of the streams and the embedding of the redundant information as metadata in the JPEG2000 code-stream or MXF. The idea is to use the reference image in addition to some metadata to reconstruct the target image. The metadata represent the residual image and the contour description. The quality of the reconstructed images depends on the compression ratio of the residual image. The obtained results are encouraging, and the choice between JPEG2000 metadata embedding and MXF metadata remains to be made.

  12. Storage Enforcement with Kolmogorov Complexity and List Decoding

    CERN Document Server

    Husain, Mohammad Iftekhar; Rudra, Atri; Uurtamo, Steve

    2011-01-01

    We consider the following problem that arises in outsourced storage: a user stores her data $x$ on a remote server but wants to audit the server at some later point to make sure it actually did store $x$. The goal is to design a (randomized) verification protocol that has the property that if the server passes the verification with some reasonably high probability then the user can rest assured that the server is storing $x$. In this work we present an optimal solution (in terms of the user's storage and communication) while at the same time ensuring that a server that passes the verification protocol with any reasonable probability will store, to within a small \\textit{additive} factor, $C(x)$ bits of information, where $C(x)$ is the plain Kolmogorov complexity of $x$. (Since we cannot prevent the server from compressing $x$, $C(x)$ is a natural upper bound.) The proof of security of our protocol combines Kolmogorov complexity with list decoding and unlike previous work that relies upon cryptographic assumpt...

  13. Competitive minimax universal decoding for several ensembles of random codes

    CERN Document Server

    Akirav, Yaniv

    2007-01-01

    Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMC's)), and various ensembles of random codes, are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with Chernoff bound and Gallager's techniques for the analysis of error exponents. In particular, we derive a single--letter expression for the largest, universally achievable fraction $\\xi$ of the optimum error exponent pertaining to the optimum ML decoding. Moreover, a simpler single--letter expression for a lower bound to $\\xi$ is presented. To demonstrate the tightness of this lower bound, we use it to show that $\\xi=1$, for the binary symmetric channel (BSC), when the random coding distribution is uniform over: (i) all codes (of a given rate), and (ii) all linear codes, in agreement with well--known results. We also show that $\\xi=1$ for the uniform ensemble of systematic linear codes, and for that of time--varying convolutional codes in the bit...

  14. Optimization of speed and accuracy of decoding in translation.

    Science.gov (United States)

    Wohlgemuth, Ingo; Pohl, Corinna; Rodnina, Marina V

    2010-11-03

    The speed and accuracy of protein synthesis are fundamental parameters for understanding the fitness of living cells, the quality control of translation, and the evolution of ribosomes. In this study, we analyse the speed and accuracy of the decoding step under conditions reproducing the high speed of translation in vivo. We show that error frequency is close to 10⁻³, consistent with the values measured in vivo. Selectivity is predominantly due to the differences in k(cat) values for cognate and near-cognate reactions, whereas the intrinsic affinity differences are not used for tRNA discrimination. Thus, the ribosome seems to be optimized towards high speed of translation at the cost of fidelity. Competition with near- and non-cognate ternary complexes reduces the rate of GTP hydrolysis in the cognate ternary complex, but does not appreciably affect the rate-limiting tRNA accommodation step. The GTP hydrolysis step is crucial for the optimization of both the speed and accuracy, which explains the necessity for the trade-off between the two fundamental parameters of translation.

  15. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    OpenAIRE

    Liu Yu-Sun; Tsai Yao-Yu

    2011-01-01

    Abstract The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates...

  16. Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices.

    Science.gov (United States)

    Schaffelhofer, Stefan; Agudelo-Toro, Andres; Scherberger, Hansjörg

    2015-01-21

    Despite recent advances in decoding cortical activity for motor control, the development of hand prosthetics remains a major challenge. To reduce the complexity of such applications, higher cortical areas that also represent motor plans rather than just the individual movements might be advantageous. We investigated the decoding of many grip types using spiking activity from the anterior intraparietal (AIP), ventral premotor (F5), and primary motor (M1) cortices. Two rhesus monkeys were trained to grasp 50 objects in a delayed task while hand kinematics and spiking activity from six implanted electrode arrays (total of 192 electrodes) were recorded. Offline, we determined 20 grip types from the kinematic data and decoded these hand configurations and the grasped objects with a simple Bayesian classifier. When decoding from AIP, F5, and M1 combined, the mean accuracy was 50% (using planning activity) and 62% (during motor execution) for predicting the 50 objects (chance level, 2%) and substantially larger when predicting the 20 grip types (planning, 74%; execution, 86%; chance level, 5%). When decoding from individual arrays, objects and grip types could be predicted well during movement planning from AIP (medial array) and F5 (lateral array), whereas M1 predictions were poor. In contrast, predictions during movement execution were best from M1, whereas F5 performed only slightly worse. These results demonstrate for the first time that a large number of grip types can be decoded from higher cortical areas during movement preparation and execution, which could be relevant for future neuroprosthetic devices that decode motor plans.
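    A simple Bayesian classifier of the kind mentioned can be sketched with synthetic Poisson spike counts (the grip labels, unit count, and firing rates below are invented for illustration, not the recorded data): each class is scored by its Poisson log-likelihood under uniform priors, and the highest-scoring grip wins:

    ```python
    import math
    import random

    def poisson_sample(rng, lam):
        """Knuth's method for sampling a Poisson count (fine for small rates)."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def log_poisson(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    def classify(counts, class_rates):
        """Naive-Bayes grip decoder: independent Poisson likelihood per unit,
        uniform priors; return the highest-scoring grip."""
        return max(class_rates,
                   key=lambda c: sum(log_poisson(k, lam)
                                     for k, lam in zip(counts, class_rates[c])))

    # Invented tuning: mean spike counts per recorded unit for three grip types.
    class_rates = {
        "power":     [10, 10, 2, 2, 2, 2],
        "precision": [2, 2, 10, 10, 2, 2],
        "hook":      [2, 2, 2, 2, 10, 10],
    }
    rng = random.Random(0)
    trials = [(grip, [poisson_sample(rng, lam) for lam in lams])
              for _ in range(300) for grip, lams in class_rates.items()]
    accuracy = sum(classify(c, class_rates) == g for g, c in trials) / len(trials)
    ```

    With well-separated tuning the decoder performs far above the 1/3 chance level; the study's harder problem is that real planning- and execution-period tuning curves overlap across 20 grip types and 50 objects.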

  17. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Kalberlah, Christian; Haynes, John-Dylan; Friederici, Angela D

    2012-05-01

    The present magnetoencephalography study investigated whether the brain states of early syntactic and auditory-perceptual processes can be decoded from single-trial recordings with a multivariate pattern classification approach. In particular, it was investigated whether the early neural activation patterns in response to rule violations in basic auditory perception and in high cognitive processes (syntax) reflect a functional organization that largely generalizes across individuals or is subject-specific. On this account, subjects were auditorily presented with correct sentences, syntactically incorrect sentences, correct sentences including an interaural time difference change, and sentences containing both violations. For the analysis, brain state decoding was carried out within and across subjects with three pairwise classifications. Neural patterns elicited by each of the violation sentences were separately classified with the patterns elicited by the correct sentences. The results revealed the highest decoding accuracies over temporal cortex areas for all three classification types. Importantly, both the magnitude and the spatial distribution of decoding accuracies for the early neural patterns were very similar for within-subject and across-subject decoding. At the same time, across-subject decoding suggested a hemispheric bias, with the most consistent patterns in the left hemisphere. Thus, the present data show that not only auditory-perceptual processing brain states but also cognitive brain states of syntactic rule processing can be decoded from single-trial brain activations. Moreover, the findings indicate that the neural patterns in response to syntactic cognition and auditory perception reflect a functional organization that is highly consistent across individuals.

  18. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    Directory of Open Access Journals (Sweden)

    Edouard Gentaz

    Full Text Available Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.

  19. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    Science.gov (United States)

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.

  20. Decoding the ERD/ERS: influence of afferent input induced by a leg assistive robot

    Directory of Open Access Journals (Sweden)

    Giuseppe eLisi

    2014-05-01

    Full Text Available This paper investigates the influence of leg afferent input, induced by a leg assistive robot, on the decoding performance of a BMI system. Specifically, it focuses on a decoder based on the event-related (de)synchronization (ERD/ERS) of the sensorimotor area. The EEG experiment, performed with healthy subjects, is structured as a 3x2 factorial design consisting of two factors: 'finger tapping task' and 'leg condition'. The former is divided into three levels (BMI classes): left hand finger tapping, right hand finger tapping and no movement (Idle); the latter is composed of two levels: leg perturbed (Pert) and leg not perturbed (NoPert). Specifically, the subjects' leg was periodically perturbed by an assistive robot in 5 out of 10 sessions of the experiment and not moved in the remaining sessions. The aim of this study is to verify that the decoding performance of the finger tapping task is comparable between the two conditions NoPert and Pert. Accordingly, a classifier is trained to output the class of the finger tapping, given as input the features associated with the ERD/ERS. Individually for each subject, the decoding performance is statistically compared between the NoPert and Pert conditions. Results show that the decoding performance is notably above chance for all the subjects under both conditions. Moreover, the statistical comparison does not highlight a significant difference between NoPert and Pert in any subject, which is confirmed by feature visualisation.

  1. ERASED-CHASE DECODING FOR RS-CODED MPSK SIGNALING OVER A RAYLEIGH FADING CHANNEL

    Institute of Scientific and Technical Information of China (English)

    Xu Chaojun; Sun Yue; Wang Xinmei

    2007-01-01

    In this paper, a novel dual metric, the maximum and minimum Squared Euclidean Distance Increment (SEDI) brought by changing the hard-decision symbol, is introduced to measure the reliability of the received M-ary Phase Shift Keying (MPSK) symbols over a Rayleigh fading channel. Based on the dual metric, a Chase-type soft decoding algorithm, called the erased-Chase algorithm, is developed for Reed-Solomon (RS) coded MPSK schemes. The proposed algorithm treats the unreliable symbols with small maximum SEDI as erasures, and tests the non-erased unreliable symbols with small minimum SEDI as the Chase-2 algorithm does. By introducing an optimality test into the decoding procedure, a much greater reduction in decoding complexity can be achieved. Simulation results for the RS(63,42,22)-coded 8-PSK scheme over a Rayleigh fading channel show that the proposed algorithm provides a very efficient tradeoff between decoding complexity and error performance. Finally, an adaptive scheme for the number of erasures is introduced into the decoding algorithm.
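    The dual SEDI reliability measure can be illustrated for a single received MPSK sample. This is a sketch from the abstract's definition (the squared-distance increase incurred by swapping the hard-decision symbol for each competitor); fading-gain normalization and the erasure thresholds are omitted:

    ```python
    import numpy as np

    def psk_constellation(M):
        # Unit-energy M-PSK points.
        return np.exp(2j * np.pi * np.arange(M) / M)

    def sedi_metrics(r, M=8):
        # For received sample r: SEDI to every competitor of the hard decision.
        # A small maximum SEDI marks an erasure candidate; a small minimum SEDI
        # marks a symbol to be tested Chase-2 style.
        d2 = np.abs(r - psk_constellation(M)) ** 2
        hard = int(np.argmin(d2))
        inc = np.delete(d2 - d2[hard], hard)
        return hard, inc.min(), inc.max()
    ```

    A sample halfway between two constellation points gets a near-zero minimum SEDI (unreliable), while a sample on top of a point gets large increments for every competitor (reliable).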

  2. An Improved Motion JPEG2000 Decoder for Error Concealment of Segmentation Symbol Faults

    Directory of Open Access Journals (Sweden)

    Mahmoud Reza Hashemi

    2008-03-01

    Full Text Available The motion JPEG2000 (MJP2) video coding standard uses intraframe compression and provides practical features such as fast coding, easy editing, and error robustness. Error robustness and concealment are crucial requirements of any decoder in error-prone applications. In this paper, a new error concealment method for the MJP2 decoder is proposed. The proposed method exploits the remaining temporal redundancies within MJP2 coded video sequences for error concealment. The realized MJP2 decoder is able to detect and remove the artifacts produced by highly damaged bit planes that were not detected by the segmentation symbol, one of the error resilience tools in MJP2. Simulation results indicate that the proposed decoder can effectively remove such artifacts in decoded erroneous MJP2 bit streams and improve the PSNR of the corrupted frames by up to 6.8 dB. An average of 0.2 dB improvement has been observed over a relatively large number of test cases and various error rates.

  3. An Improved Motion JPEG2000 Decoder for Error Concealment of Segmentation Symbol Faults

    Directory of Open Access Journals (Sweden)

    Hashemi MahmoudReza

    2008-01-01

    Full Text Available The motion JPEG2000 (MJP2) video coding standard uses intraframe compression and provides practical features such as fast coding, easy editing, and error robustness. Error robustness and concealment are crucial requirements of any decoder in error-prone applications. In this paper, a new error concealment method for the MJP2 decoder is proposed. The proposed method exploits the remaining temporal redundancies within MJP2 coded video sequences for error concealment. The realized MJP2 decoder is able to detect and remove the artifacts produced by highly damaged bit planes that were not detected by the segmentation symbol, one of the error resilience tools in MJP2. Simulation results indicate that the proposed decoder can effectively remove such artifacts in decoded erroneous MJP2 bit streams and improve the PSNR of the corrupted frames by up to 6.8 dB. An average of 0.2 dB improvement has been observed over a relatively large number of test cases and various error rates.

  4. REDUCED-COMPLEXITY DECODING ALGORITHMS FOR UNITARY SPACE-TIME CODES

    Institute of Scientific and Technical Information of China (English)

    Su Xin; Yi Kechu; Tian Bin; Sun Yongjun

    2007-01-01

    Two reduced-complexity decoding algorithms for unitary space-time codes based on a tree-structured constellation are presented. In this letter, the original unitary space-time constellation is divided into several groups, each treated as the leaf-node set of a subtree. Choosing the unitary signals that represent each group as the roots of these subtrees generates a tree-structured constellation. The proposed tree-search decoder decides to which subtree the received signal belongs by searching the set of subtree roots; the final decision is made after a local search in the leaf-node set of the selected subtree. The adjacent-subtree joint decoder performs a joint search in the selected subtree and its "surrounding" subtrees, which improves the Bit Error Rate (BER) performance of the pure tree-search method. An exhaustive search over the whole constellation is avoided in our proposed decoding algorithms, so a lower complexity is obtained compared to that of Maximum Likelihood (ML) decoding. Simulation results are also provided to demonstrate the feasibility of these new methods.
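    The two-stage idea, a coarse search over subtree roots followed by a local search among the leaves of the winning subtree, can be sketched generically. The Euclidean metric and the toy grouping below are illustrative assumptions; the actual scheme groups unitary matrices and uses the corresponding ML metric:

    ```python
    import numpy as np

    def tree_search_decode(r, groups):
        # groups: list of (root, leaves) pairs; r: received vector.
        # Stage 1: pick the nearest subtree root.
        g = min(range(len(groups)), key=lambda i: np.linalg.norm(r - groups[i][0]))
        # Stage 2: local search among that subtree's leaves only.
        leaves = groups[g][1]
        leaf = min(range(len(leaves)), key=lambda k: np.linalg.norm(r - leaves[k]))
        return g, leaf
    ```

    The cost drops from one metric evaluation per constellation point to roughly (number of roots + leaves per subtree) evaluations, at some BER penalty that the adjacent-subtree joint decoder then recovers by also searching neighboring subtrees.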

  5. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
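    For reference, the core algorithm can be shown on the classic rate-1/2, constraint-length-3 binary convolutional code with generators (7, 5) in octal; this textbook code is a stand-in for illustration, not the TCM scheme designed in the report:

    ```python
    def parity(x):
        return bin(x).count("1") & 1

    def conv_encode(bits, state=0):
        # Rate-1/2, constraint length 3, generators (7, 5) octal.
        out = []
        for b in bits:
            x = (b << 2) | state                       # [input, s1, s0]
            out += [parity(x & 0b111), parity(x & 0b101)]
            state = x >> 1                             # shift register
        return out

    def viterbi_decode(rx, n_states=4):
        # Hard-decision Viterbi: survivor paths with Hamming branch metrics.
        INF = float("inf")
        metrics = [0.0] + [INF] * (n_states - 1)       # start in the zero state
        paths = [[] for _ in range(n_states)]
        for i in range(0, len(rx), 2):
            r0, r1 = rx[i], rx[i + 1]
            new_m = [INF] * n_states
            new_p = [[] for _ in range(n_states)]
            for s in range(n_states):
                if metrics[s] == INF:
                    continue
                for b in (0, 1):
                    x = (b << 2) | s
                    ns = x >> 1
                    m = metrics[s] + (parity(x & 0b111) != r0) + (parity(x & 0b101) != r1)
                    if m < new_m[ns]:
                        new_m[ns] = m
                        new_p[ns] = paths[s] + [b]
            metrics, paths = new_m, new_p
        best = min(range(n_states), key=lambda s: metrics[s])
        return paths[best]
    ```

    The add-compare-select inner loop over states is exactly the piece that high-speed architectures parallelize; TCM changes the branch metric (Euclidean distance to signal points) and enlarges the branch alphabet, not the trellis search itself.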

  6. Parallel LDPC Decoding on GPUs Using a Stream-Based Computing Approach

    Institute of Scientific and Technical Information of China (English)

    Gabriel Falcão; Shinichi Yamagiwa; Vitor Silva; Leonel Sousa

    2009-01-01

    Low-Density Parity-Check (LDPC) codes are powerful error correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase of the computational power of commodity graphics processing units (GPUs), new opportunities have arisen to develop general purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, because of irregular memory access patterns, memory bandwidth and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing the kernels' execution regardless of the GPU manufacturer and operating system. Moreover, to relatively assess the obtained results, we have also implemented LDPC decoders on general purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the solution proposed here efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
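    The message-passing computation that such decoders parallelize can be sketched serially with the common min-sum approximation of belief propagation. The small parity-check matrix and the LLR sign convention (positive favors bit 0) used below are illustrative assumptions:

    ```python
    import numpy as np

    def minsum_decode(H, llr, iters=20):
        # H: binary parity-check matrix (m x n); llr: channel LLRs.
        m, n = H.shape
        M = np.zeros((m, n))                         # check-to-variable messages
        hard = (llr < 0).astype(int)
        for _ in range(iters):
            # Variable-to-check: posterior minus the incoming edge message.
            V = (llr + M.sum(axis=0)) - M
            for i in range(m):
                idx = np.flatnonzero(H[i])
                v = V[i, idx]
                sgn = np.where(v < 0, -1.0, 1.0)
                mag = np.abs(v)
                order = np.argsort(mag)
                m1, m2 = mag[order[0]], mag[order[1]]
                total_sgn = sgn.prod()
                for k, j in enumerate(idx):
                    # Min-sum: sign product and minimum magnitude over the
                    # other edges of this check.
                    other_min = m2 if k == order[0] else m1
                    M[i, j] = total_sgn * sgn[k] * other_min
            hard = ((llr + M.sum(axis=0)) < 0).astype(int)
            if not (H @ hard % 2).any():             # all checks satisfied
                break
        return hard
    ```

    On a GPU, the per-check and per-variable loops become the data-parallel kernels, and the Tanner-graph edge lists are laid out compactly to tame the irregular memory accesses the paper discusses.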

  7. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  8. Modified Golden Codes for Improved Error Rates Through Low Complex Sphere Decoder

    Directory of Open Access Journals (Sweden)

    K. Thilagam

    2013-05-01

    Full Text Available In recent years, golden codes have proven to exhibit a superior performance in a wireless MIMO (Multiple Input Multiple Output) scenario compared to any other code. However, a serious limitation associated with them is their increased decoding complexity. This paper attempts to resolve this challenge through a suitable modification of the golden code such that a less complex sphere decoder can be used without much compromise in error rates. In this paper, a minimum polynomial equation is introduced to obtain a reduced golden ratio (RGR) number for the golden code, which demands only a low-complexity decoding procedure. One of the attractive approaches used in this paper is that the effective channel matrix has been exploited to perform single symbol-wise decoding instead of grouped symbols, using a sphere decoder with a tree search algorithm. It has been observed that a low decoding complexity of O(q^1.5) is obtained, against O(q^2.5) for the conventional method. Simulation analysis shows that, in addition to reduced decoding complexity, improved error rates are also obtained.

  9. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm termed the instanton search algorithm (ISA) which, given a random input, generates a set of flips called the BSC-instanton and prove that: (a) the LP decoder fails for any set of flips with support vector including an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.
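    The minimality property of an instanton can be illustrated with the reduction step stubbed out against an abstract failure oracle. This greedy shrink is a simplification for illustration only; the actual ISA steers the search using the pseudo-codeword that the LP decoder outputs:

    ```python
    def shrink_to_instanton(flips, decoder_fails):
        # Greedily remove flips while the decoder still fails; the result is
        # minimal: removing any single remaining flip makes decoding succeed.
        flips = set(flips)
        assert decoder_fails(flips), "start from a decoder failure"
        changed = True
        while changed:
            changed = False
            for f in sorted(flips):
                if decoder_fails(flips - {f}):
                    flips -= {f}
                    changed = True
        return flips
    ```

    With an LP decoder in place of the toy oracle, the returned set plays the role of a BSC-instanton, and the statistics of such sets feed the error-floor prediction.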

  10. Word-decoding as a function of temporal processing in the visual system.

    Science.gov (United States)

    Holloway, Steven R; Náñez, José E; Seitz, Aaron R

    2013-01-01

    This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at three radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.

  11. Pairwise Check Decoding for LDPC Coded Two-Way Relay Block Fading Channels

    CERN Document Server

    Liu, Jianquan; Xu, Youyun

    2011-01-01

    Partial decoding is known to have the potential to achieve a larger rate region than that of full decoding in two-way relay (TWR) channels. Existing partial decoding realizations are however designed for Gaussian channels and with a static physical layer network coding (PLNC) mapping. In this paper, we propose a new channel coding solution at the relay, called pairwise check decoding (PCD), for low-density parity-check (LDPC) coded TWR systems over block fading channels. The main idea is to form a check relationship table (check-relation-tab) for the superimposed LDPC coded packet pair in the multiple access (MA) phase in conjunction with an adaptive PLNC mapping in the broadcast (BC) phase. Using PCD, we then present a partial decoding method, two-stage closest-neighbor clustering with PCD (TS-CNC-PCD), with the aim of minimizing the worst pairwise error performance. Moreover, a kind of correlative rows optimization, named as the minimum correlation optimization (MCO), is proposed for selecting the bet...

  12. Improving brain-machine interface performance by decoding intended future movements

    Science.gov (United States)

    Willett, Francis R.; Suminski, Aaron J.; Fagg, Andrew H.; Hatsopoulos, Nicholas G.

    2013-04-01

    Objective. A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time into the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Main Results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.

  13. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    Directory of Open Access Journals (Sweden)

    Mara Fölster

    2015-10-01

    Full Text Available Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors’ and the raters’ ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias.
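    A common form of such a correction is Wagner's unbiased hit rate, which discounts hits in categories a rater over-uses; treating it as the exact correction used in this study is an assumption:

    ```python
    import numpy as np

    def unbiased_hit_rates(conf):
        # conf[i, j]: trials with true emotion i answered as emotion j.
        # Raw hit rate: diagonal over row totals. Unbiased hit rate
        # (Wagner, 1993): squared diagonal over (row total * column total),
        # penalizing response categories the rater over-uses.
        conf = np.asarray(conf, dtype=float)
        raw = np.diag(conf) / conf.sum(axis=1)
        unbiased = np.diag(conf) ** 2 / (conf.sum(axis=1) * conf.sum(axis=0))
        return raw, unbiased
    ```

    If a rater answers "sad" indiscriminately, the sad column total inflates and the unbiased rate drops even though the raw hit rate for sadness looks high.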

  14. Word-decoding as a function of temporal processing in the visual system.

    Directory of Open Access Journals (Sweden)

    Steven R Holloway

    Full Text Available This study explored the relation between visual processing and word-decoding ability in a normal reading population. Forty participants were recruited at Arizona State University. Flicker fusion thresholds were assessed with an optical chopper using the method of limits with a 1-deg diameter green (543 nm) test field. Word decoding was measured using reading-word and nonsense-word decoding tests. A non-linguistic decoding measure was obtained using a computer program that consisted of Landolt C targets randomly presented in four cardinal orientations, at three radial distances from a focus point, for eight compass points, in a circular pattern. Participants responded by pressing the arrow key on the keyboard that matched the direction the target was facing. The results show a strong correlation between critical flicker fusion thresholds and scores on the reading-word, nonsense-word, and non-linguistic decoding measures. The data suggest that the functional elements of the visual system involved with temporal modulation and spatial processing may affect the ease with which people read.

  15. FAULT TOLERANT NANO-MEMORY WITH FAULT SECURE ENCODER AND DECODER

    Directory of Open Access Journals (Sweden)

    VIJAYKUMAR.K,

    2011-01-01

    Full Text Available Traditionally, memory cells were the only circuitry susceptible to transient faults; the supporting circuitry around the memory was assumed to be fault-free. Memory cells have been protected from soft errors for more than a decade, but due to the increase in the soft error rate of logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. In this paper, we present a new approach to designing fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSDs) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability.

  16. Word and Person Effects on Decoding Accuracy: A New Look at an Old Question

    Science.gov (United States)

    Gilbert, Jennifer K.; Compton, Donald L.; Kearns, Devin M.

    2011-01-01

    The purpose of this study was to extend the literature on decoding by bringing together two lines of research, namely person and word factors that affect decoding, using a crossed random-effects model. The sample was comprised of 196 English-speaking grade 1 students. A researcher-developed pseudoword list was used as the primary outcome measure. Because grapheme-phoneme correspondence (GPC) knowledge was treated as person and word specific, we are able to conclude that it is neither necessary nor sufficient for a student to know all GPCs in a word before accurately decoding the word. And controlling for word-specific GPC knowledge, students with lower phonemic awareness and slower rapid naming skill have lower predicted probabilities of correct decoding than counterparts with superior skills. By assessing a person-by-word interaction, we found that students with lower phonemic awareness have more difficulty applying knowledge of complex vowel graphemes compared to complex consonant graphemes when decoding unfamiliar words. Implications of the methodology and results are discussed in light of future research. PMID:21743750

  17. Provably Efficient Instanton Search Algorithm for LP decoding of LDPC codes over the BSC

    CERN Document Server

    Chilappagari, Shashi Kiran

    2008-01-01

    We consider Linear Programming (LP) decoding of a Low-Density Parity-Check (LDPC) code performing over the Binary Symmetric Channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. Following the approach of [1], we design an efficient algorithm termed the Instanton Search Algorithm (ISA) which, given a random input, generates a set of flips, called a BSC-instanton, such that the LP decoder decodes the instanton into a pseudo-codeword distinct from the all-zero codeword while any reduction of the instanton leads to the all-zero codeword. We prove that (a) the LP decoder fails for any set of flips with support vector including an instanton; (b) for any original input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of actual flips in the input. Repeated a sufficient number of times, the ISA yields the number of unique instantons of different sizes. We illustrate the performance of the algorithm on the [155,64,20] Tanner code and sho...

  18. Decoding Concrete and Abstract Action Representations During Explicit and Implicit Conceptual Processing.

    Science.gov (United States)

    Wurm, Moritz F; Ariani, Giacomo; Greenlee, Mark W; Lingnau, Angelika

    2016-08-01

    Action understanding requires a many-to-one mapping of perceived input onto abstract representations that generalize across concrete features. It is debated whether such abstract action concepts are encoded in ventral premotor cortex (PMv; motor hypothesis) or, alternatively, are represented in lateral occipitotemporal cortex (LOTC; cognitive hypothesis). We used fMRI-based multivoxel pattern analysis to decode observed actions at concrete and abstract, object-independent levels of representation. Participants observed videos of 2 actions involving 2 different objects, using either an explicit or implicit task with respect to conceptual action processing. We decoded concrete action representations by training and testing a classifier to discriminate between actions within each object category. To identify abstract action representations, we trained the classifier to discriminate actions in one object and tested the classifier on actions performed on the other object, and vice versa. Region-of-interest and searchlight analyses revealed decoding in LOTC at both concrete and abstract levels during both tasks, whereas decoding in PMv was restricted to the concrete level during the explicit task. In right inferior parietal cortex, decoding was significant for the abstract level during the explicit task. Our findings are incompatible with the motor hypothesis, but support the cognitive hypothesis of action understanding.
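    The cross-decoding logic, train on actions performed with one object and test on actions performed with the other (then average both directions), can be sketched with a nearest-centroid classifier standing in for the study's MVPA classifier; the synthetic feature layout in the usage is purely illustrative:

    ```python
    import numpy as np

    def nearest_centroid_fit(X, y):
        # One centroid per class in voxel-pattern space.
        classes = np.unique(y)
        return classes, np.array([X[y == c].mean(axis=0) for c in classes])

    def nearest_centroid_predict(model, X):
        classes, cents = model
        d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        return classes[d.argmin(axis=1)]

    def cross_decode(Xa, ya, Xb, yb):
        # Train on object A, test on object B, and vice versa; the mean of
        # the two accuracies estimates object-independent (abstract) coding.
        acc_ab = (nearest_centroid_predict(nearest_centroid_fit(Xa, ya), Xb) == yb).mean()
        acc_ba = (nearest_centroid_predict(nearest_centroid_fit(Xb, yb), Xa) == ya).mean()
        return (acc_ab + acc_ba) / 2
    ```

    Above-chance cross-decoding means the classifier's class boundary transfers across objects, which is the operational definition of an abstract action representation here.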

  19. Four-Dimensional Coded Modulation with Bit-wise Decoders for Future Optical Communications

    CERN Document Server

    Alvarado, Alex

    2014-01-01

    Coded modulation (CM) is the combination of forward error correction (FEC) and multilevel constellations. Coherent optical communication systems result in a four-dimensional (4D) signal space, which naturally leads to 4D-CM transceivers. A practically attractive design paradigm is to use a bit-wise decoder, where the detection process is (suboptimally) separated into two steps: soft-decision demapping followed by binary decoding. In this paper, bit-wise decoders are studied from an information-theoretic viewpoint. 4D constellations with up to 4096 constellation points are considered. Metrics to predict the post-FEC bit-error rate (BER) of bit-wise decoders are analyzed. The mutual information is shown to fail at predicting the post-FEC BER of bit-wise decoders and the so-called generalized mutual information is shown to be a much more robust metric. It is also shown that constellations that transmit and receive information in each polarization and quadrature independently (e.g., PM-QPSK, PM-16QAM, and PM-64QA...

  20. Delay Reduction for Instantly Decodable Network Coding in Persistent Channels With Feedback Imperfections

    KAUST Repository

    Douik, Ahmed S.

    2015-11-05

    This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for high time-correlated channels.
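    Since the exact maximum-weight clique is NP-hard, the greedy alternative can be sketched abstractly; the G-IDNC-specific vertex weights (expected delay reductions) and adjacency test (coding compatibility of packet combinations) are abstracted into callables here:

    ```python
    def greedy_max_weight_clique(vertices, weight, adjacent):
        # Visit vertices in decreasing weight; keep each one that is adjacent
        # to everything already selected, so the result is always a clique.
        clique = []
        for v in sorted(vertices, key=weight, reverse=True):
            if all(adjacent(v, u) for u in clique):
                clique.append(v)
        return clique
    ```

    In the G-IDNC setting, each selected vertex corresponds to a packet-receiver combination the sender can serve in one transmission, and the clique's total weight approximates the decoding-delay reduction of that slot.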