WorldWideScience

Sample records for binary message-passing decoders

  1. Analysis and Design of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2012-01-01

    Binary message-passing decoders for low-density parity-check (LDPC) codes are studied by using extrinsic information transfer (EXIT) charts. The channel delivers hard or soft decisions and the variable node decoder performs all computations in the L-value domain. A hard decision channel results...... message-passing decoders. Finally, it is shown that errors on cycles consisting only of degree two and three variable nodes cannot be corrected and a necessary and sufficient condition for the existence of a cycle-free subgraph is derived....

  2. EXIT Chart Analysis of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2007-01-01

    Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can...... be extended to channels with larger output alphabets. By increasing the output alphabet from hard decisions to four symbols, a gain of more than 1.0 dB is achieved using optimized codes. For this code optimization, the mixing property of EXIT functions has to be modified to the case of binary message-passing...
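
    The hard-decision special case mentioned above can be sketched in a few lines. Below is a minimal, illustrative Gallager-B-style bit-flipping decoder, not the paper's L-value formulation; the parity-check matrices and the flipping threshold `b` are assumptions chosen for illustration.

```python
import numpy as np

def gallager_b(H, r, max_iter=20, b=None):
    """Hard-decision (Gallager-B-style) message passing on the BSC.
    H: parity-check matrix (m x n, entries 0/1); r: received hard decisions."""
    m, n = H.shape
    # variable-to-check messages, initialised with the channel decisions
    msg_vc = {(i, j): r[j] for i in range(m) for j in range(n) if H[i, j]}
    x = r.copy()
    for _ in range(max_iter):
        if not np.any(H @ x % 2):          # all parity checks satisfied
            return x
        # check -> variable: XOR of the other incoming bits (extrinsic parity)
        msg_cv = {}
        for i in range(m):
            vs = np.flatnonzero(H[i])
            tot = sum(msg_vc[(i, j)] for j in vs) % 2
            for j in vs:
                msg_cv[(i, j)] = (tot - msg_vc[(i, j)]) % 2
        # variable -> check: repeat r[j] unless >= thr extrinsic checks disagree
        for j in range(n):
            cs = np.flatnonzero(H[:, j])
            thr = b if b is not None else max(1, len(cs) - 1)
            for i in cs:
                disagree = sum(msg_cv[(k, j)] != r[j] for k in cs if k != i)
                msg_vc[(i, j)] = 1 - r[j] if disagree >= thr else r[j]
        # tentative decision: flip if a majority of checks disagree with r[j]
        for j in range(n):
            cs = np.flatnonzero(H[:, j])
            disagree = sum(msg_cv[(i, j)] != r[j] for i in cs)
            x[j] = 1 - r[j] if 2 * disagree > len(cs) else r[j]
    return x
```

    The decoder exits early once the syndrome is zero, so an error-free word is returned unchanged.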

  3. A real-time MPEG software decoder using a portable message-passing library

    Energy Technology Data Exchange (ETDEWEB)

    Kwong, Man Kam; Tang, P.T. Peter; Lin, Biquan

    1995-12-31

    We present a real-time MPEG software decoder that uses message-passing libraries such as MPL, p4 and MPI. The parallel MPEG decoder currently runs on the IBM SP system but can be easily ported to other parallel machines. This paper discusses our parallel MPEG decoding algorithm as well as the parallel programming environment in which it runs. Several technical issues are discussed, including balancing of decoding speed, memory limitations, I/O capacities, and optimization of MPEG decoding components. This project shows that a real-time portable software MPEG decoder is feasible on a general-purpose parallel machine.

  4. Message-Passing Algorithms for Channel Estimation and Decoding Using Approximate Inference

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2012-01-01

    We design iterative receiver schemes for a generic communication system by treating channel estimation and information decoding as an inference problem in graphical models. We introduce a recently proposed inference framework that combines belief propagation (BP) and the mean field (MF) approxima...

  5. Entropy Message Passing Algorithm

    CERN Document Server

    Ilic, Velimir M; Todorovic, Branimir T

    2009-01-01

    Message passing over a factor graph can be viewed as a generalization of many well-known algorithms for efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized: the Viterbi algorithm arises from the max-product semiring, and the forward-backward algorithm from the sum-product semiring. In this paper, the Entropy Message Passing (EMP) algorithm is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic model algorithms such as the Expectation-Maximization algorithm, gradient methods and the computation of model entropy, unifying the work of different authors.
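
    The semiring view is easy to make concrete. Below is a minimal sketch of chain-structured elimination parameterized by a commutative semiring: plugging in (+, x) gives forward-algorithm-style marginalization, and (max, x) gives Viterbi-style scoring. The `chain_eliminate` helper and factor values are illustrative assumptions, not the paper's EMP implementation (the entropy semiring itself, whose elements are pairs, is omitted here).

```python
from functools import reduce

def chain_eliminate(factors, states, oplus, otimes, one):
    """Eliminate variables left-to-right on a chain factor graph.
    factors: list of dicts mapping state pairs (a, b) -> factor value.
    oplus/otimes/one: the semiring's sum, product and multiplicative unit."""
    msg = {s: one for s in states}            # message into the first variable
    for f in factors:
        msg = {b: reduce(oplus, (otimes(msg[a], f[a, b]) for a in states))
               for b in states}
    return reduce(oplus, msg.values())        # eliminate the last variable
```

    With the sum-product semiring the call returns the total sum over all paths; with the max-product semiring (and non-negative factors) it returns the best path score.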

  6. Message passing for quantified Boolean formulas

    CERN Document Server

    Zhang, Pan; Zdeborová, Lenka; Zecchina, Riccardo

    2012-01-01

    We introduce two types of message-passing algorithms for quantified Boolean formulas (QBF). The first is a message-passing-based heuristic that can prove unsatisfiability of a QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second, we use message passing to guide the branching heuristics of a Davis-Putnam-Logemann-Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristic gives a robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.

  7. Message Passing Framework for Globally Interconnected Clusters

    Science.gov (United States)

    Hafeez, M.; Asghar, S.; Malik, U. A.; Rehman, A.; Riaz, N.

    2011-12-01

    Prevailing technology trends make it apparent that network requirements and technologies will continue to advance, so the need for a High Performance Computing (HPC) based implementation for interconnecting clusters follows from the demand for cluster scalability. Grid computing provides a global infrastructure of interconnected clusters built from computing resources dispersed over the Internet. On the other hand, the leading model for HPC programming is the Message Passing Interface (MPI). Compared to Grid computing, MPI is better suited for solving most complex computational problems, but MPI itself is restricted to a single cluster: it does not support message passing over the Internet to use the computing resources of different clusters in an optimal way. We propose a model that provides message-passing capabilities between parallel applications over the Internet. The proposed model is based on the Architecture for Java Universal Message Passing (A-JUMP) framework and an Enterprise Service Bus (ESB) named the High Performance Computing Bus. The HPC Bus is built using ActiveMQ and is responsible for communication and message passing in an asynchronous manner. The asynchronous mode of communication offers both an assurance of message delivery and a fault-tolerance mechanism for message passing. The idea presented in this paper effectively utilizes wide-area inter-cluster networks. It also provides scheduling, dynamic resource discovery and allocation, and sub-clustering of resources for different jobs. A performance analysis and a comparison of the proposed framework with P2P-MPI are also presented.

  8. Multilevel Decoders Surpassing Belief Propagation on the Binary Symmetric Channel

    CERN Document Server

    Planjery, Shiva Kumar; Chilappagari, Shashi Kiran; Vasić, Bane

    2010-01-01

    In this paper, we propose a new class of quantized message-passing decoders for LDPC codes over the BSC. The messages take values (or levels) from a finite set. The update rules do not mimic belief propagation but instead are derived using the knowledge of trapping sets. We show that the update rules can be derived to correct certain error patterns that are uncorrectable by algorithms such as BP and min-sum. In some cases even with a small message set, these decoders can guarantee correction of a higher number of errors than BP and min-sum. We provide particularly good 3-bit decoders for 3-left-regular LDPC codes. They significantly outperform the BP and min-sum decoders, but more importantly, they achieve this at only a fraction of the complexity of the BP and min-sum decoders.

  9. List-Message Passing Achieves Capacity on the q-ary Symmetric Channel for Large q

    CERN Document Server

    Zhang, Fan

    2008-01-01

    We discuss and analyze a list-message-passing decoder with verification for low-density parity-check (LDPC) codes on the q-ary symmetric channel (q-SC). Rather than passing messages consisting of symbol probabilities, we pass lists of possible symbols and mark very likely symbols as verified. The density evolution (DE) equations for this decoder are derived and used to compute decoding thresholds. If the maximum list-size is unbounded, then we find that any capacity-achieving LDPC code for the binary erasure channel can be used to achieve capacity on the q-SC for large q. The decoding thresholds are also computed via DE for the case where each list is truncated to satisfy a maximum list-size constraint. We observe that one of the algorithms proposed in [7] is analyzed incorrectly and derive the correct analysis. The probability of false verification (FV) is also considered and techniques are discussed to mitigate the FV. Optimization of the degree distribution is also used to improve the threshold for a fixed...
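
    Density evolution of the kind used in this analysis is simplest on the BEC, where it reduces to a one-dimensional recursion. The following is a minimal sketch assuming the standard edge-perspective recursion x <- eps * lam(1 - rho(1 - x)), not the paper's list-message DE; the (3,6)-regular ensemble in the test is an illustrative choice.

```python
def de_converges(eps, lam, rho, iters=10000, tol=1e-10):
    """Run BEC density evolution and report whether the erasure
    fraction x converges to zero at channel erasure rate eps."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
        if x < tol:
            return True
    return False

def bec_threshold(lam, rho, steps=25):
    """Bisect for the largest erasure rate with vanishing erasure fraction."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, lam, rho):
            lo = mid
        else:
            hi = mid
    return lo
```

    For the (3,6)-regular ensemble, lam(x) = x^2 and rho(x) = x^5, and the recursion gives the well-known threshold of roughly 0.429.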

  10. MAP Estimation, Message Passing, and Perfect Graphs

    CERN Document Server

    Jebara, Tony S

    2012-01-01

    Efficiently finding the maximum a posteriori (MAP) configuration of a graphical model is an important problem which is often implemented using message passing algorithms. The optimality of such algorithms is only well established for singly-connected graphs and other limited settings. This article extends the set of graphs where MAP estimation is in P and where message passing recovers the exact solution to so-called perfect graphs. This result leverages recent progress in defining perfect graphs (the strong perfect graph theorem), linear programming relaxations of MAP estimation and recent convergent message passing schemes. The article converts graphical models into nand Markov random fields which are straightforward to relax into linear programs. Therein, integrality can be established in general by testing for graph perfection. This perfection test is performed efficiently using a polynomial time algorithm. Alternatively, known decomposition tools from perfect graph theory may be used to prove perfection ...
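
    For the singly-connected case where max-product message passing is known to be exact, the algorithm reduces to Viterbi-style dynamic programming. A minimal sketch on a chain-structured model follows; the log-potential arrays are illustrative assumptions, and the general perfect-graph machinery of the article is not reproduced.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Exact MAP on a chain MRF via max-product (log-domain) message passing.
    unary: list of n length-k arrays; pairwise: list of n-1 (k x k) arrays."""
    n = len(unary)
    msg = np.zeros_like(unary[0])          # forward max-message
    back = []
    for t in range(n - 1):
        # scores[a, b]: best score ending with states a at t, b at t+1
        scores = (unary[t] + msg)[:, None] + pairwise[t]
        back.append(scores.argmax(axis=0))  # best predecessor for each b
        msg = scores.max(axis=0)
    states = [int((unary[-1] + msg).argmax())]
    for t in range(n - 2, -1, -1):          # backtrack the argmax pointers
        states.append(int(back[t][states[-1]]))
    return states[::-1]
```
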

  11. Clandestine Message Passing in Virtual Environments

    Science.gov (United States)

    2008-09-01

    Steganography: hiding messages in pictures, audio and packets; placing messages in clothing textures and object files. Bots: automating the logic and control of... Proofs of concept are developed; visual cues, steganography and autonomous bots are examined, as are monitoring techniques. Subject terms: Message Passing, Virtual Environments, Steganography, Second Life, Internet Terrorism, Honeyworld, Sun MPK20, Clandestine Messages.

  12. Polymorphic Endpoint Types for Copyless Message Passing

    Directory of Open Access Journals (Sweden)

    Viviana Bono

    2011-07-01

    We present PolySing#, a calculus that models process interaction based on copyless message passing, in the style of Singularity OS. We equip the calculus with a type system that accommodates polymorphic endpoint types, which are a variant of polymorphic session types, and we show that well-typed processes are free from faults, leaks, and communication errors. The type system is essentially linear, although linearity alone may leave room for scenarios where well-typed processes leak memory. We identify a condition on endpoint types that prevents these leaks from occurring.

  13. Compressive Imaging via Approximate Message Passing

    Science.gov (United States)

    2015-09-04

    [20] uses an adaptive Wiener filter [21] for 2D denoising. Another option is to use a more sophisticated 2D image denoiser, such as BM3D [22], within AMP.

  14. Linear-programming Decoding of Non-binary Linear Codes

    CERN Document Server

    Flanagan, Mark F; Byrne, Eimear; Greferath, Marcus

    2007-01-01

    We develop a framework for linear-programming (LP) decoding of non-binary linear codes over rings. We prove that the resulting LP decoder has the `maximum likelihood certificate' property, and we show that the decoder output is the lowest cost pseudocodeword. Equivalence between pseudocodewords of the linear program and pseudocodewords of graph covers is proved. LP decoding performance is illustrated for the (11,6,5) ternary Golay code with ternary PSK modulation over AWGN, and in this case it is shown that the LP decoder performance is comparable to codeword-error-rate-optimum hard-decision based decoding.
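
    For intuition, the binary analogue of LP decoding can be written directly with an off-the-shelf LP solver: minimize the LLR-weighted cost over the relaxed polytope cut out by the odd-subset parity inequalities. This is a sketch of Feldman-style binary LP decoding, not the paper's non-binary formulation over rings; the use of `scipy.optimize.linprog` and the Hamming-code test case are implementation assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Binary LP decoding: minimise llr . x over the relaxed codeword polytope.
    For each check j and each odd-sized subset V of its neighbourhood N(j):
        sum_{i in V} x_i - sum_{i in N(j)\\V} x_i <= |V| - 1.
    An integral optimum carries a maximum-likelihood certificate."""
    m, n = H.shape
    A, b = [], []
    for row in H:
        Nj = np.flatnonzero(row)
        for r in range(1, len(Nj) + 1, 2):            # odd subset sizes
            for V in itertools.combinations(Nj, r):
                a = np.zeros(n)
                a[list(V)] = 1.0
                a[[i for i in Nj if i not in V]] = -1.0
                A.append(a)
                b.append(len(V) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n)
    return res.x
```

    With all-positive LLRs (the all-zeros codeword most likely), the LP optimum is the integral all-zeros vertex.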

  15. Message Passing for Dynamic Network Energy Management

    CERN Document Server

    Kraning, Matt; Lavaei, Javad; Boyd, Stephen

    2012-01-01

    We consider a network of devices, such as generators, fixed loads, deferrable loads, and storage devices, each with its own dynamic constraints and objective, connected by lossy capacitated lines. The problem is to minimize the total network objective subject to the device and line constraints, over a given time horizon. This is a large optimization problem, with variables for consumption or generation in each time period for each device. In this paper we develop a decentralized method for solving this problem. The method is iterative: At each step, each device exchanges simple messages with its neighbors in the network and then solves its own optimization problem, minimizing its own objective function, augmented by a term determined by the messages it has received. We show that this message passing method converges to a solution when the device objective and constraints are convex. The method is completely decentralized, and needs no global coordination other than synchronizing iterations; the problems to be...
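
    The flavor of such decentralized message exchanges can be illustrated with a toy price-coordination loop (a simple dual-ascent sketch, not the paper's algorithm): each device minimizes its own cost given a bus price, and the price is updated from the resulting power imbalance. The quadratic costs and step size are illustrative assumptions.

```python
def price_coordinate(a1, a2, d, alpha=0.5, iters=300):
    """Two generators with local costs 0.5*a*g^2 must jointly meet a fixed
    load d on one bus. The only 'message' exchanged is the price lam."""
    lam = 0.0
    g1 = g2 = 0.0
    for _ in range(iters):
        g1 = lam / a1                    # device 1: argmin 0.5*a1*g^2 - lam*g
        g2 = lam / a2                    # device 2: argmin 0.5*a2*g^2 - lam*g
        lam += alpha * (d - g1 - g2)     # price rises while demand is unmet
    return g1, g2, lam
```

    At the fixed point the price satisfies lam * (1/a1 + 1/a2) = d, so cheaper devices carry proportionally more of the load.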

  16. Message-Passing Estimation from Quantized Samples

    CERN Document Server

    Kamilov, Ulugbek; Rangan, Sundeep

    2011-01-01

    Estimation of a vector from quantized linear measurements is a common problem for which simple linear techniques are suboptimal, sometimes greatly so. This paper develops generalized approximate message passing (GAMP) algorithms for minimum mean-squared error estimation of a random vector from quantized linear measurements, notably allowing the linear expansion to be overcomplete or undercomplete and the scalar quantization to be regular or non-regular. GAMP is a recently-developed class of algorithms that uses Gaussian approximations in belief propagation and allows arbitrary separable input and output channels. Scalar quantization of measurements is incorporated into the output channel formalism, leading to the first tractable and effective method for high-dimensional estimation problems involving non-regular scalar quantization. Non-regular quantization is empirically demonstrated to greatly improve rate-distortion performance in some problems with oversampling or with undersampling combined with a spar...

  17. Approximate message passing with restricted Boltzmann machine priors

    Science.gov (United States)

    Tramel, Eric W.; Drémeau, Angélique; Krzakala, Florent

    2016-07-01

    Approximate message passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a restricted Boltzmann machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple i.i.d. priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.
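
    As a baseline for what the RBM prior is meant to improve on, a minimal AMP iteration with a simple i.i.d.-sparse (soft-threshold) denoiser can be sketched as follows. The threshold rule, the parameter `alpha`, and the test problem are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft(u, t):
    """Scalar soft-threshold denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_recover(A, y, iters=50, alpha=2.0):
    """AMP for y = A x with a sparse signal and an i.i.d. soft-threshold prior.
    A is assumed to have roughly unit-norm columns (e.g. N(0, 1/m) entries)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = alpha * np.sqrt(np.mean(z ** 2))   # threshold tracks residual energy
        x = soft(x + A.T @ z, tau)               # denoise the pseudo-data
        # residual with the Onsager correction term (denoiser divergence)
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x
```

    The Onsager term is what distinguishes AMP from plain iterative thresholding: it keeps the effective noise at each iteration approximately Gaussian, which is also what lets a learned prior such as an RBM slot in as the denoiser.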

  18. Approximate Message Passing with Restricted Boltzmann Machine Priors

    CERN Document Server

    Tramel, Eric W; Krzakala, Florent

    2015-01-01

    Approximate Message Passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a Restricted Boltzmann Machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple i.i.d. priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.

  19. Min-Max decoding for non-binary LDPC codes

    CERN Document Server

    Savin, Valentin

    2008-01-01

    Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithm, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR-domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity implementation, called the selective implementation, which makes Min-Max decoding very attractive for practical purposes.

  20. Direct Deposit -- When Message Passing Meets Shared Memory

    Science.gov (United States)

    2000-05-19

    by H. Karl in [64]. The paper implements the pure DSM code, the pure message-passing code and a few intermediate forms on the Charlotte DSM system [8].

  1. Data Handover: Reconciling Message Passing and Shared Memory

    OpenAIRE

    Gustedt, Jens

    2004-01-01

    Data Handover (DHO) is a programming paradigm and interface that aims to handle data between parallel or distributed processes that mixes aspects of message passing and shared memory. It is designed to overcome the potential problems in terms of efficiency of both: (1) memory blowup and forced copies for message passing and (2) data consistency and latency problems for shared memory. Our approach attempts to be simple and easy to understand. It content...

  2. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
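
    The check-node side of min-sum style message passing, which the simplified algorithms above build on, is a small computation: each outgoing message is the product of the other incoming signs times the minimum of the other incoming magnitudes. A hedged sketch follows; the normalization factor `alpha` is an assumption, not the article's exact design.

```python
import numpy as np

def check_node_min_sum(llrs_in, alpha=0.75):
    """Normalised min-sum check-node update for one check node.
    llrs_in: incoming variable-to-check LLRs (assumed nonzero).
    Only the two smallest magnitudes are ever needed per node."""
    llrs_in = np.asarray(llrs_in, dtype=float)
    signs = np.sign(llrs_in)
    mags = np.abs(llrs_in)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    out = np.empty_like(mags)
    for k in range(len(mags)):
        # extrinsic minimum: exclude edge k's own magnitude
        ext_min = m2 if k == order[0] else m1
        # total_sign * signs[k] equals the product of the *other* signs
        out[k] = alpha * (total_sign * signs[k]) * ext_min
    return out
```

    Needing only two magnitudes and a running sign per check node is precisely what makes min-sum variants attractive in hardware compared with the full BP tanh-rule.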

  3. Future-based Static Analysis of Message Passing Programs

    Directory of Open Access Journals (Sweden)

    Wytse Oortwijn

    2016-06-01

    Message passing is widely used in industry to develop programs consisting of several distributed communicating components. Developing functionally correct message passing software is very challenging due to the concurrent nature of message exchanges. Nonetheless, many safety-critical applications rely on the message passing paradigm, including air traffic control systems and emergency services, which makes proving their correctness crucial. We focus on the modular verification of MPI programs by statically verifying concrete Java code. We use separation logic to reason about local correctness and define abstractions of the communication protocol in the process algebra used by mCRL2. We call these abstractions futures as they predict how components will interact during program execution. We establish a provable link between futures and program code and analyse the abstract futures via model checking to prove global correctness. Finally, we verify a leader election protocol to demonstrate our approach.

  4. Message-Passing Interface - Selected Topics and Best Practices

    OpenAIRE

    Janetzko, Florian

    2013-01-01

    The Message-Passing Interface (MPI) is a widely-used standard library for programming parallel applications using the distributed-memory model. In the first part of this talk we give a short overview of the concepts and properties of modern HPC hardware architectures as well as basic programming concepts. A brief introduction to a design strategy for parallel algorithms is presented. The focus of the second part is on the Message-Passing Interface. After a short overview of general...

  5. Message Passing on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Puffitsch, Wolfgang; Schoeberl, Martin

    2015-01-01

    Real-time systems need time-predictable computing platforms. For a multicore processor to be time-predictable, communication between processor cores needs to be time-predictable as well. This paper presents a time-predictable message-passing library for such a platform. We show how to build up...

  6. Protocol-Based Verification of Message-Passing Parallel Programs

    DEFF Research Database (Denmark)

    López-Acosta, Hugo-Andrés; Marques, Eduardo R. B.; Martins, Francisco

    2015-01-01

    a protocol language based on a dependent type system for message-passing parallel programs, which includes various communication operators, such as point-to-point messages, broadcast, reduce, array scatter and gather. For the verification of a program against a given protocol, the protocol is first...

  7. Binary Linear-Time Erasure Decoding for Non-Binary LDPC codes

    CERN Document Server

    Savin, Valentin

    2009-01-01

    In this paper, we first introduce the extended binary representation of non-binary codes, which corresponds to a covering graph of the bipartite graph associated with the non-binary code. Then we show that non-binary codewords correspond to binary codewords of the extended representation that further satisfy a simplex constraint: bits lying over the same symbol node of the non-binary graph must form a codeword of a simplex code. Applied to the binary erasure channel, this description leads to a binary erasure decoding algorithm for non-binary LDPC codes whose complexity depends linearly on the cardinality of the alphabet. We also give insights into the structure of stopping sets for non-binary LDPC codes, and discuss several aspects related to upper-layer FEC applications.
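
    On the BEC, iterative erasure decoding in the binary case reduces to peeling: any check with exactly one erased bit determines that bit by parity. The following is a minimal sketch of this binary special case only; the paper's non-binary extended-representation decoder is not reproduced, and the test matrix is an illustrative choice.

```python
import numpy as np

def peel_bec(H, y):
    """Peeling decoder for the binary erasure channel.
    H: parity-check matrix (0/1); y: received word with -1 marking erasures.
    Repeatedly solve any check containing exactly one erased bit."""
    y = np.array(y)
    H = np.asarray(H)
    progress = True
    while progress and (y == -1).any():
        progress = False
        for row in H:
            vs = np.flatnonzero(row)
            erased = [j for j in vs if y[j] == -1]
            if len(erased) == 1:
                known = [j for j in vs if y[j] != -1]
                y[erased[0]] = sum(int(y[j]) for j in known) % 2
                progress = True
    return y   # any remaining -1 entries form a stopping set
```

    Decoding stalls exactly when the remaining erasures form a stopping set, the structure whose non-binary analogue the paper analyses.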

  8. Polytope Representations for Linear-Programming Decoding of Non-Binary Linear Codes

    CERN Document Server

    Skachek, Vitaly; Byrne, Eimear; Greferath, Marcus

    2007-01-01

    In previous work, we demonstrated how decoding of a non-binary linear code could be formulated as a linear-programming problem. In this paper, we study different polytopes for use with linear-programming decoding, and show that for many classes of codes these polytopes yield a complexity advantage for decoding. These representations lead to polynomial-time decoders for a wide variety of classical non-binary linear codes.

  9. Denoising Message Passing for X-ray Computed Tomography Reconstruction

    CERN Document Server

    Perelli, Alessandro; Can, Ali; Davies, Mike E

    2016-01-01

    X-ray Computed Tomography (CT) reconstruction from a sparse number of views is becoming a powerful way to reduce either the radiation dose or the acquisition time in CT systems, but it still requires substantial computation time. This paper introduces an approximate Bayesian inference framework for CT reconstruction based on a family of denoising approximate message passing (DCT-AMP) algorithms able to improve both the convergence speed and the reconstruction quality. Approximate message passing for compressed sensing has been extensively analysed for random linear measurements, but it remains unclear how AMP should be modified and how it performs on real-world problems. In particular, to overcome the convergence issues of DCT-AMP with structured measurement matrices, we propose a disjoint preconditioned version of the algorithm tailored for both the geometric system model and the noise model. In addition, the Bayesian DCT-AMP formulation makes it possible to measure how close the current estimate is to the pr...

  10. Compressive Imaging via Approximate Message Passing with Image Denoising

    OpenAIRE

    Tan, Jin; Ma, Yanting; Baron, Dror

    2014-01-01

    We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reco...

  11. Feedback Message Passing for Inference in Gaussian Graphical Models

    CERN Document Server

    Liu, Ying; Anandkumar, Animashree; Willsky, Alan S

    2011-01-01

    While loopy belief propagation (LBP) performs reasonably well for inference in some Gaussian graphical models with cycles, its performance is unsatisfactory for many others. In particular, for some models LBP does not converge, and in general when it does converge, the computed variances are incorrect (except for cycle-free graphs, for which belief propagation (BP) is non-iterative and exact). In this paper we propose feedback message passing (FMP), a message-passing algorithm that makes use of a special set of vertices (called a feedback vertex set, or FVS) whose removal results in a cycle-free graph. In FMP, standard BP is employed several times on the cycle-free subgraph excluding the FVS, while a special message-passing scheme is used for the nodes in the FVS. The computational complexity of exact inference is O(k^2 n), where k is the number of feedback nodes and n is the total number of nodes. When the size of the FVS is very large, FMP is intractable. Hence we propose approximat...
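
    The building block that FMP runs on the cycle-free part is standard Gaussian BP in information form, which is exact on trees. A minimal sketch with scalar variables and synchronous updates follows; FMP's special message scheme for the FVS nodes is not reproduced, and the chain-structured test model is an illustrative assumption.

```python
import numpy as np

def gaussian_bp_tree(J, h, iters=None):
    """Exact marginal means and variances of N^{-1}(h, J) on a tree.
    J: symmetric precision matrix whose sparsity pattern is a tree; h: potential.
    Messages in information form: dJ_{i->j} = -J_ij^2 / Jhat_{i\\j},
    dh_{i->j} = -J_ij * hhat_{i\\j} / Jhat_{i\\j}."""
    n = len(h)
    iters = iters or n                      # >= tree diameter suffices
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0]
    Jm = {e: 0.0 for e in edges}            # message precisions
    hm = {e: 0.0 for e in edges}            # message potentials
    for _ in range(iters):
        Jm_new, hm_new = {}, {}
        for (i, j) in edges:
            Jhat = J[i, i] + sum(Jm[k, i] for (k, t) in edges if t == i and k != j)
            hhat = h[i] + sum(hm[k, i] for (k, t) in edges if t == i and k != j)
            Jm_new[i, j] = -J[i, j] ** 2 / Jhat
            hm_new[i, j] = -J[i, j] * hhat / Jhat
        Jm, hm = Jm_new, hm_new
    means, variances = np.empty(n), np.empty(n)
    for i in range(n):
        Jhat = J[i, i] + sum(Jm[k, i] for (k, t) in edges if t == i)
        hhat = h[i] + sum(hm[k, i] for (k, t) in edges if t == i)
        variances[i] = 1.0 / Jhat
        means[i] = hhat / Jhat
    return means, variances
```

    On a tree both the means and the variances match direct matrix inversion; on loopy graphs (the case FMP addresses) the variances computed this way would be wrong.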

  12. An Improved Three-Weight Message-Passing Algorithm

    CERN Document Server

    Derbinsky, Nate; Elser, Veit; Yedidia, Jonathan S

    2013-01-01

    We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and introduce an improved message-passing algorithm based on ADMM/DC by introducing three distinct weights for messages, with "certain" and "no opinion" weights, as well as the standard weight used in ADMM/DC. The "certain" messages allow our improved algorithm to implement constraint propagation as a special case, while the "no opinion" messages speed convergence for some problems by making the algorithm focus only on active constraints. We describe how our three-weight version of ADMM/DC can give greatly improved performance for non-convex problems such as circle packing and solving large Sudoku puzzles, while retaining the exact performance of ADMM for convex problems. We also describe the advantages of our algorithm compared to other message-passing algorithms based u...

  13. Efficient Network for Non-Binary QC-LDPC Decoder

    CERN Document Server

    Zhang, Chuan

    2011-01-01

    This paper presents approaches to developing an efficient network for non-binary quasi-cyclic LDPC (QC-LDPC) decoders. By exploiting the intrinsic shifting and symmetry properties of the check matrices, significant reductions in memory size and routing complexity can be achieved. Two efficient network architectures, for Class-I and Class-II non-binary QC-LDPC decoders respectively, are proposed. Comparison results show that for the 64-ary (1260, 630) rate-0.5 Class-I code, the proposed scheme saves more than 70.6% of the shuffle-network hardware required by state-of-the-art designs. The proposed decoder example for the 32-ary (992, 496) rate-0.5 Class-II code achieves a 93.8% shuffle-network reduction compared with conventional designs. Meanwhile, based on the similarity of Class-I and Class-II codes, a similar shuffle network is further developed to incorporate both classes of codes at very low cost.

  14. Message Passing Algorithms for Compressed Sensing: I. Motivation and Construction

    CERN Document Server

    Donoho, David L; Montanari, Andrea

    2009-01-01

    In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [DMM]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.

  15. Fault-tolerant Agreement in Synchronous Message-passing Systems

    CERN Document Server

    Raynal, Michel

    2010-01-01

    The present book focuses on the way to cope with the uncertainty created by process failures (crash, omission failures and Byzantine behavior) in synchronous message-passing systems (i.e., systems whose progress is governed by the passage of time). To that end, the book considers fundamental problems that distributed synchronous processes have to solve. These fundamental problems concern agreement among processes (if processes are unable to agree in one way or another in presence of failures, no non-trivial problem can be solved). They are consensus, interactive consistency, k-set agreement an

  16. Context adaptive binary arithmetic decoding on transport triggered architectures

    Science.gov (United States)

    Rouvinen, Joona; Jääskeläinen, Pekka; Rintaluoma, Tero; Silvén, Olli; Takala, Jarmo

    2008-02-01

    Video coding standards such as MPEG-4, H.264 and VC-1 define hybrid transform-based, block motion compensated techniques that employ almost the same coding tools. This observation has been a foundation for defining the MPEG Reconfigurable Multimedia Coding framework, which aims to facilitate multi-format codec design. The idea is to send a description of the codec with the bit stream, and to reconfigure the coding tools accordingly on-the-fly. This kind of approach favors software solutions, and is a substantial challenge for the implementers of mobile multimedia devices that aim at high energy efficiency. In particular, as high-definition formats come to be required of mobile multimedia devices, variable-length decoders are becoming a serious bottleneck. Even at current moderate mobile video bitrates, software-based variable-length decoders swallow a major portion of the resources of a mobile processor. In this paper we present a Transport Triggered Architecture (TTA) based programmable implementation of Context Adaptive Binary Arithmetic Decoding (CABAC), which is used e.g. in the main profile of H.264 and in JPEG2000. The solution can be used even for other variable-length codes.

  16. IMP: A Message-Passing Algorithm for Matrix Completion

    CERN Document Server

    Kim, Byung-Hak; Pfister, Henry D

    2010-01-01

    A new message-passing (MP) method is considered for the matrix completion problem associated with recommender systems. We attack the problem using a (generative) factor graph model that is related to a probabilistic low-rank matrix factorization. Based on the model, we propose a new algorithm, termed IMP, for the recovery of a data matrix from incomplete observations. The algorithm is based on a clustering followed by inference via MP (IMP). The algorithm is compared with a number of other matrix completion algorithms on real collaborative filtering (e.g., Netflix) data matrices. Our results show that, while many methods perform similarly with a large number of revealed entries, the IMP algorithm outperforms all others when the fraction of observed entries is small. This is helpful because it reduces the well-known cold-start problem associated with collaborative filtering (CF) systems in practice.

  18. MPI-2: Extending the Message-Passing Interface

    Energy Technology Data Exchange (ETDEWEB)

    Geist, A. [Oak Ridge National Lab., TN (United States)]; Gropp, W.; Lusk, E. [Argonne National Lab., IL (United States)]; Huss-Lederman, S. [Argonne National Lab., IL (United States); Wisconsin Univ., Madison, WI (United States)]; Lumsdaine, A. [Notre Dame Univ., IN (United States)]; Saphir, W. [NAS (United States)]; Skjellum, T. [Mississippi State Univ., MS (United States)]; Snir, M. [IBM Corp. (United States)]

    1996-10-01

    This paper describes current activities of the MPI-2 Forum. The MPI-2 Forum is a group of parallel computer vendors, library writers, and application specialists working together to define a set of extensions to MPI (Message Passing Interface). MPI was defined by the same process and now has many implementations, both vendor-proprietary and publicly available, for a wide variety of parallel computing environments. In this paper we present the salient aspects of the evolving MPI-2 document as it now stands. We discuss proposed extensions and enhancements to MPI in the areas of dynamic process management, one-sided operations, collective operations, new language bindings, real-time computing, external interfaces, and miscellaneous topics.

  19. Incremental learning by message passing in hierarchical temporal memory.

    Science.gov (United States)

    Rehn, Erik M; Maltoni, Davide

    2014-08-01

    Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed, the network structure is frozen, thus making further training (i.e., incremental learning) quite critical. In this letter, we develop a novel technique for HTM (incremental) supervised learning based on gradient descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach composed of unsupervised pretraining and supervised refinement is very effective (both accurate and efficient). This is in line with recent findings on other deep architectures.

  20. Parallelization of a hydrological model using the message passing interface

    Science.gov (United States)

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time decreases with an increasing number of processes (from two to five), this enhancement diminishes due to the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can substantially reduce the computation time for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.

  1. A decoding method of an n-length binary BCH code through an (n + 1)n-length binary cyclic code

    Directory of Open Access Journals (Sweden)

    TARIQ SHAH

    2013-09-01

    For a given binary BCH code Cn of length n = 2^s − 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a higher code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.
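    As background for the cyclic-code machinery the abstract relies on, here is a minimal sketch of systematic encoding for a binary cyclic code via polynomial division over GF(2). The (7,4) generator g(x) = x^3 + x + 1 is a textbook example chosen for illustration; it is not the paper's (n + 1)n construction:

    ```python
    # Systematic encoding for a binary cyclic code: append the remainder of
    # m(x) * x^r divided by the generator g(x), working over GF(2).
    # Generator here: g(x) = x^3 + x + 1, the (7,4) cyclic Hamming code
    # (an illustrative stand-in, not the paper's BCH embedding).

    def gf2_remainder(dividend, divisor):
        """Bit lists, most significant coefficient first; long division via XOR."""
        rem = dividend[:]
        for i in range(len(dividend) - len(divisor) + 1):
            if rem[i]:
                for j, d in enumerate(divisor):
                    rem[i + j] ^= d
        return rem[-(len(divisor) - 1):]

    def encode_cyclic(msg, gen):
        shifted = msg + [0] * (len(gen) - 1)   # m(x) * x^r
        return msg + gf2_remainder(shifted, gen)

    g = [1, 0, 1, 1]                           # x^3 + x + 1
    cw = encode_cyclic([1, 1, 0, 1], g)
    # Every valid codeword is divisible by g(x):
    assert all(b == 0 for b in gf2_remainder(cw, g))
    ```

    The same divisibility property is what lets a decoder detect (and, with enough structure, correct) errors by examining the remainder of the received word.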

  2. Intel NX to PVM 3.2 message passing conversion library

    Science.gov (United States)

    Arthur, Trey; Nelson, Michael L.

    1993-01-01

    NASA Langley Research Center has developed a library that allows Intel NX message passing codes to be executed under the more popular and widely supported Parallel Virtual Machine (PVM) message passing library. PVM was developed at Oak Ridge National Labs and has become the de facto standard for message passing. This library will allow the many programs that were developed on the Intel iPSC/860 or Intel Paragon in a Single Program Multiple Data (SPMD) design to be ported to the numerous architectures that PVM (version 3.2) supports. Also, the library adds a global operations capability to PVM. A familiarity with Intel NX and PVM message passing is assumed.

  3. Message passing performance of Intel Paragon, IBM SP1 and Cray T3D using PVM

    Energy Technology Data Exchange (ETDEWEB)

    Manke, J.W.; Patterson, J.C. [Boeing Computer Services, Seattle, WA (United States)

    1995-12-01

    For distributed applications on MPP machines, message passing performance is a critical factor in the overall speed and scalability of the application. This is particularly true when all-to-all communication between nodes is required. In this study we measured the message passing performance of our PVM implementation of an all-to-all communication method called recursive doubling. The measurements were done on an Intel Paragon, an IBM SP1, and a Cray T3D. Using a model of message passing times for recursive doubling, we developed several measures that can be used to compare the message passing performance of MPP machines.
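    The recursive doubling pattern measured in the study can be sketched in a few lines. The simulation below models only the exchange schedule (in round r, rank i swaps its accumulated buffer with rank i XOR 2^r), not actual PVM transfers or timings:

    ```python
    import math

    # Simulated recursive doubling among p = 2^k ranks: after log2(p) rounds of
    # pairwise exchanges, every rank holds the data block of every other rank.
    # This models the communication pattern only, not real message transfers.

    def recursive_doubling(p):
        data = [{rank} for rank in range(p)]       # each rank starts with its own block
        for r in range(int(math.log2(p))):
            snapshot = data
            data = [snapshot[rank] | snapshot[rank ^ (1 << r)]   # swap with partner
                    for rank in range(p)]
        return data

    result = recursive_doubling(8)
    assert all(d == set(range(8)) for d in result)   # every rank has every block
    ```

    Each rank sends and receives only log2(p) messages, which is why the pattern is attractive for all-to-all style collectives on MPP machines.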

  4. Containing epidemic outbreaks by message-passing techniques

    CERN Document Server

    Altarelli, F; Dall'Asta, L; Wakeling, J R; Zecchina, R

    2013-01-01

    The problem of targeted network immunization can be defined as that of finding a subset of nodes in a network to immunize or vaccinate in order to minimize a tradeoff between the cost of vaccination and the final (stationary) expected infection under a given epidemic model. Although computing the expected infection is a hard computational problem, simple and efficient mean-field approximations have been put forward in the literature in recent years. The optimization problem can be recast into a constrained one in which the constraints enforce local mean-field equations describing the average stationary state of the epidemic process. For a wide class of epidemic models, including the susceptible-infected-removed and the susceptible-infected-susceptible models, we define a message-passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes as well as to find (nearly) optimal immunization sets for a given choice of parame...

  5. A message passing approach for general epidemic models

    CERN Document Server

    Karrer, Brian

    2010-01-01

    In most models of the spread of disease over contact networks it is assumed that the probabilities of disease transmission and recovery from disease are constant in time. In real life, however, this is far from true. In many diseases, for instance, recovery occurs at about the same time after infection for all individuals, rather than at a constant rate. In this paper, we study a generalized version of the SIR (susceptible-infected-recovered) model of epidemic disease that allows for arbitrary nonuniform distributions of transmission and recovery times. Standard differential equation approaches cannot be used for this generalized model, but we show that the problem can be reformulated as a time-dependent message passing calculation on the appropriate contact network. The calculation is exact on trees (i.e., loopless networks) or locally tree-like networks (such as random graphs) in the large system size limit. On non-tree-like networks we show that the calculation gives a rigorous bound on the size of disease...
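    A minimal sketch of the final-state flavor of such a message passing calculation, assuming a constant per-edge transmission probability T and seed probability rho. This constant-probability simplification does not capture the paper's arbitrary transmission and recovery time distributions; it only shows the cavity-style recursion that is exact on trees:

    ```python
    # Final-state message passing for SIR on a tree. H[(i, j)] is the probability
    # that j never transmits infection to i, under the simplifying assumptions
    # that each node is a seed with probability rho and each edge transmits with
    # probability T. On a tree the fixed point is exact:
    #   H[i<-j] = 1 - T + T * (1 - rho) * prod_{k in N(j), k != i} H[j<-k]

    def prod(xs):
        p = 1.0
        for x in xs:
            p *= x
        return p

    def sir_uninfected_probs(edges, n, rho, T, iters=50):
        nbrs = [[] for _ in range(n)]
        for a, b in edges:
            nbrs[a].append(b)
            nbrs[b].append(a)
        H = {(i, j): 1.0 for i in range(n) for j in nbrs[i]}
        for _ in range(iters):                    # iterate messages to the fixed point
            H = {(i, j): 1 - T + T * (1 - rho) *
                         prod(H[(j, k)] for k in nbrs[j] if k != i)
                 for (i, j) in H}
        return [(1 - rho) * prod(H[(i, j)] for j in nbrs[i]) for i in range(n)]

    # Two-node sanity check: P(node 0 never infected) = (1 - rho) * (1 - rho * T)
    p = sir_uninfected_probs([(0, 1)], n=2, rho=0.1, T=0.5)
    assert abs(p[0] - 0.9 * (1 - 0.1 * 0.5)) < 1e-9
    ```

    On loopy networks the same recursion yields the rigorous bound mentioned in the abstract rather than an exact answer.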

  6. Characterizing Computation-Communication Overlap in Message-Passing Systems

    Energy Technology Data Exchange (ETDEWEB)

    David E. Bernholdt; Jarek Nieplocha; P. Sadayappan; Aniruddha G. Shet; Vinod Tipparaju

    2008-01-31

    Effective overlap of computation and communication is a well understood technique for latency hiding and can yield significant performance gains for applications on high-end computers. In this report, we describe an instrumentation framework developed for message-passing systems to characterize the degree of overlap of communication with computation in the execution of parallel applications. The inability to obtain precise time-stamps for pertinent communication events is a significant problem, and is addressed by generation of minimum and maximum bounds on achieved overlap. The overlap measures can aid application developers and system designers in investigating scalability issues. The approach has been used to instrument two MPI implementations as well as the ARMCI system. The implementation resides entirely within the communication library and thus integrates well with existing approaches that operate outside the library. The utility of the framework is demonstrated by analyzing communication-computation overlap for micro-benchmarks and the NAS benchmarks, and the insights obtained are used to modify the NAS SP benchmark, resulting in improved overlap.

  7. Performance Evaluation Tools for Message-Passing Parallel Problems

    Directory of Open Access Journals (Sweden)

    Wlodzimierz Funika

    1999-01-01

    The article presents a number of issues in designing and implementing performance evaluation tools for message-passing parallel programs, e.g. MPI and PVM. There is a number of special techniques for investigating parallel programs, whose implementations as tools are presented. A concept of performance observability is introduced. Although a number of interesting performance-tool solutions were developed in the course of the last decade, there is a great demand for portable and integrated tools. Understanding the reason for this situation requires evaluating the state of the art of existing tools, their advantages and drawbacks. Due to the complicated mechanism of interactions between a tool and the operating system, computer architecture and application, evaluating a tool requires taking into account a large number of features. A set of criteria is introduced which enables a thorough evaluation of tools, based on the work of HPC standardization organizations as well as the author's own work. The second part of the article reviews the features of particular tools developed over the last decade, evaluated against the criteria introduced. The features of the PARNAS performance tool project and its implementation are presented. The summary outlines further avenues of inquiry in parallel performance evaluation tools.

  8. Message-Passing Inference on a Factor Graph for Collaborative Filtering

    CERN Document Server

    Kim, Byung-Hak; Pfister, Henry D

    2010-01-01

    This paper introduces a novel message-passing (MP) framework for the collaborative filtering (CF) problem associated with recommender systems. We model the movie-rating prediction problem popularized by the Netflix Prize, using a probabilistic factor graph model, and study the model by deriving generalization error bounds in terms of the training error. Based on the model, we develop a new MP algorithm, termed IMP, for learning the model. To show the superiority of the IMP algorithm, we compare it with the closely related expectation-maximization (EM) based algorithm and a number of other matrix completion algorithms. Our simulation results on Netflix data show that, while the methods perform similarly with large amounts of data, the IMP algorithm is superior for small amounts of data. This mitigates the cold-start problem of CF systems in practice. Another advantage of the IMP algorithm is that it can be analyzed using the technique of density evolution (DE) that was originally developed for MP decoding of err...

  9. Convergent and Correct Message Passing Schemes for Optimization Problems over Graphical Models

    CERN Document Server

    Ruozzi, Nicholas

    2010-01-01

    The max-product algorithm, which attempts to compute the most probable assignment (MAP) of a given probability distribution, has recently found applications in quadratic minimization and combinatorial optimization. Unfortunately, the max-product algorithm is not guaranteed to converge and, even if it does, is not guaranteed to produce the MAP assignment. In this work, we provide a simple derivation of a new family of message passing algorithms. We first show how to arrive at this general message passing scheme by "splitting" the factors of our graphical model and then we demonstrate that this construction can be extended beyond integral splitting. We prove that, for any objective function which attains its maximum value over its domain, this new family of message passing algorithms always contains a message passing scheme that guarantees correctness upon convergence to a unique estimate. We then adopt a serial message passing schedule and prove that, under mild assumptions, such a schedule guarantees the conv...
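    For readers new to max-product message passing, the sketch below runs it on a two-state chain, where (as on any tree) it is exact and must agree with brute-force enumeration. The potentials are arbitrary illustrative values, not from the paper, and the splitting construction described above is not implemented here:

    ```python
    import itertools

    # Max-product message passing on a small two-state chain MRF. A chain is a
    # tree, so max-product recovers the exact MAP assignment.

    node_pot = [[1.0, 2.0], [1.5, 0.5], [0.2, 2.5]]   # phi_i(x_i)
    edge_pot = [[[1.0, 0.3], [0.3, 1.0]],             # psi_01(x_0, x_1)
                [[0.5, 2.0], [1.0, 0.1]]]             # psi_12(x_1, x_2)

    def score(x):
        s = 1.0
        for i, xi in enumerate(x):
            s *= node_pot[i][xi]
        for i in range(len(x) - 1):
            s *= edge_pot[i][x[i]][x[i + 1]]
        return s

    def max_product():
        n = len(node_pot)
        m = [1.0, 1.0]                # forward max-messages
        back = []                     # argmax pointers for backtracking
        for i in range(1, n):
            new, arg = [], []
            for x in range(2):
                cand = [m[y] * node_pot[i - 1][y] * edge_pot[i - 1][y][x]
                        for y in range(2)]
                best = max(cand)
                new.append(best)
                arg.append(cand.index(best))
            m = new
            back.append(arg)
        x_last = max(range(2), key=lambda x: m[x] * node_pot[n - 1][x])
        assignment = [x_last]
        for arg in reversed(back):
            assignment.append(arg[assignment[-1]])
        return list(reversed(assignment))

    best = max(itertools.product(range(2), repeat=3), key=score)
    assert max_product() == list(best)   # both find x = (0, 0, 1) here
    ```

    On loopy graphs this guarantee disappears, which is exactly the gap the convergent schemes in the abstract aim to close.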

  10. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  11. High-Performance Message Passing over generic Ethernet Hardware with Open-MX

    OpenAIRE

    Goglin, Brice

    2011-01-01

    In the last decade, cluster computing has become the most popular high-performance computing architecture. Although numerous technological innovations have been proposed to improve the interconnection of nodes, many clusters still rely on commodity Ethernet hardware to implement message passing within parallel applications. We present Open-MX, an open-source message passing stack over generic Ethernet. It offers the same abilities as the specialized Myrinet Express sta...

  12. Message Passing Based Time Synchronization in Wireless Sensor Networks: A Survey

    OpenAIRE

    2016-01-01

    Various protocols have been proposed in the area of wireless sensor networks in order to achieve network-wide time synchronization. A large number of proposed protocols in the literature employ a message passing mechanism to make sensor node clocks tick in unison. In this paper, we first classify Message Passing based Time Synchronization (MPTS) protocols and then analyze them based on different metrics. The classification is based on the following three criteria: structure formation of the n...
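    Many MPTS protocols build on the classic two-way timestamp exchange. The sketch below shows the standard offset estimate under the usual symmetric-delay assumption; the scenario values are invented for illustration:

    ```python
    # Two-way message exchange used by many sender-receiver synchronization
    # protocols: four timestamps give
    #   offset = ((t2 - t1) + (t3 - t4)) / 2
    # assuming the one-way link delay is the same in both directions.

    def estimate_offset(t1, t2, t3, t4):
        """t1, t4 taken on the local clock; t2, t3 on the remote clock."""
        return ((t2 - t1) + (t3 - t4)) / 2

    # Simulated exchange: remote clock runs 5.0 time units ahead, one-way delay 0.3.
    offset, delay = 5.0, 0.3
    t1 = 100.0                      # local send time
    t2 = t1 + delay + offset        # remote receive time (remote clock)
    t3 = t2 + 0.1                   # remote reply time (remote clock)
    t4 = (t3 - offset) + delay      # local receive time (local clock)
    assert abs(estimate_offset(t1, t2, t3, t4) - offset) < 1e-9
    ```

    Asymmetric delays bias this estimate, which is one of the error sources the surveyed protocols trade off against message overhead.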

  13. Verification of Faulty Message Passing Systems with Continuous State Space in PVS

    Science.gov (United States)

    Pilotto, Concetta; White, Jerome

    2010-01-01

    We present a library of Prototype Verification System (PVS) meta-theories that verifies a class of distributed systems in which agent communication is through message-passing. The theoretical work, outlined in, consists of iterative schemes for solving systems of linear equations, such as message-passing extensions of the Gauss and Gauss-Seidel methods. We briefly review that work and discuss the challenges in formally verifying it.
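    The iterative schemes mentioned can be pictured with a plain Jacobi iteration, in which each agent owns one unknown, repeatedly "sends" its current value to its neighbours, and updates from the values it receives. This is only a sketch of the style of scheme the PVS meta-theories address, not the verified development itself:

    ```python
    # Jacobi iteration for A x = b viewed as message passing: agent i updates
    #   x_i <- (b_i - sum_{j != i} A[i][j] * x_j) / A[i][i]
    # using only values received from the other agents. The iteration converges
    # for strictly diagonally dominant A.

    def jacobi(A, b, iters=100):
        n = len(b)
        x = [0.0] * n
        for _ in range(iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [9.0, 5.0]                   # exact solution: x = (2, 1)
    x = jacobi(A, b)
    assert abs(x[0] - 2.0) < 1e-6 and abs(x[1] - 1.0) < 1e-6
    ```

    The formal-verification challenge comes from the asynchronous, faulty message delivery the abstract mentions, which this synchronous sketch sidesteps.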

  14. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    Science.gov (United States)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes have an impact on the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff between rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed model provides good estimation results for a variety of bit streams. Practical applications of this complexity model are also discussed.

  15. Approximation of DAC Codeword Distribution for Equiprobable Binary Sources along Proper Decoding Paths

    CERN Document Server

    Fang, Yong

    2010-01-01

    Distributed Arithmetic Coding (DAC) is an effective implementation of Slepian-Wolf coding, especially for short data blocks. To study its properties, the concept of the DAC codeword distribution along proper and wrong decoding paths has been introduced. For the DAC codeword distribution of equiprobable binary sources along proper decoding paths, the problem was formulated as solving a system of functional equations. However, up to now, a closed form has been obtained only at rate 0.5, while in general finding the closed form of the DAC codeword distribution remains very difficult. This paper proposes three kinds of approximation methods for the DAC codeword distribution of equiprobable binary sources along proper decoding paths: numeric approximation, polynomial approximation, and Gaussian approximation. First, as a general approach, a numeric method is iterated to find an approximation to the DAC codeword distribution. Second, at rates lower than 0.5, the DAC codeword distribution can be well approximated by...

  16. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde;

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  17. Decoding Complexity of Irregular LDGM-LDPC Codes Over the BISOM Channels

    CERN Document Server

    Raina, Manik

    2010-01-01

    An irregular LDGM-LDPC code is studied as a sub-code of an LDPC code with some randomly punctured output bits. It is shown that the LDGM-LDPC codes achieve rates arbitrarily close to the channel capacity of the binary-input symmetric-output memoryless (BISOM) channel with bounded complexity. The measure of complexity is the average degree (per information bit) of the check nodes in the factor graph of the code. A lower bound on the average degree of the check nodes of the irregular LDGM-LDPC codes is obtained. The bound does not depend on the decoder used at the receiver. The stability condition for decoding the irregular LDGM-LDPC codes over the binary erasure channel (BEC) under iterative decoding with message passing is described.
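    Iterative message-passing decoding over the BEC, as referenced in the stability analysis, reduces to the peeling procedure sketched below. The (7,4) Hamming parity-check matrix is an illustrative stand-in for an LDGM-LDPC factor graph, not the codes studied in the paper:

    ```python
    # Peeling (message-passing) decoder over the binary erasure channel: any
    # parity check with exactly one erased participant resolves that bit as the
    # XOR of its known participants, since every check must sum to 0 mod 2.

    H = [[1, 1, 1, 0, 1, 0, 0],
         [1, 1, 0, 1, 0, 1, 0],
         [1, 0, 1, 1, 0, 0, 1]]

    def peel(H, received):
        """received: list of 0/1, with None marking an erasure."""
        word = received[:]
        progress = True
        while progress:
            progress = False
            for row in H:
                unknown = [i for i, h in enumerate(row) if h and word[i] is None]
                if len(unknown) == 1:
                    xor = 0
                    for i, h in enumerate(row):
                        if h and word[i] is not None:
                            xor ^= word[i]
                    word[unknown[0]] = xor
                    progress = True
        return word

    codeword = [1, 0, 1, 1, 0, 0, 1]            # satisfies all rows of H
    assert peel(H, [1, None, 1, 1, None, 0, 1]) == codeword
    ```

    Decoding stalls when every check touches two or more erasures, i.e., when the erased positions contain a stopping set; stability conditions characterize when this becomes vanishingly unlikely.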

  18. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    CERN Document Server

    Sassatelli, Lucile

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-binary parts in their factor graph representation. The class of Hybrid LDPC codes is obviously larger than existing types of codes, which gives more degrees of freedom to find good codes where the existing codes show their limits. We give two examples where hybrid LDPC codes show their interest.

  19. MPWide: a light-weight library for efficient message passing over wide area networks

    Directory of Open Access Journals (Sweden)

    Derek Groen

    2013-12-01

    We present MPWide, a lightweight communication library which allows efficient message passing over a distributed network. MPWide has been designed to connect applications running on distributed (super)computing resources, and to maximize the communication performance on wide area networks for those without administrative privileges. It can be used to provide message passing between applications, move files, and make very fast connections in client-server environments. MPWide has already been applied to enable distributed cosmological simulations across up to four supercomputers on two continents, and to couple two different bloodflow simulations to form a multiscale simulation.

  20. A low-memory intensive decoding architecture for double-binary convolutional turbo code

    OpenAIRE

    Zhan, Ming; Zhou, Liang; Wu, Jun

    2014-01-01

    Memory accesses account for a large part of the power consumption in the iterative decoding of double-binary convolutional turbo code (DB-CTC). To deal with this, a low-memory intensive decoding architecture is proposed for DB-CTC in this paper. The new scheme is based on an improved maximum a posteriori probability algorithm in which, instead of storing all of the state metrics, only a part of the state metrics is stored in the state metrics cache (SMC), and the memory size of the SMC is thus ...

  1. The relationships between message passing, pairwise, Kermack-McKendrick and stochastic SIR epidemic models

    CERN Document Server

    Wilkinson, Robert R; Sharkey, Kieran J

    2016-01-01

    We consider a generalised form of Karrer and Newman's (Phys. Rev. E 82, 016101, 2010) message passing representation of S(E)IR dynamics and show that this, and hence the original system of Karrer and Newman, has a unique feasible solution. The rigorous bounds on the stochastic dynamics, and exact results for trees, first obtained by Karrer and Newman, still hold in this more general setting. We also derive an expression which provides a rigorous lower bound on the variance of the number of susceptibles at any time for trees. By applying the message passing approach to stochastic SIR dynamics on symmetric graphs, we then obtain several key results. Firstly, we obtain a low-dimensional message passing system comprising only four equations. From this system, by assuming that transmission processes are Poisson and independent of the recovery processes, we derive a non-Markovian pairwise model which gives exactly the same infectious time series as the message passing system. Thus, this pairwise model provides th...

  2. Message passing theory for percolation models on multiplex networks with link overlap

    CERN Document Server

    Cellai, Davide; Bianconi, Ginestra

    2016-01-01

    Multiplex networks describe a large variety of complex systems including infrastructures, transportation networks and biological systems. Most of these networks feature a significant link overlap. It is therefore of particular importance to characterize the mutually connected giant component in these networks. Here we provide a message passing theory for characterizing the percolation transition in multiplex networks with link overlap and an arbitrary number of layers $M$. Specifically we propose and compare two message passing algorithms, that generalize the algorithm widely used to study the percolation transition in multiplex networks without link overlap. The first algorithm describes a directed percolation transition and admits an epidemic spreading interpretation. The second algorithm describes the emergence of the mutually connected giant component, that is the percolation transition, but does not preserve the epidemic spreading interpretation. We obtain the phase diagrams for the percolation and direc...

  3. The design of a standard message passing interface for distributed memory concurrent computers

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.W.

    1993-10-01

    This paper presents an overview of MPI, a proposed standard message passing interface for MIMD distributed memory concurrent computers. The design of MPI has been a collective effort involving researchers in the United States and Europe from many organizations and institutions. MPI includes point-to-point and collective communication routines, as well as support for process groups, communication contexts, and application topologies. While making use of new ideas where appropriate, the MPI standard is based largely on current practice.

  4. Gradient Computation In Linear-Chain Conditional Random Fields Using The Entropy Message Passing Algorithm

    CERN Document Server

    Ilic, Velimir M; Todorovic, Branimir T; Stankovic, Miomir S

    2010-01-01

    The paper proposes a new recursive algorithm for the exact computation of the linear-chain conditional random field gradient. The algorithm is an instance of Entropy Message Passing (EMP), introduced in our previous work, and is intended to enhance memory efficiency when applied to long observation sequences. Unlike the traditional algorithm based on the forward and backward recursions, the memory complexity of our algorithm does not depend on the sequence length, while it retains the same computational complexity as the standard algorithm.
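    The memory property claimed above can be illustrated on a simpler related quantity: the forward recursion below computes the log-partition function of a linear chain with per-step memory independent of the sequence length, checked against brute-force enumeration. The potentials are invented for illustration, and this is not the EMP gradient algorithm itself:

    ```python
    import itertools
    import math

    # Forward recursion for a linear-chain model: log Z is computed keeping only
    # one vector of S log-messages, regardless of sequence length.

    S = 2                                              # number of states
    unary = [[0.2, 1.1], [0.7, 0.3], [1.0, 0.5]]       # per-position log-scores
    pair = [[0.4, 1.2], [0.9, 0.1]]                    # transition log-scores

    def log_z_forward(unary, pair):
        alpha = [unary[0][s] for s in range(S)]        # log-domain messages
        for t in range(1, len(unary)):
            alpha = [unary[t][s] + math.log(sum(math.exp(a + pair[sp][s])
                                                for sp, a in enumerate(alpha)))
                     for s in range(S)]
        return math.log(sum(math.exp(a) for a in alpha))

    def log_z_brute(unary, pair):
        total = 0.0
        for path in itertools.product(range(S), repeat=len(unary)):
            e = sum(unary[t][s] for t, s in enumerate(path))
            e += sum(pair[path[t]][path[t + 1]] for t in range(len(path) - 1))
            total += math.exp(e)
        return math.log(total)

    assert abs(log_z_forward(unary, pair) - log_z_brute(unary, pair)) < 1e-8
    ```

    The textbook gradient additionally needs a backward pass (or stored forward messages); avoiding that storage is precisely where an EMP-style recursion differs.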

  5. ScaLAPACK: A linear algebra library for message-passing computers

    Energy Technology Data Exchange (ETDEWEB)

    Blackford, L.S.; Cleary, A.; Petitet, A.; Whaley, R.C.; Dongarra, J. [Dept. of Computer Science, Tennessee Univ., Knoxville, TN (United States)]; Choi, J. [Soongsil University (Korea)]; D'Azevedo, E. [Mathematical Science Section, Oak Ridge National Lab., TN (United States)]; Demmel, J.; Dhillon, I.; Stanley, K. [California Univ., Berkeley, CA (United States). Computer Science Div.]; Hammarling, S. [NAG Ltd. (England)]; Henry, G.; Walker, D. [Intel SSPD, Beaverton, OR (United States)]

    1997-01-06

    This article outlines the content and performance of some of the ScaLAPACK software. ScaLAPACK is a collection of mathematical software for linear algebra computations on distributed-memory computers. The importance of developing standards for computational and message-passing interfaces is discussed. We present the different components and building blocks of ScaLAPACK and provide initial performance results for selected PBLAS routines and a subset of ScaLAPACK driver routines.

  6. Unsupervised feature learning from finite data by message passing: Discontinuous versus continuous phase transition

    Science.gov (United States)

    Huang, Haiping; Toyoizumi, Taro

    2016-12-01

    Unsupervised neural network learning extracts hidden features from unlabeled training data. This is used as a pretraining step for further supervised learning in deep networks. Hence, understanding unsupervised learning is of fundamental importance. Here, we study the unsupervised learning from a finite number of data, based on the restricted Boltzmann machine where only one hidden neuron is considered. Our study inspires an efficient message-passing algorithm to infer the hidden feature and estimate the entropy of candidate features consistent with the data. Our analysis reveals that the learning requires only a few data if the feature is salient and extensively many if the feature is weak. Moreover, the entropy of candidate features monotonically decreases with data size and becomes negative (i.e., entropy crisis) before the message passing becomes unstable, suggesting a discontinuous phase transition. In terms of convergence time of the message-passing algorithm, the unsupervised learning exhibits an easy-hard-easy phenomenon as the training data size increases. All these properties are reproduced in an approximate Hopfield model, with an exception that the entropy crisis is absent, and only continuous phase transition is observed. This key difference is also confirmed in a handwritten digits dataset. This study deepens our understanding of unsupervised learning from a finite number of data and may provide insights into its role in training deep networks.

  7. Fourier Domain Decoding Algorithm of Non-Binary LDPC codes for Parallel Implementation

    CERN Document Server

    Kasai, Kenta

    2010-01-01

    For decoding non-binary low-density parity-check (LDPC) codes, logarithm-domain sum-product (Log-SP) algorithms were proposed to reduce the quantization effects of the SP algorithm in conjunction with the FFT. Since the FFT is not applicable in the logarithm domain, the computations required at check nodes in the Log-SP algorithms are computationally intensive. Worse, check nodes usually have higher degree than variable nodes. As a result, most of the decoding time is spent on check node computations, which leads to a bottleneck effect. In this paper, we propose a Log-SP algorithm in the Fourier domain. With this algorithm, the roles of variable nodes and check nodes are switched. The intensive computations are spread over lower-degree variable nodes, which can be efficiently calculated in parallel. Furthermore, we develop a fast calculation method for the estimated bits and syndromes in the Fourier domain.

  8. Implementation of the Message Passing Software Layer over SCI for theATLAS Second Level Trigger Testbeds

    CERN Document Server

    Giacomini, F; Bogaerts, A; Botterill, David R; Middleton, R; Wickens, F J; Werner, P

    2000-01-01

    This document describes how the Message Passing layer of the Reference Software for the Second Level Trigger has been implemented using the Scalable Coherent Interface (SCI) as the underlying networking technology.

  9. Two Parallel Swendsen-Wang Cluster Algorithms Using Message-Passing Paradigm

    CERN Document Server

    Lin, Shizeng

    2008-01-01

    In this article, we present two different parallel Swendsen-Wang cluster (SWC) algorithms using the Message Passing Interface (MPI). One is based on the Master-Slave Parallel Model (MSPM) and the other on the Data-Parallel Model (DPM). A speedup of 24 with 40 processors is achieved with the DPM, and of 16 with 37 processors with the MSPM. The speedup of both algorithms at different temperatures and system sizes is carefully examined both experimentally and theoretically, and a comparison of their efficiency is made. In the last section, based on these two parallel SWC algorithms, two parallel probability-changing cluster (PCC) algorithms are proposed.

  10. Performance Evaluation of Parallel Message Passing and Thread Programming Model on Multicore Architectures

    CERN Document Server

    Hasta, D T

    2010-01-01

    The current trend toward multicore architectures on shared memory systems underscores the need for parallelism. While there are several programming models for expressing parallelism, thread programming models such as OpenMP and POSIX threads have become a standard for these systems. MPI (Message Passing Interface), which remains the dominant model used in high-performance computing today, faces this challenge. The previous version of MPI, MPI-1, has no shared memory concept, and the current version, MPI-2, has only limited support for shared memory systems. In this research, MPI-2 is compared with OpenMP to see how well MPI performs on multicore / SMP (Symmetric Multiprocessor) machines. The comparison between OpenMP as a thread programming model and MPI as a message-passing programming model is conducted on multicore shared memory machine architectures to determine which has better performance in terms of speed and throughput. Application used to assess the scalability of the evaluated parall...

  11. MPICH-GQ: quality-of-service for message passing programs

    Energy Technology Data Exchange (ETDEWEB)

    Roy, A.; Foster, I.; Gropp, W.; Karonis, N.; Sander, V.; Toonen, B.

    2000-07-28

    Parallel programmers typically assume that all resources required for a program's execution are dedicated to that purpose. However, in local and wide area networks, contention for shared networks, CPUs, and I/O systems can result in significant variations in availability, with consequent adverse effects on overall performance. The authors describe a new message-passing architecture, MPICH-GQ, that uses quality of service (QoS) mechanisms to manage contention and hence improve performance of message passing interface (MPI) applications. MPICH-GQ combines new QoS specification, traffic shaping, QoS reservation, and QoS implementation techniques to deliver QoS capabilities to the high-bandwidth bursty flows, complex structures, and reliable protocols used in high-performance applications--characteristics very different from the low-bandwidth, constant bit-rate media flows and unreliable protocols for which QoS mechanisms were designed. Results obtained on a differentiated services testbed demonstrate their ability to maintain application performance in the face of heavy network contention.

  12. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  13. Interactive Encoding and Decoding Based on Binary LDPC Codes with Syndrome Accumulation

    CERN Document Server

    Meng, Jin

    2012-01-01

    Interactive encoding and decoding based on binary low-density parity-check codes with syndrome accumulation (SA-LDPC-IED) is proposed and investigated. Assume that the source alphabet is $\\mathbf{GF}(2)$, and the side information alphabet is finite. It is first demonstrated how to convert any classical universal lossless code $\\mathcal{C}_n$ (with block length $n$ and side information available to both the encoder and decoder) into a universal SA-LDPC-IED scheme. It is then shown that with the word error probability approaching 0 sub-exponentially with $n$, the compression rate (including both the forward and backward rates) of the resulting SA-LDPC-IED scheme is upper bounded by a functional of that of $\\mathcal{C}_n$, which in turn approaches the compression rate of $\\mathcal{C}_n$ for each and every individual sequence pair $(x^n,y^n)$ and the conditional entropy rate $\\mathrm{H}(X |Y)$ for any stationary, ergodic source and side information $(X, Y)$ as the average variable node degree $\\bar{l}$ of the und...

  14. New Techniques for Upper-Bounding the ML Decoding Performance of Binary Linear Codes

    CERN Document Server

    Ma, Xiao; Bai, Baoming

    2011-01-01

    In this paper, new techniques are presented to either simplify or improve most existing upper bounds on the maximum-likelihood (ML) decoding performance of binary linear codes over additive white Gaussian noise (AWGN) channels. Firstly, the recently proposed union bound using truncated weight spectra by Ma {\\em et al} is re-derived in a detailed way based on Gallager's first bounding technique (GFBT), where the "good region" is specified by a sub-optimal list decoding algorithm. The error probability caused by the bad region can be upper-bounded by the tail probability of a binomial distribution, while the error probability caused by the good region can be upper-bounded by most existing techniques. Secondly, we propose two techniques to tighten the union bound on the error probability caused by the good region. The first technique is based on pair-wise error probabilities, which can be further tightened by employing the independence between the error events and certain components of the received random ...

  15. A grid-enabled MPI : message passing in heterogeneous distributed computing systems.

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Karonis, N. T.

    2000-11-30

    Application development for high-performance distributed computing systems, or computational grids as they are sometimes called, requires grid-enabled tools that hide mundane aspects of the heterogeneous grid environment without compromising performance. As part of an investigation of these issues, the authors have developed MPICH-G, a grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers at different sites using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus grid toolkit. In this paper, the authors describe the MPICH-G implementation and present preliminary performance results.

  16. Parallelization of plasma 2-D hydrodynamics code using Message Passing Interface (MPI)

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Akira [Japan Atomic Energy Research Inst., Neyagawa, Osaka (Japan). Kansai Research Establishment

    1997-11-01

    A 2-dimensional hydrodynamics code using the CIP method is parallelized for the Intel Paragon XP/S massively parallel computer at the Kansai Research Establishment using MPI (Message Passing Interface). The communicator is found to be useful for dividing and parallelizing programs into functional modules. Using the process topology and derived data types, large-scale finite difference simulation codes can be significantly accelerated with simple coding of the area division method. MPI provides functions that simplify the processing of boundary conditions and the communication between adjacent nodes. Accelerations of 357 and 576 times are obtained for 400 and 782 nodes, respectively. MPI exploits the features of scalar massively parallel computers with distributed memories. Fast and portable codes can be developed using MPI. (author)

  17. A Message-Passing Hardware/Software Cosimulation Environment for Reconfigurable Computing Systems

    Directory of Open Access Journals (Sweden)

    Manuel Saldaña

    2009-01-01

    Full Text Available High-performance reconfigurable computers (HPRCs) provide a mix of standard processors and FPGAs to collectively accelerate applications. This introduces new design challenges, such as the need for portable programming models across HPRCs and system-level verification tools. To address the need for cosimulating a complete heterogeneous application using both software and hardware in an HPRC, we have created a tool called the Message-passing Simulation Framework (MSF). We have used it to simulate and develop an interface enabling an MPI-based approach to exchange data between X86 processors and hardware engines inside FPGAs. The MSF can also be used as an application development tool that enables multiple FPGAs in simulation to exchange messages amongst themselves and with X86 processors. As an example, we simulate a LINPACK benchmark hardware core using an Intel-FSB-Xilinx-FPGA platform to quickly prototype the hardware, to test the communications, and to verify the benchmark results.

  18. Message passing for integrating and assessing renewable generation in a redundant power grid

    Energy Technology Data Exchange (ETDEWEB)

    Zdeborova, Lenka [Los Alamos National Laboratory; Backhaus, Scott [Los Alamos National Laboratory; Chertkov, Michael [Los Alamos National Laboratory

    2009-01-01

    A simplified model of a redundant power grid is used to study the integration of fluctuating renewable generation. The grid consists of a large number of generator and consumer nodes. The net power consumption is determined by the difference between the gross consumption and the level of renewable generation. The gross consumption is drawn from a narrow distribution representing the predictability of aggregated loads, and we consider two different distributions representing wind and solar resources. Each generator is connected to D consumers, and redundancy is built in by connecting R ≤ D of these consumers to other generators. The lines are switchable so that at any instance each consumer is connected to a single generator. We explore the capacity of the renewable generation by determining the level of 'firm' generation capacity that can be displaced for different levels of redundancy R. We also develop a message-passing control algorithm for finding switch settings where no generator is overloaded.
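The switching problem in the abstract above amounts to assigning each consumer to one of its connected generators so that no generator is overloaded. The paper solves this with message passing at scale; a greedy baseline illustrates the constraint structure (all names, loads, and capacities below are made up):

```python
# Hedged toy model (not the paper's algorithm): each consumer may draw
# power from a small set of generators; assign each consumer to exactly
# one generator greedily, always picking the least-loaded candidate.

def greedy_switch_assignment(demands, options, capacity):
    """demands: consumer -> load; options: consumer -> candidate generators."""
    load = {g: 0.0 for gens in options.values() for g in gens}
    assignment = {}
    # Serve the largest consumers first to reduce the chance of overload.
    for consumer in sorted(demands, key=demands.get, reverse=True):
        g = min(options[consumer], key=lambda gen: load[gen])
        load[g] += demands[consumer]
        assignment[consumer] = g
    overloaded = [g for g, l in load.items() if l > capacity]
    return assignment, load, overloaded

demands = {"c1": 3.0, "c2": 2.0, "c3": 2.0, "c4": 1.0}
options = {"c1": ["g1", "g2"], "c2": ["g1"], "c3": ["g2"], "c4": ["g1", "g2"]}
assignment, load, overloaded = greedy_switch_assignment(demands, options, capacity=5.0)
```

Unlike the message-passing controller, the greedy rule can paint itself into a corner on harder instances; it only illustrates the single-generator-per-consumer constraint.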

  19. An Algorithm for Static Tracing of Message Passing Interface Programs Using Data Flow Analysis

    Directory of Open Access Journals (Sweden)

    Alaa I. Elnashar

    2014-12-01

    Full Text Available Message Passing Interface (MPI) is a well-known paradigm that is widely used in coding explicit parallel programs. MPI programs exchange data among parallel processes using communication routines. A program's execution trace depends on the way its processes communicate with each other. For the same program, many process transition states may appear due to the nondeterministic features of parallel execution. In this paper we present a new algorithm that statically generates the execution trace of a given MPI program using data flow analysis. The performance of the proposed algorithm is evaluated and compared with that of two heuristic techniques that use random and genetic algorithm approaches to generate trace sequences. The results show that the proposed algorithm scales well with program size and avoids the problem of process state explosion from which the other techniques suffer.

  20. Parallelization of the TRIGRS model for rainfall-induced landslides using the message passing interface

    Science.gov (United States)

    Alvioli, M.; Baum, R.L.

    2016-01-01

    We describe a parallel implementation of TRIGRS, the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Model for the timing and distribution of rainfall-induced shallow landslides. We have parallelized the four time-demanding execution modes of TRIGRS, namely both the saturated and unsaturated model with finite and infinite soil depth options, within the Message Passing Interface framework. In addition to new features of the code, we outline details of the parallel implementation and show the performance gain with respect to the serial code. Results are obtained both on commercial hardware and on a high-performance multi-node machine, showing the different limits of applicability of the new code. We also discuss the implications for the application of the model on large-scale areas and as a tool for real-time landslide hazard monitoring.

  1. A Message-Passing Algorithm for Counting Short Cycles in a Graph

    CERN Document Server

    Karimi, Mehdi

    2010-01-01

    A message-passing algorithm for counting short cycles in a graph is presented. For bipartite graphs, which are of particular interest in coding, the algorithm is capable of counting cycles of length g, g + 2, ..., 2g - 2, where g is the girth of the graph. For a general (non-bipartite) graph, cycles of length g, g + 1, ..., 2g - 1 can be counted. The algorithm is based on performing integer additions and subtractions in the nodes of the graph and passing extrinsic messages to adjacent nodes. The complexity of the proposed algorithm grows as $O(g|E|^2)$, where $|E|$ is the number of edges in the graph. For sparse graphs, the proposed algorithm significantly outperforms the existing algorithms in terms of computational complexity and memory requirements.
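The flavor of such edge-local counting can be illustrated with the closely related non-backtracking (Hashimoto) walk count: when g is the girth, every closed non-backtracking walk of length g traces a g-cycle, and each g-cycle is counted 2g times (once per starting directed edge and direction). A small sketch of this idea (an illustration in the same spirit, not the paper's exact algorithm):

```python
# Sketch: count girth-length cycles from closed non-backtracking walks.
# For girth g, (number of g-cycles) = trace(B^g) / (2g), where B is the
# Hashimoto (directed-edge, non-backtracking) matrix. Illustrative only.

def hashimoto_matrix(edges):
    """B[(u,v)][(v,w)] = 1 iff the walk u->v can continue v->w with w != u."""
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {d: i for i, d in enumerate(darts)}
    n = len(darts)
    B = [[0] * n for _ in range(n)]
    for (u, v) in darts:
        for (x, w) in darts:
            if x == v and w != u:
                B[index[(u, v)]][index[(x, w)]] = 1
    return B

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_girth_cycles(edges, g):
    """Number of cycles of length g; valid when g is the girth."""
    B = hashimoto_matrix(edges)
    P = B
    for _ in range(g - 1):
        P = mat_mul(P, B)
    trace = sum(P[i][i] for i in range(len(P)))
    return trace // (2 * g)  # each g-cycle yields 2g closed walks

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(count_girth_cycles(c5, 5))  # the 5-cycle itself → 1
```

The dense matrix power here costs far more than the paper's $O(g|E|^2)$ message passing; it only makes the counting identity concrete.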

  2. A message-passing approach for recurrent-state epidemic models on networks

    CERN Document Server

    Shrestha, Munik; Moore, Cristopher

    2015-01-01

    Epidemic processes are common out-of-equilibrium phenomena of broad interdisciplinary interest. Recently, dynamic message-passing (DMP) has been proposed as an efficient algorithm for simulating epidemic models on networks, and in particular for estimating the probability that a given node will become infectious at a particular time. To date, DMP has been applied exclusively to models with one-way state changes, as opposed to models like SIS (susceptible-infectious-susceptible) and SIRS (susceptible-infectious-recovered-susceptible) where nodes can return to previously inhabited states. Because many real-world epidemics can exhibit such recurrent dynamics, we propose a DMP algorithm for complex, recurrent epidemic models on networks. Our approach takes correlations between neighboring nodes into account while preventing causal signals from backtracking to their immediate source, and thus avoids "echo chamber effects" where a pair of adjacent nodes each amplify the probability that the other is infectious. We ...

  3. Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing

    CERN Document Server

    Ziniel, Justin

    2012-01-01

    In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While there has been a handful of previously proposed Bayesian dynamic CS algorithms in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracl...

  4. Message-Passing Multi-Cell Molecular Dynamics on the Connection Machine 5

    CERN Document Server

    Beazley, D M

    1993-01-01

    We present a new scalable algorithm for short-range molecular dynamics simulations on distributed memory MIMD multicomputer based on a message-passing multi-cell approach. We have implemented the algorithm on the Connection Machine 5 (CM-5) and demonstrate that meso-scale molecular dynamics with more than $10^8$ particles is now possible on massively parallel MIMD computers. Typical runs show single particle update-times of $0.15 \\mu s$ in 2 dimensions (2D) and approximately $1 \\mu s$ in 3 dimensions (3D) on a 1024 node CM-5 without vector units, corresponding to more than 1.8 GFlops overall performance. We also present a scaling equation which agrees well with actually observed timings.
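The multi-cell decomposition above rests on cell lists: particles are binned into cells whose side is at least the interaction cutoff, so interaction partners need only be sought in neighboring cells. A serial 2D sketch with periodic boundaries (the CM-5 code distributes these cells across nodes and exchanges boundary cells as messages; names here are illustrative):

```python
# Serial 2D cell-list neighbor search (illustrative of the multi-cell
# idea; the parallel code assigns blocks of cells to nodes and passes
# boundary-cell contents between neighbors).
from math import floor

def cell_list_pairs(points, box, cutoff):
    ncell = max(1, int(floor(box / cutoff)))  # cell side >= cutoff
    size = box / ncell
    cells = {}
    for idx, (x, y) in enumerate(points):
        key = (int(x / size) % ncell, int(y / size) % ncell)
        cells.setdefault(key, []).append(idx)
    pairs = set()
    c2 = cutoff * cutoff
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                other = ((cx + dx) % ncell, (cy + dy) % ncell)
                for i in members:
                    for j in cells.get(other, []):
                        if i < j:
                            xi, yi = points[i]; xj, yj = points[j]
                            ddx = min(abs(xi - xj), box - abs(xi - xj))  # periodic
                            ddy = min(abs(yi - yj), box - abs(yi - yj))
                            if ddx * ddx + ddy * ddy <= c2:
                                pairs.add((i, j))
    return pairs

points = [(0.5, 0.5), (1.0, 0.6), (4.0, 4.0), (9.8, 0.5)]
pairs = cell_list_pairs(points, box=10.0, cutoff=1.5)
```

For N particles at roughly constant density this reduces the neighbor search from O(N^2) to O(N), which is what makes the 10^8-particle runs quoted above feasible.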

  6. A Modified Nonparametric Message Passing Algorithm for Soft Iterative Channel Estimation

    Directory of Open Access Journals (Sweden)

    Linlin Duan

    2013-08-01

    Full Text Available Based on the factor graph framework, we derive a Modified Nonparametric Message Passing Algorithm (MNMPA) for soft iterative channel estimation in a Low-Density Parity-Check (LDPC) coded Bit-Interleaved Coded Modulation (BICM) system. The algorithm combines ideas from Particle Filtering (PF) with popular factor graph techniques. A Markov Chain Monte Carlo (MCMC) move step is added after the typical Sequential Importance Sampling (SIS)-resampling step to prevent particle impoverishment and to improve channel estimation precision. To reduce complexity, a new max-sum rule for updating particle-based messages is reformulated and two proper update schedules are designed. Simulation results illustrate the effectiveness of MNMPA and its comparison with other sum-product algorithms in Gaussian and non-Gaussian noise environments. We also study the effect of the particle number, pilot symbol spacing and different schedules on BER performance.
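The SIS-resampling step mentioned above can be sketched with standard systematic resampling, which duplicates high-weight particles and drops low-weight ones (the paper's MCMC move then perturbs the survivors to restore diversity). A generic sketch, not the authors' exact procedure:

```python
# Generic systematic resampling for a particle filter (illustrative;
# MNMPA adds an MCMC move after this step to fight impoverishment).
import random

def systematic_resample(particles, weights, rng=random.random):
    n = len(particles)
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    start = rng() / n          # one random offset, then evenly spaced picks
    out, j = [], 0
    for i in range(n):
        u = start + i / n
        while cdf[j] < u:
            j += 1
        out.append(particles[j])
    return out

particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.7, 0.1, 0.1, 0.1]   # first particle dominates
resampled = systematic_resample(particles, weights, rng=lambda: 0.5)
```

With the fixed offset 0.5 the dominant particle is drawn three times, showing exactly the impoverishment the MCMC move is meant to counteract.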

  7. Message-Passing Algorithms for Quadratic Programming Formulations of MAP Estimation

    CERN Document Server

    Kumar, Akshat

    2012-01-01

    Computing maximum a posteriori (MAP) estimation in graphical models is an important inference problem with many applications. We present message-passing algorithms for quadratic programming (QP) formulations of MAP estimation for pairwise Markov random fields. In particular, we use the concave-convex procedure (CCCP) to obtain a locally optimal algorithm for the non-convex QP formulation. A similar technique is used to derive a globally convergent algorithm for the convex QP relaxation of MAP. We also show that a recently developed expectation-maximization (EM) algorithm for the QP formulation of MAP can be derived from the CCCP perspective. Experiments on synthetic and real-world problems confirm that our new approach is competitive with max-product and its variations. Compared with CPLEX, we achieve more than an order-of-magnitude speedup in solving optimally the convex QP relaxation.

  8. Message-passing algorithm for two-dimensional dependent bit allocation

    Science.gov (United States)

    Sagetong, Phoom; Ortega, Antonio

    2003-05-01

    We address the bit allocation problem in scenarios where there exist two-dimensional (2D) dependencies in the bit allocation, i.e., where the allocation involves a 2D set of coding units (e.g., DCT blocks in standard MPEG coding) and where the rate-distortion (RD) characteristics of each coding unit depend on one or more of the other coding units. These coding units can be located anywhere in 2D space. As an example we consider MPEG-4 intra-coding where, in order to further reduce the redundancy between coefficients, both the DC and certain of the AC coefficients of each block are predicted from the corresponding coefficients in either the previous block in the same line (to the left) or the one above the current block. To find the optimal solution may be a time-consuming problem, given that the RD characteristics of each block depend on those of the neighbors. Greedy search approaches are popular due to their low complexity and low memory consumption, but they may be far from optimal due to the dependencies in the coding. In this work, we propose an iterative message-passing technique to solve 2D dependent bit allocation problems. This technique is based on (i) Soft-in/Soft-out (SISO) algorithms first used in the context of Turbo codes, (ii) a grid model, and (iii) Lagrangian optimization techniques. In order to solve this problem our approach is to iteratively compute the soft information of a current DCT block (intrinsic information) and pass the soft decision (extrinsic information) to other nearby DCT block(s). Since the computational complexity is also dominated by the data generation phase, i.e., in the Rate-Distortion (RD) data population process, we introduce an approximation method to eliminate the need to generate the entire set of RD points. Experimental studies reveal that the system that uses the proposed message-passing algorithm is able to outperform the greedy search approach by 0.57 dB on average. 
We also show that the proposed algorithm requires
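The Lagrangian ingredient of the approach above, taken in isolation (no 2D dependencies), selects for each coding unit the RD point minimizing D + λR. A minimal independent-blocks sketch with made-up RD points (the paper's message passing handles the dependent case):

```python
# Toy Lagrangian bit allocation over independent blocks. Shows only the
# D + λ·R selection rule on made-up RD points; the paper's algorithm
# additionally propagates soft information between dependent DCT blocks.

def allocate(rd_points_per_block, lam):
    """rd_points_per_block: list of [(rate, distortion), ...] per block."""
    choices = []
    for points in rd_points_per_block:
        rate, dist = min(points, key=lambda p: p[1] + lam * p[0])
        choices.append((rate, dist))
    total_rate = sum(r for r, _ in choices)
    total_dist = sum(d for _, d in choices)
    return choices, total_rate, total_dist

blocks = [
    [(1, 9.0), (2, 4.0), (4, 1.0)],   # (rate, distortion) candidates
    [(1, 6.0), (3, 2.0), (5, 0.5)],
]
choices, R, D = allocate(blocks, lam=1.0)
```

Sweeping λ traces out the convex hull of achievable (R, D) totals; λ is then tuned (e.g., by bisection) to meet a rate budget.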

  9. Supporting the Development of Soft-Error Resilient Message Passing Applications using Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2016-01-01

    Radiation-induced bit flip faults are of particular concern in extreme-scale high-performance computing systems. This paper presents a simulation-based tool that enables the development of soft-error resilient message passing applications by permitting the investigation of their correctness and performance under various fault conditions. The documented extensions to the Extreme-scale Simulator (xSim) enable the injection of bit flip faults at specific injection location(s) and fault activation time(s), while supporting a significant degree of configurability of the fault type. Experiments show that the simulation overhead with the new feature is ~2,325% for serial execution and ~1,730% at 128 MPI processes, both with very fine-grain fault injection. Fault injection experiments demonstrate the usefulness of the new feature by injecting bit flips in the input and output matrices of a matrix-matrix multiply application, revealing the vulnerability of data structures, masking, and error propagation. xSim is the first simulation-based MPI performance tool that supports both the injection of process failures and bit flip faults.

  10. Scampi: a robust approximate message-passing framework for compressive imaging

    Science.gov (United States)

    Barbier, Jean; Tramel, Eric W.; Krzakala, Florent

    2016-03-01

    Reconstruction of images from noisy linear measurements is a core problem in image processing, for which convex optimization methods based on total variation (TV) minimization have been the long-standing state of the art. We present an alternative probabilistic reconstruction procedure based on approximate message-passing, Scampi, which operates in the compressive regime, where the inverse imaging problem is underdetermined. While the proposed method is related to the recently proposed GrAMPA algorithm of Borgerding, Schniter, and Rangan, we further develop the probabilistic approach to compressive imaging by introducing an expectation-maximization learning of model parameters, making Scampi robust to model uncertainties. Additionally, our numerical experiments indicate that Scampi can provide reconstruction performance superior to both GrAMPA and convex approaches to TV reconstruction. Finally, through exhaustive best-case experiments, we show that in many cases the maximal performance of both Scampi and convex TV can be quite close, even though the approaches are a priori distinct. The theoretical reasons for this correspondence remain an open question. Nevertheless, the proposed algorithm remains more practical, as it requires far less parameter tuning to perform optimally.

  11. Using Partial Reconfiguration and Message Passing to Enable FPGA-Based Generic Computing Platforms

    Directory of Open Access Journals (Sweden)

    Manuel Saldaña

    2012-01-01

    Full Text Available Partial reconfiguration (PR is an FPGA feature that allows the modification of certain parts of an FPGA while the rest of the system continues to operate without disruption. This distinctive characteristic of FPGAs has many potential benefits but also challenges. The lack of good CAD tools and the deep hardware knowledge requirement result in a hard-to-use feature. In this paper, the new partition-based Xilinx PR flow is used to incorporate PR within our MPI-based message-passing framework to allow hardware designers to create template bitstreams, which are predesigned, prerouted, generic bitstreams that can be reused for multiple applications. As an example of the generality of this approach, four different applications that use the same template bitstream are run consecutively, with a PR operation performed at the beginning of each application to instantiate the desired application engine. We demonstrate a simplified, reusable, high-level, and portable PR interface for X86-FPGA hybrid machines. PR issues such as local resets of reconfigurable modules and context saving and restoring are addressed in this paper followed by some examples and preliminary PR overhead measurements.

  12. MPICH-G2 : a grid-enabled implementation of the message passing interface.

    Energy Technology Data Exchange (ETDEWEB)

    Karonis, N. T.; Toonen, B.; Foster, I.; Mathematics and Computer Science; Northern Illinois Univ.; Univ. of Chicago

    2003-05-01

    Application development for distributed-computing 'Grids' can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.

  13. Monitoring Data-Structure Evolution in Distributed Message-Passing Programs

    Science.gov (United States)

    Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)

    1996-01-01

    Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with light-weight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.

  14. Development of a low-latency scalar communication routine on message-passing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Pai, Rekha [Iowa State Univ., Ames, IA (United States)

    1994-01-11

    One of the most significant advances in computer systems over the past decade is parallel processing. To be scalable to a large number of processing nodes and to be able to support multiple levels and forms of parallelism and its flexible use, new parallel machines have to be multicomputer architectures that have general networking support and extremely low internode communication latencies. The performance of a program when ported to a parallel machine is limited mainly by the internode communication latencies of the machine. Therefore, the best parallel applications are those that seldom require communications which must be routed through the nodes. Thus the ratio of computation time to that of communication time is what determines, to a large extent, the performance metrics of an algorithm. The cost of synchronization and load imbalance appear secondary to that of the time required for internode communication and I/O, for communication intensive applications. This thesis is organized in chapters. The first chapter deals with the communication strategies in various message-passing computers. A taxonomy of inter-node communication strategies is presented in the second chapter and a comparison of the strategies in some existing machines is done. The implementation of communication in nCUBE Vertex O.S is explained in the third chapter. The fourth chapter deals with the communication routines in the Vertex O.S, and the last chapter explains the development and implementation of the scalar communication call. Finally some conclusions are presented.

  16. Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications

    Science.gov (United States)

    Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka

    2017-07-01

    This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, which is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.
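For a rank-one spiked Wigner model with a ±1 prior, the prior-matched denoiser is tanh. The sketch below illustrates only the iterative denoise-and-multiply structure of such algorithms; it deliberately omits the Onsager memory term of the full Low-RAMP/TAP equations, and the problem size, signal-to-noise ratio, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 400, 5.0                      # size and signal-to-noise ratio (illustrative)
x = rng.choice([-1.0, 1.0], size=n)    # planted rank-one spike, Rademacher prior

# Spiked Wigner observation: Y = sqrt(lam/n) * x x^T + symmetric Gaussian noise
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
Y = np.sqrt(lam / n) * np.outer(x, x) + (W + W.T) / np.sqrt(2.0)

# Matched denoiser for a +/-1 prior is tanh; the Onsager correction of the
# full AMP/TAP iteration is omitted here for brevity.
v = x + rng.normal(0.0, 1.0, n)        # informative initialization (illustration only)
for _ in range(30):
    v = np.tanh(np.sqrt(lam / n) * (Y @ v))

overlap = abs(np.dot(np.sign(v), x)) / n
```

With a signal this strong the estimate aligns almost perfectly with the planted spike; the interesting regimes studied in the paper are near the phase transitions, where the full state-evolution analysis is needed.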

  17. Implementation of a Message Passing Interface into a Cloud-Resolving Model for Massively Parallel Computing

    Science.gov (United States)

    Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve

    2004-01-01

    The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI maintains a similar code structure between the whole domain and the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). It also requires minimal changes to the original code, so it is easily modified and/or managed by model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (fast Fourier transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computation than the compressible version; 3) equal or approximately equal numbers of slices in the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation during computation.
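The halo scheme described above can be illustrated without MPI: two arrays stand in for two tasks, and plain copies stand in for the send/receive of halo (ghost) cells. Names and sizes are illustrative, not taken from the GCE code.

```python
import numpy as np

# Global 1-D field split across two "tasks"; each keeps one halo cell per side.
global_field = np.arange(10, dtype=float)
left = np.zeros(7)    # 5 interior cells + 2 halo cells
right = np.zeros(7)
left[1:6] = global_field[0:5]
right[1:6] = global_field[5:10]
# (Physical-boundary halos left[0] and right[6] are left at zero; a real model
# would fill them from boundary conditions.)

def exchange_halos(a, b):
    """Update halo cells from the neighbour's edge interior cells.
    In an MPI code these two copies would be a send/recv pair."""
    a[6] = b[1]   # a's right halo <- b's leftmost interior cell
    b[0] = a[5]   # b's left halo  <- a's rightmost interior cell

exchange_halos(left, right)

# A 3-point stencil can now run over interior cells with no remote fetches
# during the computational stage, exactly as the halo regime requires.
interior_ok = (left[6] == global_field[5]) and (right[0] == global_field[4])
```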

  18. Error Correction Coding Meets Cyber-Physical Systems: Message-Passing Analysis of Self-Healing Interdependent Networks

    CERN Document Server

    Behfarnia, Ali

    2016-01-01

    Coupling cyber and physical systems gives rise to numerous engineering challenges and opportunities. An important challenge is the contagion of failure from one system to another, which can lead to large-scale cascading failures. However, the self-healing ability emerges as a valuable opportunity, whereby the overlaying cyber network can cure failures in the underlying physical network. To capture both self-healing and contagion, this paper considers a graphical model representation of an interdependent cyber-physical system, in which nodes represent various cyber or physical functionalities, and edges capture the interactions between the nodes. A message-passing algorithm used in low-density parity-check codes is extended to this representation to study the dynamics of failure propagation and healing. By applying a density evolution analysis to this algorithm, network reaction to initial disruptions is investigated. It is proved that as the number of message-passing iterations increases, the network re...

  19. Parallel finite difference beam propagation method based on message passing interface: application to MMI couplers with two-dimensional confinement

    Institute of Scientific and Technical Information of China (English)

    Chaojun Yan; Wenbiao Peng; Haijun Li

    2007-01-01

    The alternate-direction implicit finite difference beam propagation method (FD-BPM) is used to analyze two-dimensional (2D) symmetrical multimode interference (MMI) couplers. The positions of the images at the output plane and the length of the multimode waveguide are accurately determined numerically. In order to reduce calculation time, the algorithm is parallelized using the message passing interface, and the simulation is carried out on eight personal computers.
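Each half-step of an alternate-direction implicit FD-BPM reduces to solving tridiagonal linear systems along one grid direction. A minimal Thomas-algorithm sketch of that kernel (illustrative, not the authors' implementation):

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n): the kernel of each ADI half-step.
    lower[0] and upper[-1] are unused. Plain Thomas algorithm, no pivoting."""
    n = len(diag)
    c = np.array(upper, dtype=float)   # upper diagonal (modified in place)
    d = np.array(diag, dtype=float)    # main diagonal
    b = np.array(rhs, dtype=float)     # right-hand side
    for i in range(1, n):              # forward elimination
        w = lower[i] / d[i - 1]
        d[i] -= w * c[i - 1]
        b[i] -= w * b[i - 1]
    x = np.empty(n)                    # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# Check against a dense solve on a small diagonally dominant system.
n = 6
lower = np.full(n, -1.0); diag = np.full(n, 4.0); upper = np.full(n, -1.0)
rhs = np.arange(1.0, n + 1)
A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
x = thomas_solve(lower, diag, upper, rhs)
```

In the parallel FD-BPM, many such independent systems (one per grid line) can be distributed across MPI processes, which is what makes the ADI formulation attractive for message passing.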

  20. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    Science.gov (United States)

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
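The docking strategy described above (preordering ligands from most to least flexible and scaling the energy-evaluation budget with the number of rotatable bonds) can be sketched as follows. The field names, baseline, and per-bond increment are hypothetical values for illustration, not taken from Autodock4.lga.MPI.

```python
# Hypothetical compound records; "rotatable_bonds" drives both ordering
# and the per-compound evaluation budget.
compounds = [
    {"name": "lig_a", "rotatable_bonds": 2},
    {"name": "lig_b", "rotatable_bonds": 9},
    {"name": "lig_c", "rotatable_bonds": 5},
]

BASE_EVALS = 250_000        # illustrative baseline budget
EVALS_PER_BOND = 125_000    # illustrative per-bond increment

def docking_schedule(cpds):
    """Order compounds most-to-least flexible and assign evaluation budgets."""
    ordered = sorted(cpds, key=lambda c: c["rotatable_bonds"], reverse=True)
    return [(c["name"], BASE_EVALS + EVALS_PER_BOND * c["rotatable_bonds"])
            for c in ordered]

schedule = docking_schedule(compounds)
```

Dispatching the most flexible (longest-running) compounds first is a classic longest-job-first heuristic: it reduces the tail of idle workers at the end of the run.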

  1. Distributed primal–dual interior-point methods for solving tree-structured coupled convex problems using message-passing

    DEFF Research Database (Denmark)

    Khoshfetrat Pakazad, Sina; Hansson, Anders; Andersen, Martin S.;

    2016-01-01

    In this paper, we propose a distributed algorithm for solving coupled problems with chordal sparsity or an inherent tree structure which relies on primal–dual interior-point methods. We achieve this by distributing the computations at each iteration, using message-passing. In comparison to existing...... distributed algorithms for solving such problems, this algorithm requires far fewer iterations to converge to a solution with high accuracy. Furthermore, it is possible to compute an upper-bound for the number of required iterations which, unlike existing methods, only depends on the coupling structure...... in the problem. We illustrate the performance of our proposed method using a set of numerical examples....

  2. Improved Iterative Hard- and Soft-Reliability Based Majority-Logic Decoding Algorithms for Non-Binary Low-Density Parity-Check Codes

    Science.gov (United States)

    Xiong, Chenrong; Yan, Zhiyuan

    2014-10-01

    Non-binary low-density parity-check (LDPC) codes have some advantages over their binary counterparts, but unfortunately their decoding complexity is a significant challenge. The iterative hard- and soft-reliability based majority-logic decoding algorithms are attractive for non-binary LDPC codes, since they involve only finite field additions and multiplications as well as integer operations and hence have significantly lower complexity than other algorithms. In this paper, we propose two improvements to the majority-logic decoding algorithms. Instead of the accumulation of reliability information in the existing majority-logic decoding algorithms, our first improvement is a new reliability information update. The new update not only results in better error performance and fewer iterations on average, but also further reduces computational complexity. Since existing majority-logic decoding algorithms tend to have a high error floor for codes whose parity check matrices have low column weights, our second improvement is a re-selection scheme, which leads to much lower error floors, at the expense of more finite field operations and integer operations, by identifying periodic points, re-selecting intermediate hard decisions, and changing reliability information.
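For intuition, the majority-logic idea can be shown in its simplest binary form: a bit-flipping decoder that flips the bit participating in the most unsatisfied parity checks. This is the classical binary analogue, not the non-binary hard- or soft-reliability algorithms proposed in the paper.

```python
import numpy as np

# (7,4) Hamming parity-check matrix as a small stand-in for an LDPC code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(H, r, max_iter=10):
    """Majority-logic-style bit flipping: flip the bit with the most
    unsatisfied checks until the syndrome is zero or iterations run out."""
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            break                    # all parity checks satisfied
        votes = syndrome @ H         # per-bit count of unsatisfied checks
        r[np.argmax(votes)] ^= 1     # flip the worst offender
    return r

received = np.zeros(7, dtype=int)
received[6] = 1                      # single error on the all-zero codeword
decoded = bit_flip_decode(H, received)
```

The non-binary algorithms in the paper replace this binary vote with accumulated finite-field reliability information, which is what the proposed update and re-selection schemes improve.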

  3. Main-Branch Structure Iterative Detection Using Approximate Message Passing for Uplink Large-Scale Multiuser MIMO Systems

    Directory of Open Access Journals (Sweden)

    Qifeng Zou

    2016-01-01

    The emerging large-scale/massive multi-input multi-output (MIMO) system combined with orthogonal frequency division multiplexing (OFDM) is considered a key technology for its advantage of improving spectral efficiency. In this paper, we introduce an iterative detection algorithm for uplink large-scale multiuser MIMO-OFDM communication systems. We design a main-branch structure iterative turbo detector using the approximate message passing algorithm simplified by linear approximation (AMP-LA), and we use the mean square error (MSE) criterion to calculate the correlation coefficients between the main detector and the branch detector for a given iteration. The complexity of our method is compared with other detection algorithms. The simulation results show that our scheme achieves better performance than conventional detection methods with acceptable complexity.

  4. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  5. Standardization of MPI (Message-Passing Interface) and China's Strategy

    Institute of Scientific and Technical Information of China (English)

    Su Yunlin

    2002-01-01

    The motivation for proposing the message-passing interface, and thereby the development of its standardization, is surveyed in this paper. The goals and contents of the standard are then presented. Finally, the paper discusses how China should deal with the issue.

  6. A Study on the Effect of Communication Performance on Message-Passing Parallel Programs: Methodology and Case Studies

    Science.gov (United States)

    Sarukkai, Sekhar R.; Yan, Jerry; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    From a source-program perspective, the performance achieved on distributed/parallel systems is governed by the underlying message-passing library overhead and the network capabilities of the architecture. Studying the impact of changes in these features on the source program can have a significant influence on the development of next-generation system designs. In this paper we introduce a simple and robust tool that can be used for this purpose. This tool is based on event-driven simulation of programs that generates a new set of trace events - preserving causality and partial order - corresponding to the expected execution of the program in the simulated environment. Trace events can be visualized, and source-level profile information can be used to pinpoint the locations of the program that are most significantly affected by changing system parameters in the simulated environment. We present a number of examples from the NAS benchmark suite, executed on the Intel Paragon and iPSC/860, that are used to identify and expose performance bottlenecks under varying system parameters. Specific aspects of the system that significantly affect these benchmarks are presented and discussed.

  8. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    Science.gov (United States)

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives.
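The master/slave dispatch pattern described above can be mimicked in miniature with a shared task queue and a few threads standing in for MPI processes; the task names and placeholder scoring are illustrative, not VinaLC code.

```python
import queue
import threading

# A shared queue plays the role of the master handing docking tasks to workers.
tasks = queue.Queue()
for i in range(20):
    tasks.put(f"compound_{i:03d}")   # hypothetical compound identifiers

results = []
lock = threading.Lock()

def worker():
    """Pull tasks until the queue is empty, like a slave process polling
    the master for the next docking calculation."""
    while True:
        try:
            name = tasks.get_nowait()
        except queue.Empty:
            return
        score = -len(name)           # placeholder for a docking score
        with lock:
            results.append((name, score))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In VinaLC itself the outer level is MPI across nodes and the inner level is multithreading within a node; the queue-based dynamic dispatch shown here is what keeps thousands of workers busy despite uneven per-compound run times.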

  9. QLWFPC2: Parallel-Processing Quick-Look WFPC2 Stellar Photometry Based on the Message Passing Interface

    Science.gov (United States)

    Mighell, K. J.

    2004-07-01

    I describe a new parallel-processing stellar photometry code called QLWFPC2 (http://www.noao.edu/staff/mighell/qlwfpc2), which is designed to do quick-look analysis of two entire WFPC2 observations from the Hubble Space Telescope in under 5 seconds using a fast Beowulf cluster with a Gigabit-Ethernet local network. This program is written in ANSI C and uses the MPICH implementation of the Message Passing Interface from Argonne National Laboratory for the parallel-processing communications, the CFITSIO library (from HEASARC at NASA's GSFC) for reading the standard FITS files from the HST Data Archive, and the Parameter Interface Library (from the INTEGRAL Science Data Center) for the IRAF parameter-file user interface. QLWFPC2 running on 4 processors takes about 2.4 seconds to analyze the WFPC2 archive datasets u37ga407r.c0.fits (F555W; 300 s) and u37ga401r.c0.fits (F814W; 300 s) of M54 (NGC 6715), which is the bright massive globular cluster near the center of the nearby Sagittarius dwarf spheroidal galaxy. The analysis of these HST observations of M54 led to the serendipitous discovery of more than 50 new bright variable stars in the central region of M54. Most of the candidate variable stars are found on the PC1 images of the cluster center --- a region where no variables had been reported by previous ground-based studies of variables in M54. This discovery is an example of how QLWFPC2 can be used to quickly explore the time domain of observations in the HST Data Archive.

  10. Implementation and Optimization of a Message Passing Interface-Based Warshall Algorithm

    Institute of Scientific and Technical Information of China (English)

    Zhao Yan; Guo Shanliang; She Lingling

    2007-01-01

    The Message Passing Interface parallel programming model is one of the most widely used approaches, but it leaves the task of exploiting parallelism entirely to the programmer, so program quality and efficiency vary with the programmer's skill and style. In the Message Passing Interface environment, a traditional serial program is converted into a parallel program to improve its performance. In addition, the functions provided by MPI are used to further optimize the parallel program and improve its performance.
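As a concrete illustration of parallelizing Warshall's algorithm under message passing, the sketch below partitions the reachability matrix into row blocks: the sequential loop over blocks stands in for per-rank work, and the copy of row k stands in for the broadcast an MPI version would perform. This is a sketch of the decomposition, not the paper's code.

```python
# Boolean reachability matrix for a 4-vertex chain 0 -> 1 -> 2 -> 3.
n = 4
reach = [[0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
         [0, 0, 0, 0]]

row_blocks = [range(0, 2), range(2, 4)]   # two "ranks", two rows each

for k in range(n):
    row_k = reach[k][:]                   # the row every rank needs (the broadcast)
    for block in row_blocks:              # each rank updates only its own rows
        for i in block:
            if reach[i][k]:               # i reaches k, so i reaches all k reaches
                for j in range(n):
                    reach[i][j] |= row_k[j]
```

Only row k must be communicated per outer iteration, which is why the row-block decomposition maps naturally onto MPI broadcast plus independent local updates.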

  11. Statistical Physics, Optimization, Inference, and Message-Passing Algorithms : Lecture Notes of the Les Houches School of Physics : Special Issue, October 2013

    CERN Document Server

    Ricci-Tersenghi, Federico; Zdeborova, Lenka; Zecchina, Riccardo; Tramel, Eric W; Cugliandolo, Leticia F

    2015-01-01

    This book contains a collection of the presentations that were given in October 2013 at the Les Houches Autumn School on statistical physics, optimization, inference, and message-passing algorithms. In the last decade, there has been increasing convergence of interest and methods between theoretical physics and fields as diverse as probability, machine learning, optimization, and inference problems. In particular, much theoretical and applied work in statistical physics and computer science has relied on the use of message-passing algorithms and their connection to the statistical physics of glasses and spin glasses. For example, both the replica and cavity methods have led to recent advances in compressed sensing, sparse estimation, and random constraint satisfaction, to name a few. This book’s detailed pedagogical lectures on statistical inference, computational complexity, the replica and cavity methods, and belief propagation are aimed particularly at PhD students, post-docs, and young researchers desir...

  12. TP Decoding

    CERN Document Server

    Lu, Yi; Montanari, Andrea

    2007-01-01

    'Tree pruning' (TP) is an algorithm for probabilistic inference on binary Markov random fields. It was recently derived by Dror Weitz and used to construct the first fully polynomial approximation scheme for counting independent sets up to the 'tree uniqueness threshold'. It can be regarded as a clever method for pruning the belief propagation computation tree in such a way as to exactly account for the effect of loops. In this paper we generalize the original algorithm to make it suitable for decoding linear codes, and discuss various schemes for pruning the computation tree. Further, we present the outcomes of numerical simulations on several linear codes, showing that tree pruning allows one to interpolate continuously between belief propagation and maximum a posteriori decoding. Finally, we discuss theoretical implications of the new method.
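Belief propagation, which tree pruning generalizes, computes exact marginals on cycle-free graphs. A minimal sum-product example on a three-variable chain, checked against brute-force enumeration (the potentials are arbitrary positive tables chosen for illustration):

```python
import itertools
import numpy as np

# Chain MRF x0 - x1 - x2 over binary variables; BP is exact on this tree.
phi = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
psi01 = np.array([[1.0, 0.5], [0.5, 1.5]])   # pairwise potential on (x0, x1)
psi12 = np.array([[2.0, 0.3], [0.3, 1.0]])   # pairwise potential on (x1, x2)

# Sum-product messages into x1, then its belief (marginal).
m0_to_1 = (phi[0][:, None] * psi01).sum(axis=0)   # sum over x0
m2_to_1 = (phi[2][None, :] * psi12).sum(axis=1)   # sum over x2
belief1 = phi[1] * m0_to_1 * m2_to_1
belief1 /= belief1.sum()

# Brute-force marginal of x1 for comparison.
joint = np.zeros(2)
for x0, x1, x2 in itertools.product([0, 1], repeat=3):
    joint[x1] += (phi[0][x0] * phi[1][x1] * phi[2][x2]
                  * psi01[x0, x1] * psi12[x1, x2])
joint /= joint.sum()
```

On loopy graphs (such as the Tanner graph of a linear code) BP is only approximate, and tree pruning is one way to correct for the loops exactly.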

  13. An Improved Decoding Algorithm for Non-Binary LDPC Codes

    Institute of Scientific and Technical Information of China (English)

    Sun Zhuo; Hong Haili

    2011-01-01

    An improved decoding algorithm is presented to address the difficulty of hardware implementation of the FFT-BP decoding algorithm for non-binary LDPC codes. The innovation of the new algorithm is the introduction of logarithmic operations, which transform multiplications into additions in the logarithmic domain; the algorithm thus has lower complexity and is more convenient for hardware implementation. A simulation was carried out for a regular non-binary LDPC code (486, 972) of rate 0.5 over GF(4) on an additive white Gaussian noise channel. The results show that, compared with the conventional FFT-BP algorithm, the improved algorithm greatly reduces the hardware complexity of decoding at the cost of a very small performance loss (a signal-to-noise ratio loss of about 0.07 dB at a bit error rate of 10^-4).
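The log-domain substitution the abstract describes rests on two identities: products of likelihoods become sums of log-values, and sums of likelihoods are handled by the Jacobian logarithm (the max* operation). A minimal sketch:

```python
import math

def max_star(a, b):
    """ln(e^a + e^b), computed stably as a max plus a small correction.
    In hardware the log1p/exp correction is typically a small lookup table."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

p, q = 0.02, 0.003                 # two likelihood values (illustrative)
lp, lq = math.log(p), math.log(q)

log_product = lp + lq              # replaces p * q with an addition
log_sum = max_star(lp, lq)         # replaces p + q with max* (one compare, one add)
```

This is the generic mechanism by which log-domain decoders trade multipliers for adders; the paper applies it specifically to the FFT-BP algorithm over GF(4).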

  14. MPPIE: An RDFS Parallel Inference Framework Based on Message Passing

    Institute of Scientific and Technical Information of China (English)

    Lü Xiaoling; Wang Xin; Feng Zhiyong; Rao Guozheng; Zhang Xiaowang; Xu Guangquan

    2016-01-01

    Reasoning over semantic data poses a challenge, since large volumes of RDF (resource description framework) data have been published with the rapid development of the Semantic Web. This paper proposes an RDFS (RDF Schema) parallel inference framework based on a message-passing mechanism. The graph structure of RDF data is exploited to abstract the inference process into an edge-addition model. Vertices execute the parallel inference algorithm, sending reasoning messages to other vertices to complete the inference process. When all derived triples have been added as new edges of the initial RDF graph, the computation terminates. MPPIE (message passing parallel inference engine), the RDFS parallel inference framework, is implemented on top of the open-source framework Giraph. Experimental results on both the benchmark dataset LUBM and the real-world dataset DBpedia show that MPPIE runs an order of magnitude faster than WebPIE, the state-of-the-art scalable semantic inference engine, and exhibits good scalability.
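The edge-addition model can be illustrated with the rdfs:subClassOf transitivity rule alone: derived triples join the graph as new edges until a fixed point is reached. This is a toy sequential sketch of the idea, not MPPIE's vertex-centric Giraph implementation.

```python
# rdfs:subClassOf triples as (sub, super) edges of a graph.
edges = {("A", "B"), ("B", "C"), ("C", "D")}

# Iterate the transitivity rule (rdfs11): (x subClassOf y), (y subClassOf z)
# entails (x subClassOf z). Each derived triple is added as a new edge, and
# the loop reruns until no new edge appears (the fixed point).
changed = True
while changed:
    changed = False
    for (x, y) in list(edges):
        for (y2, z) in list(edges):
            if y == y2 and (x, z) not in edges:
                edges.add((x, z))   # derived triple joins the graph
                changed = True
```

In MPPIE the same fixed-point computation is vertex-centric: each superstep, a vertex forwards the classes it can reach along its edges, and the computation halts when no vertex has new messages to send.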

  15. Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary.

    Science.gov (United States)

    Li, J; Guo, L-X; Zeng, H; Han, X-B

    2009-06-01

    A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one sea-surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of the sea surface for different polarizations and the bistatic scattering from a sea surface at a large incident angle and high wind speed.

  16. JSATS Decoder Software Manual

    Energy Technology Data Exchange (ETDEWEB)

    Flory, Adam E.; Lamarche, Brian L.; Weiland, Mark A.

    2013-05-01

    The Juvenile Salmon Acoustic Telemetry System (JSATS) Decoder is a software application that converts a digitized acoustic signal (a waveform stored in the .bwm file format) into a list of potential JSATS Acoustic MicroTransmitter (AMT) tag codes along with other data about the signal, including time of arrival and signal-to-noise ratios (SNR). This software is capable of decoding single files and directories and of viewing raw acoustic waveforms. When coupled with the JSATS Detector, the Decoder is capable of decoding in 'real time' and can also provide statistical information about acoustic beacons placed within receive range of hydrophones within a JSATS array. This document details the features and functionality of the software. The document begins with software installation instructions (section 2), followed in order by instructions for decoder setup (section 3), decoding process initiation (section 4), and monitoring of beacons (section 5) using real-time decoding features. The last section of the manual describes the beacon, beacon statistics, and results file formats. This document does not cover the raw binary waveform file format.

  17. Enhancing Binary Images of Non-Binary LDPC Codes

    CERN Document Server

    Bhatia, Aman; Siegel, Paul H

    2011-01-01

    We investigate the reasons behind the superior performance of belief propagation decoding of non-binary LDPC codes over their binary images when the transmission occurs over the binary erasure channel. We show that although decoding over the binary image has lower complexity, it has worse performance owing to its larger number of stopping sets relative to the original non-binary code. We propose a method to find redundant parity-checks of the binary image that eliminate these additional stopping sets, so that we achieve performance comparable to that of the original non-binary LDPC code with lower decoding complexity.

  18. A Cost-Efficient LDPC Decoder for DVB-S2 with the Solution to Address Conflict Issue

    Science.gov (United States)

    Ying, Yan; Bao, Dan; Yu, Zhiyi; Zeng, Xiaoyang; Chen, Yun

    In this paper, a cost-efficient LDPC decoder for DVB-S2 is presented. Based on the Normalized Min-Sum algorithm and the turbo-decoding message-passing (TDMP) algorithm, a dual line-scan scheduling is proposed to enable hardware reuse. Furthermore, we present a solution to the address conflict issue caused by the characteristics of the parity-check matrix defined by DVB-S2 LDPC codes. Based on the SMIC 0.13 µm standard CMOS process, the LDPC decoder has an area of 12.51 mm². The required operating frequency to meet the throughput requirement of 135 Mbps with a maximum iteration number of 30 is 105 MHz. Compared with the latest published DVB-S2 LDPC decoder, the proposed decoder reduces area cost by 34%.
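The Normalized Min-Sum check-node update underlying such decoders computes, for each edge, the sign product and the scaled minimum magnitude of the other incoming LLRs. A small sketch; the scaling factor 0.75 is a common choice assumed here, not a value taken from the paper:

```python
ALPHA = 0.75  # normalization factor (a common choice; illustrative)

def check_node_update(llrs):
    """Normalized Min-Sum: each outgoing message uses the sign product and
    the scaled minimum magnitude of all *other* incoming LLRs."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        out.append(sign * ALPHA * min(abs(v) for v in others))
    return out

msgs = check_node_update([2.0, -3.0, 5.0])
```

In hardware only the two smallest magnitudes and the overall sign product need to be kept per check node, which is what makes min-sum variants so much cheaper than full sum-product.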

  19. An efficient CDMA decoder for correlated information sources

    CERN Document Server

    Efraim, Hadar; Shental, Ori; Kanter, Ido

    2010-01-01

    We consider the detection of correlated information sources in the ubiquitous Code-Division Multiple-Access (CDMA) scheme. We propose a message-passing based scheme for detecting correlated sources directly, with no need for source coding. The detection is done simultaneously over a block of transmitted binary symbols (word). Simulation results are provided demonstrating a substantial improvement in bit-error-rate in comparison with the unmodified detector and the alternative of source compression. The robustness of the error-performance improvement is shown under practical model settings, including wrong estimation of the generating Markov transition matrix and finite-length spreading codes.

  20. High Hardware Utilization and Low Memory Block Requirement Decoding of QC-LDPC Codes

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ling; LIU Rongke; HOU Yi; ZHANG Xiaolin

    2012-01-01

    This paper presents a simple yet effective decoding scheme for general quasi-cyclic low-density parity-check (QC-LDPC) codes, which not only achieves high hardware utilization efficiency (HUE), but also brings about a great reduction in memory blocks without any performance degradation. The main idea is to split the check matrix into several row blocks and then to perform the improved message-passing computations sequentially, block by block. As the decoding algorithm improves, the sequential tie between the two-phase computations is broken, so that the two phases can be overlapped, which brings about high HUE. Two overlapping schemes are also presented, each of which suits a different situation. In addition, an efficient memory arrangement scheme is proposed to reduce the large memory block requirement of the LDPC decoder. As an example, for the rate-0.4 LDPC code selected from Chinese Digital TV Terrestrial Broadcasting (DTTB), our decoding saves over 80% of the memory blocks compared with conventional decoding, and the decoder achieves 0.97 HUE. Finally, the rate-0.4 LDPC decoder is implemented on an FPGA device EP2S30 (speed grade -5). Using 8 row processing units, the decoder can achieve a maximum net throughput of 28.5 Mbps at 20 iterations.

  1. Analysis of peeling decoder for MET ensembles

    CERN Document Server

    Hinton, Ryan

    2009-01-01

    The peeling decoder introduced by Luby, et al. allows analysis of LDPC decoding for the binary erasure channel (BEC). For irregular ensembles, they analyze the decoder state as a Markov process and present a solution to the differential equations describing the process mean. Multi-edge type (MET) ensembles allow greater precision through specifying graph connectivity. We generalize the peeling decoder for MET ensembles and derive analogous differential equations. We offer a new change of variables and solution to the node fraction evolutions in the general (MET) case. This result is preparatory to investigating finite-length ensemble behavior.
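
    The peeling procedure analyzed above can be sketched for a generic LDPC code over the BEC: repeatedly find a parity check with exactly one erased participant and solve for it. A minimal sketch (the function name and data layout are illustrative, not from the paper):

```python
def peel(checks, received):
    """Peeling decoder for the binary erasure channel (BEC).

    checks:   list of checks, each a list of bit indices (rows of H)
    received: list with entries 0, 1, or None (None = erasure)
    Returns the word with as many erasures resolved as possible.
    """
    word = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if word[i] is None]
            if len(erased) == 1:          # a "degree-one" check is solvable
                i = erased[0]
                # the erased bit equals the XOR of the known bits in the check
                word[i] = sum(word[j] for j in check if j != i) % 2
                progress = True
    return word
```

    For example, with checks {0,1,2} and {1,2,3} and bits 0 and 3 erased, both erasures are recovered in one pass.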

  2. A Bidirectional Decoding Architecture for Double Binary Convolutional Turbo Codes Based on Symmetry

    Institute of Scientific and Technical Information of China (English)

    詹明; 周亮

    2012-01-01

    This paper presents a bidirectional decoding architecture for the double binary convolutional turbo code (DB CTC) specified in the IEEE 802.16m standard, aimed at accelerating its decoding speed. A branch metric matrix is defined to decrease the decoding computational complexity. Furthermore, the forward and backward factor matrices are defined, the symmetry between them is derived, and this symmetry is applied to the bidirectional parallel calculation of the forward metrics, backward metrics, and a posteriori log-likelihood ratios. A structure diagram of a DB CTC decoder adopting the scheme is constructed and the iteration process is analyzed in detail. The scheme is evaluated in terms of computational complexity, memory size, and decoding speed, and simulated bit-error-rate curves are given. The analyses show that the proposed bidirectional parallel method doubles the decoding speed compared with the conventional scheme without increasing computational complexity or memory size.

  3. Trellis-Based Check Node Processing for Low-Complexity Nonbinary LP Decoding

    CERN Document Server

    Punekar, Mayur

    2011-01-01

    Linear Programming (LP) decoding is emerging as an attractive alternative to decode Low-Density Parity-Check (LDPC) codes. However, the earliest LP decoders proposed for binary and nonbinary LDPC codes are not suitable for use at moderate and large code lengths. To overcome this problem, Vontobel et al. developed an iterative Low-Complexity LP (LCLP) decoding algorithm for binary LDPC codes. The variable and check node calculations of binary LCLP decoding algorithm are related to those of binary Belief Propagation (BP). The present authors generalized this work to derive an iterative LCLP decoding algorithm for nonbinary linear codes. Contrary to binary LCLP, the variable and check node calculations of this algorithm are in general different from that of nonbinary BP. The overall complexity of nonbinary LCLP decoding is linear in block length; however the complexity of its check node calculations is exponential in the check node degree. In this paper, we propose a modified BCJR algorithm for efficient check n...

  4. LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G

    2009-01-01

    This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.

  5. Analysis of the Sufficient Path Elimination Window for the Maximum-Likelihood Sequential-Search Decoding Algorithm for Binary Convolutional Codes

    CERN Document Server

    Shieh, Shin-Lin; Han, Yunghsiang S

    2007-01-01

    A common problem with sequential-type decoding is that, at signal-to-noise ratios (SNRs) below the one corresponding to the cutoff rate, the average decoding complexity per information bit and the required stack size grow rapidly with the information length. In order to alleviate this problem in the maximum-likelihood sequential decoding algorithm (MLSDA), we propose to directly eliminate the top path whose end node is $\\Delta$ trellis levels prior to the farthest node among all nodes expanded thus far by the sequential search. Following a random coding argument, we analyze the early-elimination window $\\Delta$ that results in negligible performance degradation for the MLSDA. Our analytical results indicate that the required early-elimination window for negligible performance degradation is just twice the constraint length for rate one-half convolutional codes. For rate one-third convolutional codes, the required early-elimination window even reduces to the constraint length. The suggestive theore...

  6. Decode-and-Forward Based Differential Modulation for Cooperative Communication System with Unitary and Non-Unitary Constellations

    CERN Document Server

    Bhatnagar, Manav R

    2012-01-01

    In this paper, we derive a maximum likelihood (ML) decoder of the differential data in a decode-and-forward (DF) based cooperative communication system utilizing uncoded transmissions. This decoder is applicable to complex-valued unitary and non-unitary constellations suitable for differential modulation. The ML decoder helps in improving the diversity of the DF based differential cooperative system using an erroneous relaying node. We also derive a piecewise linear (PL) decoder of the differential data transmitted in the DF based cooperative system. The proposed PL decoder significantly reduces the decoding complexity as compared to the proposed ML decoder without any significant degradation in the receiver performance. Existing ML and PL decoders of the differentially modulated uncoded data in the DF based cooperative communication system are only applicable to binary modulated signals like binary phase shift keying (BPSK) and binary frequency shift keying (BFSK), whereas, the proposed decoders are applicab...

  7. Divide & Concur and Difference-Map BP Decoders for LDPC Codes

    CERN Document Server

    Yedidia, Jonathan S; Draper, Stark C

    2010-01-01

    The "Divide and Concur'' (DC) algorithm, recently introduced by Gravel and Elser, can be considered a competitor to the belief propagation (BP) algorithm, in that both algorithms can be applied to a wide variety of constraint satisfaction, optimization, and probabilistic inference problems. We show that DC can be interpreted as a message-passing algorithm on a constraint graph, which helps make the comparison with BP more clear. The "difference-map'' dynamics of the DC algorithm enables it to avoid "traps'' which may be related to the "trapping sets'' or "pseudo-codewords'' that plague BP decoders of low-density parity check (LDPC) codes in the error-floor regime. We investigate two decoders for low-density parity-check (LDPC) codes based on these ideas. The first decoder is based directly on DC, while the second decoder borrows the important "difference-map'' concept from the DC algorithm and translates it into a BP-like decoder. We show that this "difference-map belief propagation'' (DMBP) decoder has drama...

  8. Real-time minimal bit error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  9. Real-time minimal-bit-error probability decoding of convolutional codes

    Science.gov (United States)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
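
    The fixed-delay Viterbi baseline against which the new algorithm is compared in the two records above can be sketched on a toy code. Below is a hard-decision Viterbi decoder for the rate-1/2, constraint-length-3 code with generators (7, 5) octal; this specific code, the hard-decision branch metric, and all names are illustrative (the papers treat general rate-1/n codes and soft decisions):

```python
def encode(msg):
    """Rate-1/2, K=3 convolutional encoder, generators (7, 5) octal."""
    m1 = m2 = 0
    out = []
    for u in msg:
        out += [u ^ m1 ^ m2, u ^ m2]   # the two generator outputs
        m2, m1 = m1, u                 # shift the register
    return out

def viterbi_decode(bits):
    """Hard-decision Viterbi decoder for the (7, 5) code.
    bits: flat list of received channel bits, two per trellis step.
    Returns the decoded information bits (including any tail bits)."""
    INF = float("inf")
    n_states = 4                       # 2 memory bits -> 4 states

    def step(state, u):                # one trellis transition
        m1, m2 = state >> 1, state & 1
        out = (u ^ m1 ^ m2, u ^ m2)
        nxt = (u << 1) | m1
        return nxt, out

    metric = [0] + [INF] * (n_states - 1)      # start in the zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(bits), 2):
        r = (bits[t], bits[t + 1])
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                nxt, out = step(s, u)
                d = (out[0] != r[0]) + (out[1] != r[1])  # Hamming metric
                if metric[s] + d < new_metric[nxt]:
                    new_metric[nxt] = metric[s] + d
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

    With two tail zeros appended, the decoder recovers the message even after a single channel bit flip, since this code has free distance 5.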

  10. Code Design and Shuffled Iterative Decoding of a Quasi-Cyclic LDPC Coded OFDM System

    Institute of Scientific and Technical Information of China (English)

    LIU Binbin; BAI Dong; GE Qihong; MEI Shunliang

    2009-01-01

    In multipath environments, the error-rate performance of orthogonal frequency division multiplexing (OFDM) is severely degraded by the deeply faded subcarriers, so powerful error-correcting codes must be used with OFDM. This paper presents a quasi-cyclic low-density parity-check (LDPC) coded OFDM system, in which the redundant bits of each codeword are mapped to a higher-order modulation constellation. The optimal degree distribution was calculated using density evolution. The corresponding quasi-cyclic LDPC code was then constructed using circulant permutation matrices. Group-shuffled message-passing scheduling was used in the iterative decoding. Simulation results show that the system achieves better error-rate performance and faster decoding convergence than conventional approaches on both additive white Gaussian noise (AWGN) and Rayleigh fading channels.
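
    The circulant-permutation construction mentioned above can be sketched as follows. Each entry of a small base (exponent) matrix is replaced by a cyclically shifted identity block; the toy base matrix, block size, and the convention that -1 denotes the all-zero block are illustrative, not the paper's optimized code:

```python
def circulant(size, shift):
    """size x size identity matrix cyclically shifted right by `shift`.
    shift = -1 denotes the all-zero block (a common QC-LDPC convention)."""
    if shift < 0:
        return [[0] * size for _ in range(size)]
    return [[1 if (c - r) % size == shift else 0 for c in range(size)]
            for r in range(size)]

def expand(base, size):
    """Expand a QC-LDPC base matrix into the full parity-check matrix
    by replacing each exponent entry with the corresponding circulant."""
    rows = []
    for brow in base:
        blocks = [circulant(size, s) for s in brow]
        for r in range(size):
            rows.append([b[r][c] for b in blocks for c in range(size)])
    return rows
```

    For example, `expand([[0, 1], [2, -1]], 3)` yields a 6x6 matrix whose top-left block is the identity and whose bottom-right block is all zeros.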

  11. Message-Passing-Based Parallel Computing Environments: A Comparison of MPI and PVM

    Institute of Scientific and Technical Information of China (English)

    邵子立; 宋杰

    2000-01-01

    This paper compares two parallel computing environments widely used in distributed computing: MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). Starting from the design philosophies of MPI and PVM, their respective features are analyzed in eight aspects: portability, task control and allocation, resource management, fault tolerance, secure communication contexts and multithreading, communication modes, name services, and message handles.

  12. Improved Decoding Algorithm for Non-binary LDPC Codes Based on the Logarithm Domain

    Institute of Scientific and Technical Information of China (English)

    贺刚; 柏鹏; 彭卫东; 赵红言; 苏兮; 林晋福; 王明芳; 何苹

    2013-01-01

    To further reduce the decoding complexity of non-binary LDPC codes and overcome the drawbacks of the extended min-sum (EMS) algorithm, an improved decoding algorithm in the logarithm domain is proposed. On the one hand, the algorithm adaptively selects the order of the FHT in each iteration according to the average variance of the variable nodes' probability-distribution pairs; on the other hand, the multiplications in the check-node update are converted into additions in the logarithm domain, which makes the algorithm easier to implement in hardware. Simulation results show that, compared with the EMS algorithm, both the performance and the convergence rate of the proposed algorithm are clearly improved.

  13. Efficient Decoding of Turbo Codes with Nonbinary Belief Propagation

    Directory of Open Access Journals (Sweden)

    Thierry Lestable

    2008-05-01

    This paper presents a new approach to decode turbo codes using a nonbinary belief propagation decoder. The proposed approach can be decomposed into two main steps. First, a nonbinary Tanner graph representation of the turbo code is derived by clustering the binary parity-check matrix of the turbo code. Then, a group belief propagation decoder runs several iterations on the obtained nonbinary Tanner graph. We show in particular that it is necessary to add a preprocessing step on the parity-check matrix of the turbo code in order to ensure good topological properties of the Tanner graph and then good iterative decoding performance. Finally, by capitalizing on the diversity which comes from the existence of distinct efficient preprocessings, we propose a new decoding strategy, called decoder diversity, that intends to take benefits from the diversity through collaborative decoding schemes.

  14. Parallel LDPC Decoding on GPUs Using a Stream-Based Computing Approach

    Institute of Scientific and Technical Information of China (English)

    Gabriel Falcão; Shinichi Yamagiwa; Vitor Silva; Leonel Sousa

    2009-01-01

    Low-Density Parity-Check (LDPC) codes are powerful error-correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase of the computational power of commodity graphics processing units (GPUs), new opportunities have arisen to develop general-purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, because of irregular memory access patterns, memory bandwidth and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing the kernels' execution regardless of the GPU manufacturer and operating system. Moreover, to assess the obtained results in relative terms, we have also implemented LDPC decoders on general-purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the solution proposed here efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
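
    The Tanner-graph message passing that such decoders parallelize can be illustrated in its simplest hard-decision form, the Gallager-A rule. This toy sketch is for intuition only, not the soft belief-propagation kernels the paper maps onto GPUs:

```python
def gallager_a(H, received, iters=10):
    """Hard-decision Gallager-A message-passing decoder (a sketch).
    H: parity checks as lists of variable indices.
    received: hard channel decisions (0/1). Returns a tentative word."""
    cols = {}                              # checks touching each variable
    for i, row in enumerate(H):
        for j in row:
            cols.setdefault(j, []).append(i)
    # variable-to-check messages, initialized to the channel values
    v2c = {(i, j): received[j] for i, row in enumerate(H) for j in row}
    for _ in range(iters):
        # check-to-variable: parity (XOR) of the other incoming messages
        c2v = {(i, j): sum(v2c[i, k] for k in H[i] if k != j) % 2
               for i, row in enumerate(H) for j in row}
        # variable-to-check: repeat the channel value unless *all* other
        # checks disagree with it (the Gallager-A rule)
        for j, checks in cols.items():
            for i in checks:
                others = [c2v[c, j] for c in checks if c != i]
                if others and all(b != received[j] for b in others):
                    v2c[i, j] = 1 - received[j]
                else:
                    v2c[i, j] = received[j]
    # tentative decision: majority vote of channel value and check messages
    word = []
    for j in range(len(received)):
        votes = [c2v[i, j] for i in cols.get(j, [])] + [received[j]]
        word.append(1 if 2 * sum(votes) > len(votes) else 0)
    return word
```

    On the (7,4) Hamming code, for instance, a single flipped parity bit is corrected by the final majority vote.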

  15. List Decoding Tensor Products and Interleaved Codes

    CERN Document Server

    Gopalan, Parikshit; Raghavendra, Prasad

    2008-01-01

    We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. We show that for {\\em every} code, the ratio of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new combinatorial bounds for some natural codes including multivariate polynomials where the degree in each variable is bounded. We show that for {\\em every} code, its list decoding radius remains unchanged under $m$-wise interleaving for an integer $m$. This generalizes a recent result of Dinur et al \\cite{DGKS}, who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). Using the notion of generalized Hamming weights, we give better list size bounds for {\\em both} tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we reduce the task of boundi...

  16. An adaptable binary entropy coder

    Science.gov (United States)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  17. Weak Mutation Testing and Its Transformation for Message-Passing Parallel Programs

    Institute of Scientific and Technical Information of China (English)

    巩敦卫; 陈永伟; 田甜

    2016-01-01

    A parallel program can exhibit nondeterministic execution, which increases the complexity and difficulty of program testing. The mutation testing of message-passing parallel programs is investigated, and an approach to transforming the weak mutation testing of such a program is presented, with the purpose of improving the efficiency of mutation testing. First, mutation condition statements are built based on the type of each statement and the changes resulting from mutating it. Then, a new program is formed by inserting all these mutation condition statements into the original program. As a result, the problem of weak mutation testing of the original program is transformed into that of covering the branches of the new program, which has the advantage that existing branch-coverage methods can be applied to the mutation testing problem. The proposed approach is applied to test eight benchmark message-passing parallel programs, and the empirical results demonstrate that the new approach is not only feasible but also necessary.
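
    The transformation described above can be illustrated on a single hypothetical sequential statement; the statement, the marker name, and the logging mechanism are all illustrative (real message-passing mutants would involve communication operators):

```python
# Original statement under test (hypothetical):  z = x + y
# Arithmetic-operator mutant:                    z = x - y
# Weak mutation asks: does some test reach a state where the two differ?
# The transformation inserts a "mutation condition statement" whose
# true-branch is covered exactly when the mutant is weakly killed.

def instrumented(x, y, log):
    if (x + y) != (x - y):          # mutation condition for the +/- mutant
        log.add("kill_add_to_sub")  # reaching here weakly kills the mutant
    z = x + y                       # the original computation is unchanged
    return z
```

    A branch-coverage test generator can now target the inserted branch directly: covering it is equivalent to weakly killing the mutant (here, any input with y != 0).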

  18. Message passing with relaxed moment matching

    OpenAIRE

    Qi, Yuan; Guo, Yandong

    2012-01-01

    Bayesian learning is often hampered by large computational expense. As a powerful generalization of popular belief propagation, expectation propagation (EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP can be sensitive to outliers and suffer from divergence for difficult cases. To address this issue, we propose a new approximate inference approach, relaxed expectation propagation (REP). It relaxes the moment matching requirement of expectation propagation by addin...

  19. Thread-Oriented Message-Passing Interface

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    Thread-Oriented Message-Passing Interface. Tong Weiqin, Zhou Qinghua, Gu Zhikui (School of Computer Engineering and Science). Abstract: In this paper th...

  20. Iterative List Decoding

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    We analyze the relation between iterative decoding and the extended parity check matrix. By considering a modified version of bit flipping, which produces a list of decoded words, we derive several relations between decodable error patterns and the parameters of the code. By developing a tree...... of codewords at minimal distance from the received vector, we also obtain new information about the code....

  1. Non binary LDPC codes over the binary erasure channel: density evolution analysis

    CERN Document Server

    Savin, Valentin

    2008-01-01

    In this paper we present a thorough analysis of non binary LDPC codes over the binary erasure channel. First, the decoding of non binary LDPC codes is investigated. The proposed algorithm performs on-the-fly decoding, i.e. it starts decoding as soon as the first symbols are received, which generalizes the erasure decoding of binary LDPC codes. Next, we evaluate the asymptotic performance of ensembles of non binary LDPC codes by using the density evolution method. Density evolution equations are derived by taking into consideration both the irregularity of the bipartite graph and the probability distribution of the graph edge labels. Finally, the infinite-length performance of some ensembles of non binary LDPC codes for different edge label distributions is shown.

  2. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is pos...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....

  3. Analysis and Design Optimization of the Communication Process on Message-Passing-Based MPSoC

    Institute of Scientific and Technical Information of China (English)

    付方发; 王进祥; 王良; 吴子旭; 喻明艳

    2011-01-01

    By analyzing the message-passing communication process on MPSoC, two effective ways to enhance communication performance are identified: reducing one-to-many message sending latency and improving the efficiency of concurrently receiving multiple messages. A broadcast optimization strategy based on the hardware abstraction layer is proposed to cut down the cost of data copying, which efficiently decreases one-to-many message sending latency. To address the bottleneck of concurrently receiving multiple messages, while reducing interaction counts and improving the efficiency of long-message communication, a novel network interface (NI) receiving strategy that combines a lookup-table (LUT) mechanism with DMA mode is presented. Experimental results show that both the broadcast optimization strategy and the NI receiving strategy achieve better performance; in a 64×64 matrix multiplication, the overall performance is increased by approximately 50%.

  4. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    In this study, the focus was on the use of a ternary tree instead of a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary-tree codes was implemented. To reduce the memory size and speed up the search for a symbol in a Huffman tree, we exploited the properties of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree. In the first algorithm, we find the starting and ending address of a code to determine its length; in the second algorithm, we decode the ternary-tree code using a binary search method.
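
    The basic ternary-tree decoding that the paper's memory-efficient structure accelerates can be sketched with a plain nested-tuple tree; the tree shape and symbols below are illustrative:

```python
def decode_ternary(trits, tree):
    """Walk a ternary Huffman tree over a sequence of trits (0/1/2).
    tree: nested 3-tuples; leaves are decoded symbols."""
    out, node = [], tree
    for t in trits:
        node = node[t]                    # follow the trit's branch
        if not isinstance(node, tuple):   # reached a leaf
            out.append(node)
            node = tree                   # restart at the root
    return out
```

    With the tree ('a', 'b', ('c', 'd', 'e')), the codes are a=0, b=1, c=20, d=21, e=22, so the trit stream 0,2,1,1 decodes to a, d, b.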

  5. Lossy Source Compression of Non-Uniform Binary Sources Using GQ-LDGM Codes

    CERN Document Server

    Cappellari, Lorenzo

    2010-01-01

    In this paper, we study the use of GF(q)-quantized LDGM codes for binary source coding. By employing quantization, it is possible to obtain binary codewords with a non-uniform distribution. The obtained statistics are hence suitable for optimal, direct quantization of non-uniform Bernoulli sources. We employ a message-passing algorithm combined with a decimation procedure in order to perform compression. The experimental results based on GF(q)-LDGM codes with regular degree distributions yield performances quite close to the theoretical rate-distortion bounds.

  6. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s.

  7. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...

  8. Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses

    Science.gov (United States)

    Huang, Haiping

    2017-05-01

    Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of the unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. Continuous phase transition is also confirmed depending on the embedded feature strength in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of restricted Boltzmann machine learning, and moreover provides an important theoretical basis for building simplified deep networks.

  9. Multiplicatively Repeated Non-Binary LDPC Codes

    CERN Document Server

    Kasai, Kenta; Poulliat, Charly; Sakaniwa, Kohichi

    2010-01-01

    We propose non-binary LDPC codes concatenated with multiplicative repetition codes. By multiplicatively repeating the (2,3)-regular non-binary LDPC mother code of rate 1/3, we construct rate-compatible codes of lower rates 1/6, 1/9, 1/12,... Surprisingly, such simple low-rate non-binary LDPC codes outperform the best low-rate binary LDPC codes so far. Moreover, we propose the decoding algorithm for the proposed codes, which can be decoded with almost the same computational complexity as that of the mother code.

  10. Decoding Astronomical Concepts

    Science.gov (United States)

    Durisen, Richard H.; Pilachowski, Catherine A.

    2004-01-01

    Two astronomy professors, using the Decoding the Disciplines process, help their students use abstract theories to analyze light and to visualize the enormous scale of astronomical concepts. (Contains 5 figures.)

  11. Optimization of MPEG decoding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1999-01-01

    MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing.

  12. HIGH THROUGHPUT OF MAP PROCESSOR USING PIPELINE WINDOW DECODING

    Directory of Open Access Journals (Sweden)

    P. Nithya

    2012-11-01

    Turbo codes are among the most efficient error-correcting codes, approaching the Shannon limit. High throughput in a turbo decoder can be achieved by parallelizing several Soft-Input Soft-Output (SISO) units. In this way, multiple SISO decoders work on the same data frame at the same time. The soft outputs can be split into three terms: the soft channel input, the a priori input, and the extrinsic value. The extrinsic value is used in the next iteration. A high-throughput Max-Log-MAP processor is presented that supports both single-binary (SB) and double-binary (DB) convolutional turbo codes. Decoding of these codes, however, is an iterative process that requires a high computation rate and incurs latency. Thus, in order to achieve high throughput and reduce latency, serial processing techniques are used, and pipeline window (PW) decoding is introduced to support arbitrary frame sizes with high throughput.

  13. Analysis and design of raptor codes for joint decoding using Information Content evolution

    CERN Document Server

    Venkiah, Auguste; Declercq, David

    2007-01-01

    In this paper, we present an analytical study of the convergence of raptor codes under joint decoding over the binary-input additive white Gaussian noise channel (BIAWGNC), and derive an optimization method. We use Information Content evolution under a Gaussian approximation, and focus on a new decoding scheme that proves to be more efficient: the joint decoding of the two code components of the raptor code. In our general model, the classical tandem decoding scheme appears as a subcase, and thus the design of LT codes is also possible.

  14. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... give: a fast maximum-likelihood list decoder based on the Guruswami–Sudan algorithm; a new variant of Power decoding, Power Gao, along with some new insights into Power decoding; and a new, module-based method for performing rational interpolation for the Wu algorithm. We also show how to decode...

  15. Serial Min-max Decoding Algorithm Based on Variable Weighting for Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Zhongxun Wang

    2013-09-01

    In this paper, we analyze the min-max decoding algorithm for nonbinary LDPC (low-density parity-check) codes and propose a serial min-max decoding algorithm. Combining it with weighted processing of the variable-node messages, we then propose a serial min-max decoding algorithm based on variable weighting for nonbinary LDPC codes. Simulations indicate that, at a bit error rate of 10^-3, the serial min-max decoding algorithm based on variable weighting offers additional coding gains of 0.2 dB, 0.8 dB and 1.4 dB over the serial min-max decoding algorithm, the traditional min-max decoding algorithm and the traditional min-sum algorithm, respectively, on an additive white Gaussian noise channel with binary phase shift keying modulation.

  16. Interpretability in Linear Brain Decoding

    OpenAIRE

    Kia, Seyed Mostafa; Passerini, Andrea

    2016-01-01

    Improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of brain decoding models. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, we present a simple definition for interpretability of linear brain decoding models. Then, we propose to combine the...

  17. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  18. Decoding Children's Expressions of Affect.

    Science.gov (United States)

    Feinman, Joel A.; Feldman, Robert S.

    Mothers' ability to decode the emotional expressions of their male and female children was compared to the decoding ability of non-mothers. Happiness, sadness, fear and anger were induced in children in situations that varied in terms of spontaneous and role-played encoding modes. It was hypothesized that mothers would be more accurate decoders of…

  19. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
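
    As a concrete illustration of the ternary framework, here is a minimal Python sketch (not the paper's proposed measures) of decoding a vector of binary classifier outputs against a coding matrix containing the zero symbol; normalizing by the number of non-zero positions is one simple way to counter the bias that ignoring positions introduces into plain Hamming decoding:

```python
def ecoc_decode(pred, M):
    """Decode classifier outputs `pred` (entries in {-1,+1}) against a
    ternary ECOC coding matrix M (one row per class, entries in {-1,0,+1}).
    Positions coded 0 ("do not care") are skipped, and the mismatch count is
    normalized by the number of positions the class actually uses."""
    best_cls, best_d = None, float("inf")
    for cls, row in enumerate(M):
        used = [(p, c) for p, c in zip(pred, row) if c != 0]
        d = sum(p != c for p, c in used) / len(used)
        if d < best_d:
            best_cls, best_d = cls, d
    return best_cls
```

For example, with three classes and three binary problems, a prediction that matches only the non-zero positions of a class's row decodes to that class even though its full-row Hamming distance is nonzero.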

  20. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    Science.gov (United States)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-05-01

    Multiple-valued logic is a promising approach to reducing the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology: by grouping 2 bits together into quaternary values, it is well suited to the encoded realization of binary logic functions. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we first design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the scales applied in this paper are based on nanometric area.
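
    For reference, the classical (non-quantum) behavior of the units these circuits realize can be sketched in a few lines; the function names are illustrative only:

```python
def quaternary_decode(x):
    """Classical 1-of-4 (quaternary) decoder: an input digit x in {0,1,2,3}
    raises exactly one of four output lines."""
    return [1 if i == x else 0 for i in range(4)]

def quaternary_mux(inputs, sel):
    """4-to-1 multiplexer built on the decoder: the select digit gates
    exactly one of the four inputs through."""
    return sum(v * g for v, g in zip(inputs, quaternary_decode(sel)))

def quaternary_demux(value, sel):
    """1-to-4 demultiplexer: routes `value` onto the line chosen by `sel`."""
    return [value * g for g in quaternary_decode(sel)]
```

The quantum designs in the paper implement these same truth tables reversibly with 1-qudit and Muthukrishnan-Stroud gates.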

  1. Novel Quaternary Quantum Decoder, Multiplexer and Demultiplexer Circuits

    Science.gov (United States)

    Haghparast, Majid; Monfared, Asma Taheri

    2017-02-01

    Multiple-valued logic is a promising approach to reducing the width of reversible or quantum circuits. Moreover, quaternary logic is considered a good choice for future quantum computing technology: by grouping 2 bits together into quaternary values, it is well suited to the encoded realization of binary logic functions. The quaternary decoder, multiplexer, and demultiplexer are essential units of quaternary digital systems. In this paper, we first design a quantum realization of the quaternary decoder circuit using quaternary 1-qudit gates and quaternary Muthukrishnan-Stroud gates. We then present quantum realizations of quaternary multiplexer and demultiplexer circuits using the constructed quaternary decoder circuit and quaternary controlled Feynman gates. The suggested circuits have lower quantum cost and hardware complexity than the existing designs currently used in quaternary digital systems. All the scales applied in this paper are based on nanometric area.

  2. Continuous motion decoding from EMG using independent component analysis and adaptive model training.

    Science.gov (United States)

    Zhang, Qin; Xiong, Caihua; Chen, Wenbin

    2014-01-01

    Surface electromyography (EMG) is popularly used to decode human motion intention for robot movement control. Traditional motion decoding methods use pattern recognition to provide a binary control command, which can only move the robot in predefined, limited patterns. In this work, we propose a motion decoding method that can accurately estimate 3-dimensional (3-D) continuous upper limb motion from multi-channel EMG signals alone. To protect the muscle activity estimates from the motion artifacts and muscle crosstalk that are especially prominent in upper limb motion, independent component analysis (ICA) was applied to extract the independent source EMG signals. The motion data were also reduced from a 4-manifold to a 2-manifold by principal component analysis (PCA). A hidden Markov model (HMM) was proposed to decode the motion from the EMG signals after the model was trained by an adaptive model identification process. Experimental data were used to train the decoding model and validate the motion decoding performance. Comparing the decoded motion with the measured motion shows that the proposed motion decoding strategy is feasible for decoding 3-D continuous motion from EMG signals.

  3. Combinatorial limitations of a strong form of list decoding

    CERN Document Server

    Guruswami, Venkatesan

    2012-01-01

    We prove the following results concerning the combinatorics of list decoding, motivated by the exponential gap between the known upper bound (of $O(1/\\gamma)$) and lower bound (of $\\Omega_p(\\log (1/\\gamma))$) for the list-size needed to decode up to radius $p$ with rate $\\gamma$ away from capacity, i.e., $1-\\h(p)-\\gamma$ (here $p\\in (0,1/2)$ and $\\gamma > 0$). We prove that in any binary code $C \\subseteq \\{0,1\\}^n$ of rate $1-\\h(p)-\\gamma$, there must exist a set $\\mathcal{L} \\subset C$ of $\\Omega_p(1/\\sqrt{\\gamma})$ codewords such that the average distance of the points in $\\mathcal{L}$ from their centroid is at most $pn$. In other words, there must exist $\\Omega_p(1/\\sqrt{\\gamma})$ codewords with low "average radius". The motivation for this result is that it gives a list-size lower bound for a strong notion of list decoding which has implicitly been used in the previous negative results for list decoding. (The usual notion of list decoding corresponds to replacing {\\em average} radius by the {\\em min...

  4. GENETIC ALGORITHM FOR DECODING LINEAR CODES OVER AWGN AND FADING CHANNELS

    Directory of Open Access Journals (Sweden)

    H. BERBIA

    2011-08-01

    Full Text Available This paper introduces a decoder for binary linear codes based on a Genetic Algorithm (GA) over Gaussian and Rayleigh flat fading channels. The performance and computational complexity of our decoder applied to BCH and convolutional codes are good compared to the Chase-2 and Viterbi algorithms, respectively. We show that our algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations.
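
    A minimal sketch of the idea, assuming a toy (7,4) Hamming code, hard-decision fitness, elitism and single-point crossover (the paper's actual decoder, channel model and parameter choices differ):

```python
import random

PAR = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]  # parity equations of a (7,4) Hamming code

def encode(info):
    return list(info) + [info[a] ^ info[b] ^ info[c] for a, b, c in PAR]

def fitness(info, received):
    # Fewer disagreements with the received hard decisions = fitter.
    return -sum(x != y for x, y in zip(encode(info), received))

def ga_decode(received, pop_size=30, gens=60, pmut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, received), reverse=True)
        nxt = pop[:2]                            # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)       # select parents among the fittest
            cut = rng.randrange(1, 4)            # single-point crossover
            nxt.append([bit ^ (rng.random() < pmut)   # bit-flip mutation
                        for bit in a[:cut] + b[cut:]])
        pop = nxt
    return max(pop, key=lambda ind: fitness(ind, received))
```

The population size and number of generations are exactly the tuning knobs the abstract mentions.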

  5. Fountain Codes with Multiplicatively Repeated Non-Binary LDPC Codes

    CERN Document Server

    Kasai, Kenta

    2010-01-01

    We study fountain codes transmitted over binary-input symmetric-output channels. For channels with small capacity, receivers need to collect many channel outputs to recover the information bits. Since each collected channel output yields a check node in the decoding Tanner graph, a channel with small capacity leads to large decoding complexity. In this paper, we introduce a novel fountain coding scheme with non-binary LDPC codes. The decoding complexity of the proposed fountain code does not depend on the channel. Numerical experiments show that the proposed codes exhibit better performance than conventional fountain codes, especially for a small number of information bits.

  6. Spatially-Coupled Binary MacKay-Neal Codes for Channels with Non-Binary Inputs and Affine Subspace Outputs

    CERN Document Server

    Kasai, Kenta; Sakaniwa, Kohichi

    2012-01-01

    We study LDPC codes for the channel with $2^m$-ary input $\\underline{x}\\in \\GF(2)^m$ and output $\\underline{y}=\\underline{x}+\\underline{z}\\in \\GF(2)^m$. The receiver knows a subspace $V\\subset \\GF(2)^m$ from which $\\underline{z}=\\underline{y}-\\underline{x}$ is uniformly chosen. Or equivalently, the receiver receives an affine subspace $\\underline{y}-V$ where $\\underline{x}$ lies. We consider a joint iterative decoder involving the channel detector and the LDPC decoder. The decoding system considered in this paper can be viewed as a simplified model of the joint iterative decoder over non-binary modulated signal inputs e.g., $2^m$-QAM. We evaluate the performance of binary spatially-coupled MacKay-Neal code by density evolution. EXIT-like function curve calculations reveal that iterative decoding threshold values are very close to the Shannon limit.

  7. Loneliness and Interpersonal Decoding Skills.

    Science.gov (United States)

    Zakahi, Walter R.; Goss, Blaine

    1995-01-01

    Finds that the romantic loneliness dimension of the Differential Loneliness Scale is related to decoding ability, and that there are moderate linear relationships among several of the dimensions of the Differential Loneliness Scale, the self-report of listening ability, and participants' view of their own decoding ability. (SR)

  8. Decoding suprathreshold stochastic resonance with optimal weights

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Liyan, E-mail: xuliyan@qdu.edu.cn [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Vladusich, Tony [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Duan, Fabing [Institute of Complexity Science, Qingdao University, Qingdao 266071 (China); Gunn, Lachlan J.; Abbott, Derek [Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia); McDonnell, Mark D. [Computational and Theoretical Neuroscience Laboratory, Institute for Telecommunications Research, School of Information Technology and Mathematical Sciences, University of South Australia, SA 5095 (Australia); Centre for Biomedical Engineering (CBME) and School of Electrical & Electronic Engineering, The University of Adelaide, Adelaide, SA 5005 (Australia)

    2015-10-09

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that the MSE distortion of weighting before summation is superior to unweighted summation of comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible.
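
    A simplified sketch of the least-squares weighting step (illustrative only; the paper's threshold-level distributions and noise models are more elaborate):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting, for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lsq_weights(responses, targets):
    """Least-squares fit of decoding weights (plus an intercept) that map the
    binary quantizer responses back to the analog input, mirroring the
    regression-on-simulated-data approach described in the abstract."""
    X = [list(r) + [1.0] for r in responses]   # append an intercept column
    n = len(X[0])
    A = [[sum(x[a] * x[c] for x in X) for c in range(n)] for a in range(n)]
    b = [sum(x[a] * t for x, t in zip(X, targets)) for a in range(n)]
    return solve(A, b)
```

By construction, the fitted weights can never do worse on the training data than any fixed (e.g. equal) weighting of the same responses, which is the comparison the abstract draws.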

  9. Sparsity-Aware Sphere Decoding: Algorithms and Complexity Analysis

    Science.gov (United States)

    Barik, Somsubhra; Vikalo, Haris

    2014-05-01

    Integer least-squares problems, concerned with solving a system of equations where the components of the unknown vector are integer-valued, arise in a wide range of applications. In many scenarios the unknown vector is sparse, i.e., a large fraction of its entries are zero. Examples include applications in wireless communications, digital fingerprinting, and array-comparative genomic hybridization systems. Sphere decoding, commonly used for solving integer least-squares problems, can utilize the knowledge about sparsity of the unknown vector to perform a computationally efficient search for the solution. In this paper, we formulate and analyze the sparsity-aware sphere decoding algorithm that imposes an $\\ell_0$-norm constraint on the admissible solution. Analytical expressions for the expected complexity of the algorithm for alphabets typical of sparse channel estimation and source allocation applications are derived and validated through extensive simulations. The results demonstrate superior performance and speed of the sparsity-aware sphere decoder compared to the conventional sparsity-unaware sphere decoding algorithm. Moreover, the variance of the complexity of the sparsity-aware sphere decoding algorithm for binary alphabets is derived. The search space of the proposed algorithm can be further reduced by imposing lower bounds on the value of the objective function. The algorithm is modified to allow for such a lower bounding technique and simulations illustrating the efficacy of the method are presented. Performance of the algorithm is demonstrated in an application to sparse channel estimation, where it is shown that the sparsity-aware sphere decoder performs close to theoretical lower limits.
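
    The core idea can be sketched for a binary alphabet as follows, assuming the channel matrix has already been reduced to upper-triangular form by a QR factorization (a stripped-down version of the kind of search the paper analyzes):

```python
def sphere_decode_sparse(R, y, radius, max_nonzero):
    """Depth-first sphere decoder over x in {0,1}^n minimizing ||y - R x||^2,
    where R is upper triangular. Branches whose partial squared distance
    exceeds the current best, or that use more than `max_nonzero` ones
    (the l0 constraint), are pruned."""
    n = len(y)
    best = {"x": None, "d2": radius ** 2}

    def search(level, x_tail, dist2, nonzero):
        if dist2 >= best["d2"] or nonzero > max_nonzero:
            return                      # prune: radius or sparsity violated
        if level < 0:
            best["x"], best["d2"] = list(x_tail), dist2
            return
        for sym in (0, 1):
            # residual of row `level` given symbols x_level .. x_{n-1}
            r = y[level] - R[level][level] * sym
            for j, xj in enumerate(x_tail):
                r -= R[level][level + 1 + j] * xj
            search(level - 1, [sym] + x_tail, dist2 + r * r, nonzero + sym)

    search(n - 1, [], 0.0, 0)
    return best["x"]                    # None if nothing lies inside the sphere
```

The `nonzero > max_nonzero` test is the sparsity-aware addition: it cuts off entire subtrees that a conventional sphere decoder would still explore.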

  10. The Formal Specifications for Protocols of Decoders

    Institute of Scientific and Technical Information of China (English)

    YUAN Meng-ting; WU Guo-qing; SHU Feng-di

    2004-01-01

    This paper presents a formal approach, FSPD (Formal Specifications for Protocols of Decoders), to specifying decoder communication protocols. Being axiomatic, FSPD is a precise language with which programmers can use a single suitable driver to handle various types of decoders. FSPD helps programmers achieve high adaptability and reusability of decoder-driver software.

  11. Practical Binary Adaptive Block Coder

    CERN Document Server

    Reznik, Yuriy A

    2007-01-01

    This paper describes the design of a low-complexity algorithm for adaptive encoding/decoding of binary sequences produced by memoryless sources. The algorithm implements universal block codes constructed for a set of contexts identified by the numbers of non-zero bits among the previous bits of a sequence. We derive a precise formula for the asymptotic redundancy of such codes, which refines a well-known earlier estimate by Krichevsky and Trofimov, and provide experimental verification of this result. In our experimental study we also compare our implementation with existing binary adaptive encoders, such as JBIG's Q-coder and MPEG AVC (ITU-T H.264)'s CABAC algorithms.
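
    For context, the classical Krichevsky-Trofimov (KT) sequential estimator whose redundancy estimate the paper refines can be sketched as follows; an adaptive arithmetic coder driven by these probabilities approaches this ideal code length:

```python
import math

def kt_codelength(bits):
    """Ideal code length (in bits) of a binary sequence under the
    Krichevsky-Trofimov sequential estimator
    P(next = 1 | n0 zeros, n1 ones so far) = (n1 + 1/2) / (n0 + n1 + 1)."""
    n0 = n1 = 0
    length = 0.0
    for b in bits:
        p1 = (n1 + 0.5) / (n0 + n1 + 1.0)
        length += -math.log2(p1 if b else 1.0 - p1)
        n1 += b
        n0 += 1 - b
    return length
```

For a memoryless source, this code length exceeds the empirical entropy by roughly (1/2) log2 n bits, which is the redundancy behavior the paper's formula sharpens for its block-code construction.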

  12. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  13. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    .e., to be prepared to initiate improvement. The study shows how the effectiveness of the improvement system depends on the congruent fit between the five elements as well as the bridging coherence between the improvement system and the work system. The bridging coherence depends on how improvements are activated...... approach often ends up with demanding intense employee focus to sustain improvement and engagement. Likewise, a single-minded employee development approach often ends up demanding rationalization to achieve the desired financial results. These ineffective approaches make organizations react like pendulums...... that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period...

  14. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  15. Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    CERN Document Server

    Ling, Cong; Luzzi, Laura; Stehle, Damien

    2011-01-01

    In lattice-coded multiple-input multiple-output (MIMO) systems, optimal decoding amounts to solving the closest vector problem (CVP). Embedding is a powerful technique for the approximate CVP, yet its remarkable performance is not well understood. In this paper, we analyze the embedding technique from a bounded distance decoding (BDD) viewpoint. We prove that the Lenstra, Lenstra and Lov\\'az (LLL) algorithm can achieve 1/(2{\\gamma}) -BDD for {\\gamma} \\approx O(2^(n/4)), yielding a polynomial-complexity decoding algorithm performing exponentially better than Babai's which achieves {\\gamma} = O(2^(n/2)). This substantially improves the existing result {\\gamma} = O(2^(n)) for embedding decoding. We also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT).

  16. Decoding OvTDM with sphere-decoding algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Overlapped time division multiplexing (OvTDM) is a new type of transmission scheme with high spectral efficiency and a low threshold signal-to-noise ratio (SNR). In this article, the structure of OvTDM is introduced and a complex-domain sphere-decoding algorithm is proposed for OvTDM. Simulations demonstrate that the proposed algorithm can achieve maximum likelihood (ML) decoding with lower complexity compared to traditional maximum likelihood sequence demodulation (MLSD) or the Viterbi algorithm (VA).

  17. High-speed architecture for the decoding of trellis-coded modulation

    Science.gov (United States)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher-level modulation (a non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi algorithm by improving the decoder architecture, or by reducing the algorithm itself. Designs employing new architectural techniques now exist; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic-gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
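
    For reference, the algorithm these architectures accelerate can be sketched in software; this is a textbook hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal), not the report's TCM design:

```python
def conv_encode(bits, K=3, polys=(0b111, 0b101)):
    """Rate-1/2 feedforward convolutional encoder (constraint length 3)."""
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state          # newest bit in the high position
        out += [bin(reg & p).count("1") & 1 for p in polys]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits, K=3, polys=(0b111, 0b101)):
    """Hard-decision Viterbi decoding: track the minimum-Hamming-distance
    path through the trellis, then backtrack the survivor pointers."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in state 0
    history = []
    for t in range(n_bits):
        rx = received[2 * t:2 * t + 2]
        new_metric = [INF] * n_states
        back = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                branch = [bin(reg & p).count("1") & 1 for p in polys]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(branch, rx))
                if m < new_metric[ns]:
                    new_metric[ns], back[ns] = m, (s, b)
        metric = new_metric
        history.append(back)
    s = min(range(n_states), key=lambda i: metric[i])   # best final state
    bits = []
    for back in reversed(history):
        s, b = back[s]
        bits.append(b)
    return bits[::-1]
```

The add-compare-select loop over states is exactly the part that the high-speed architectures discussed in the report parallelize in hardware.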

  18. Decode the Sodium Label Lingo

    Science.gov (United States)

    ... Decode the Sodium Label Lingo. Published January 24, 2013. Reading food labels can help you slash sodium. Here's how to decipher them. "Sodium free" or " ...

  19. Enhancing the Error Correction of Finite Alphabet Iterative Decoders via Adaptive Decimation

    CERN Document Server

    Planjery, Shiva Kumar; Declercq, David

    2012-01-01

    Finite alphabet iterative decoders (FAIDs) for LDPC codes were recently shown to be capable of surpassing the Belief Propagation (BP) decoder in the error floor region on the Binary Symmetric channel (BSC). More recently, the technique of decimation which involves fixing the values of certain bits during decoding, was proposed for FAIDs in order to make them more amenable to analysis while maintaining their good performance. In this paper, we show how decimation can be used adaptively to further enhance the guaranteed error correction capability of FAIDs that are already good on a given code. The new adaptive decimation scheme proposed has marginally added complexity but can significantly improve the slope of the error floor performance of a particular FAID. We describe the adaptive decimation scheme particularly for 7-level FAIDs which propagate only 3-bit messages and provide numerical results for column-weight three codes. Analysis suggests that the failures of the new decoders are linked to stopping sets ...

  20. A New Decoding Scheme for Errorless Codes for Overloaded CDMA with Active User Detection

    CERN Document Server

    Mousavi, Ali; Marvasti, Farokh

    2010-01-01

    Recently, a new class of binary codes for overloaded CDMA systems was proposed that not only enables errorless communication but is also suitable for detecting active users. These codes are called COWDA [1]. In [1], a Maximum Likelihood (ML) decoder is proposed for this class of codes. Although the proposed coding/decoding scheme shows impressive performance, the decoder can be improved. In this paper, by assuming more practical conditions for the traffic in the system, we suggest an algorithm that improves the performance of the decoder by several orders of magnitude (the Bit-Error-Rate (BER) is divided by a factor of 400 at some Eb/N0's). The algorithm assumes a Poisson distribution for the activation/deactivation times of the users.

  1. Low-Power Maximum a Posteriori (MAP Algorithm for WiMAX Convolutional Turbo Decoder

    Directory of Open Access Journals (Sweden)

    Chitralekha Ngangbam

    2013-05-01

    Full Text Available We propose a low-power, memory-reduced traceback MAP decoder for convolutional turbo codes (CTC), which otherwise require large data accesses and memory consumption, and verify its functionality with a simulation tool. The traceback maximum a posteriori (MAP) algorithm provides the best performance in terms of bit error rate (BER) and reduces the power consumption of the state metric cache (SMC) without losing correction performance. The computation and accessing of the different metrics reduce the size of the SMC and require no complicated reversion checker, path selection, or reversion flag cache. Radix-2*2 and radix-4 traceback structures provide a tradeoff between power consumption and operating frequency for double-binary (DB) MAP decoding. These two traceback structures achieve around 25% power reduction in the SMC, and around 12% power reduction in the DB MAP decoders, for the WiMAX standard.

  2. Propositional Dynamic Logic for Message-Passing Systems

    CERN Document Server

    Bollig, Benedikt; Meinecke, Ingmar

    2010-01-01

    We examine a bidirectional propositional dynamic logic (PDL) for finite and infinite message sequence charts (MSCs) extending LTL and TLC$^{-}$. By this kind of multi-modal logic we can express properties both in the entire future and in the past of an event. Path expressions strengthen the classical until operator of temporal logic. For every formula defining an MSC language, we construct a communicating finite-state machine (CFM) accepting the same language. The CFM obtained has size exponential in the size of the formula. This synthesis problem is solved in full generality, i.e., also for MSCs with unbounded channels. The model checking problem for CFMs and HMSCs turns out to be in PSPACE for existentially bounded MSCs. Finally, we show that, for PDL with intersection, the semantics of a formula cannot be captured by a CFM anymore.

  3. Increasing fault resiliency in a message-passing environment.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Kordenbrock, Todd (Hewlett-Packard Company); Brightwell, Ronald Brian

    2009-10-01

    Petaflops systems will have tens to hundreds of thousands of compute nodes which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle which will only fail, and cause an application restart, when both nodes in the bundle fail. The goal of this library is to learn whether this can be done entirely at the user level, what requirements this library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer library imposes a relatively modest performance penalty for applications, but that it greatly reduces the number of applications interrupts. This reduction in interrupts leads to huge savings in restart and rework time. For large-scale applications the savings compensate for the performance loss and the additional nodes required for redundant computations.
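
    The benefit of node bundles can be quantified with a back-of-the-envelope model (illustrative numbers and a simplified independence assumption, not the paper's measurements):

```python
def interrupt_prob(n_units, p_node, redundant=False):
    """Probability that an application is interrupted during one checkpoint
    interval. Without redundancy, any single node failure interrupts the run;
    with node bundles, an interrupt requires both nodes of some bundle to
    fail (failures assumed independent)."""
    p_unit = p_node ** 2 if redundant else p_node
    return 1.0 - (1.0 - p_unit) ** n_units
```

With 30,000 units and a per-node failure probability of 1e-5 per interval, the unreplicated interrupt probability is roughly 26% per interval, while bundling drives it below 1e-5, which is the mechanism behind the large restart and rework savings reported.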

  4. Protocol-Based Verification of Message-Passing Parallel Programs

    DEFF Research Database (Denmark)

    López-Acosta, Hugo-Andrés; Eduardo R. B. Marques, Eduardo R. B.; Martins, Francisco

    2015-01-01

    translated into a representation read by VCC, a software verifier for C. We successfully verified several MPI programs in a running time that is independent of the number of processes or other input parameters. This contrasts with alternative techniques, notably model checking and runtime verification...

  5. Optimizing spread dynamics on graphs by message passing

    Science.gov (United States)

    Altarelli, F.; Braunstein, A.; Dall'Asta, L.; Zecchina, R.

    2013-09-01

    Cascade processes are responsible for many important phenomena in natural and social sciences. Simple models of irreversible dynamics on graphs, in which nodes activate depending on the state of their neighbors, have been successfully applied to describe cascades in a large variety of contexts. Over the past decades, much effort has been devoted to understanding the typical behavior of the cascades arising from initial conditions extracted at random from some given ensemble. However, the problem of optimizing the trajectory of the system, i.e. of identifying appropriate initial conditions to maximize (or minimize) the final number of active nodes, is still considered to be practically intractable, with the only exception being models that satisfy a sort of diminishing returns property called submodularity. Submodular models can be approximately solved by means of greedy strategies, but by definition they lack cooperative characteristics which are fundamental in many real systems. Here we introduce an efficient algorithm based on statistical physics for the optimization of trajectories in cascade processes on graphs. We show that for a wide class of irreversible dynamics, even in the absence of submodularity, the spread optimization problem can be solved efficiently on large networks. Analytic and algorithmic results on random graphs are complemented by the solution of the spread maximization problem on a real-world network (the Epinions consumer reviews network).
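
    For contrast with the message-passing approach, the standard greedy baseline for spread maximization can be sketched on a deterministic threshold model (a toy instance of the irreversible dynamics discussed):

```python
def cascade(graph, seeds, threshold=0.5):
    """Deterministic threshold dynamics: an inactive node activates once at
    least a `threshold` fraction of its neighbors is active. Returns the
    final set of active nodes."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in graph.items():
            if node not in active and nbrs:
                if sum(n in active for n in nbrs) >= threshold * len(nbrs):
                    active.add(node)
                    changed = True
    return active

def greedy_seeds(graph, k):
    """Greedy spread maximization: repeatedly add the seed with the largest
    marginal gain in final active-set size -- the submodular baseline that
    message-passing methods aim to beat on non-submodular dynamics."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: len(cascade(graph, seeds | {n})))
        seeds.add(best)
    return seeds
```

On a graph made of two hubs with three leaves each, greedy correctly picks the two hubs; the point of the paper is that when the dynamics lack the diminishing-returns property, such greedy choices can be far from optimal while message passing still scales.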

  6. An Adaptive Message Passing MPSoC Framework

    Directory of Open Access Journals (Sweden)

    Gabriel Marchesan Almeida

    2009-01-01

    Full Text Available Multiprocessor Systems-on-Chips (MPSoCs) offer superior performance while maintaining flexibility and reusability thanks to software-oriented personalization. While most MPSoCs today are heterogeneous, the better to meet targeted application requirements, homogeneous MPSoCs may in the near future become a viable alternative, bringing other benefits such as run-time load balancing and task migration. The work presented in this paper relies on a homogeneous NoC-based MPSoC framework we developed for exploring scalable and adaptive on-line continuous mapping techniques. Each processor of this system is compact and runs a tiny preemptive operating system that monitors various metrics and can take remapping decisions through code migration techniques. This approach, which endows the architecture with decisional capabilities, permits refining application implementation at run-time according to various criteria. Experiments based on simple policies are presented on various applications to demonstrate the benefits of such an approach.

  7. Message passing with a limited number of DMA byte counters

    Energy Technology Data Exchange (ETDEWEB)

    Blocksome, Michael (Rochester, MN); Chen, Dong (Croton on Hudson, NY); Giampapa, Mark E. (Irvington, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kumar, Sameer (White Plains, NY); Parker, Jeffrey J. (Rochester, MN)

    2011-10-04

    A method for passing messages in a parallel computer system constructed as a plurality of compute nodes interconnected as a network, where each compute node includes a DMA engine but only a limited number of byte counters for tracking the number of bytes sent or received by the DMA engine; the byte counters may be used in shared-counter or exclusive-counter modes of operation. The method includes, using a rendezvous protocol, a source compute node deterministically sending a request-to-send (RTS) message with a single RTS descriptor, using an exclusive injection counter to track both the RTS message and the message data to be sent in association with it, to a destination compute node, such that the RTS descriptor indicates to the destination compute node that the message data will be adaptively routed to the destination node. Using one DMA FIFO at the source compute node, the RTS descriptors are maintained for rendezvous messages destined for the destination compute node to ensure proper message data ordering there. Using a reception counter at a DMA engine, the destination compute node tracks reception of the RTS and associated message data and sends a clear-to-send (CTS) message to the source node, in the rendezvous-protocol form of a remote get, to accept the RTS message and message data; the source compute node DMA engine then processes the remote get (CTS) to provide the message data to be sent.
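The counter accounting of this rendezvous exchange can be sketched in a few lines. Everything below is illustrative (the names, the 32-byte descriptor size, the single-transfer flow); it only mirrors the byte-counter bookkeeping described in the abstract, not the patented DMA engine.

```python
# Toy model of the rendezvous exchange: a source posts an RTS tracked by an
# exclusive injection counter, the destination counts expected payload bytes,
# and the CTS (remote get) triggers the source DMA to push the data.
# All names and sizes are illustrative assumptions.

class ByteCounter:
    """Counts outstanding bytes; a transfer is complete when it hits zero."""
    def __init__(self):
        self.outstanding = 0
    def add(self, n):
        self.outstanding += n
    def decrement(self, n):
        self.outstanding -= n
    @property
    def done(self):
        return self.outstanding == 0

def rendezvous_send(payload: bytes):
    RTS_BYTES = 32                      # assumed fixed RTS descriptor size
    inj = ByteCounter()                 # source: exclusive injection counter
    rec = ByteCounter()                 # destination: reception counter
    # Source: one counter tracks both the RTS and the payload.
    inj.add(RTS_BYTES + len(payload))
    # "Network": the RTS arrives at the destination.
    inj.decrement(RTS_BYTES)
    rec.add(len(payload))               # destination now expects the payload
    # Destination: CTS (remote get) tells the source DMA to push the data.
    inj.decrement(len(payload))         # source DMA completes the payload
    rec.decrement(len(payload))         # destination observes the bytes
    return inj.done and rec.done

assert rendezvous_send(b"x" * 1024)
```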

  8. Characterizing Computation-Communication Overlap in Message-Passing Systems

    Energy Technology Data Exchange (ETDEWEB)

    David E. Bernholdt; Jarek Nieplocha; P. Sadayappan; Aniruddha G. Shet; Vinod Tipparaju

    2008-01-31

    Effective overlap of computation and communication is a well understood technique for latency hiding and can yield significant performance gains for applications on high-end computers. In this report, we describe an instrumentation framework developed for message-passing systems to characterize the degree of overlap of communication with computation in the execution of parallel applications. The inability to obtain precise time-stamps for pertinent communication events is a significant problem, and is addressed by generation of minimum and maximum bounds on achieved overlap. The overlap measures can aid application developers and system designers in investigating scalability issues. The approach has been used to instrument two MPI implementations as well as the ARMCI system. The implementation resides entirely within the communication library and thus integrates well with existing approaches that operate outside the library. The utility of the framework is demonstrated by analyzing communication-computation overlap for micro-benchmarks and the NAS benchmarks, and the insights obtained are used to modify the NAS SP benchmark, resulting in improved overlap.
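The min/max-bound idea is simple interval arithmetic: if a communication interval's endpoints are only known to within some uncertainty, the achieved overlap with the computation intervals is bracketed by shrinking and growing the interval by that uncertainty. A hedged sketch (the function name and interface are invented for illustration, not the framework's API):

```python
def overlap_bounds(comm_start, comm_end, comp_intervals, eps):
    """Return (min, max) overlap of a communication interval, whose
    endpoints are only known to +/- eps, with a list of (start, end)
    computation intervals. Illustrative sketch only."""
    def overlap(a0, a1, intervals):
        # Sum of intersections of [a0, a1] with each computation interval.
        return sum(max(0.0, min(a1, e) - max(a0, s)) for s, e in intervals)
    lo = overlap(comm_start + eps, comm_end - eps, comp_intervals)
    hi = overlap(comm_start - eps, comm_end + eps, comp_intervals)
    return lo, hi

# Communication during [1, 5], computation during [0, 2] and [4, 6]:
assert overlap_bounds(1.0, 5.0, [(0.0, 2.0), (4.0, 6.0)], 0.0) == (2.0, 2.0)
```

With imprecise time-stamps (eps > 0) the two bounds separate, which is exactly the min/max reporting the abstract describes.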

  9. Parallel Ray Tracing Using the Message Passing Interface

    Science.gov (United States)

    2007-09-01

    efficiency of 97.9% and a normalized ray-tracing rate of 6.95 × 10^6 rays · surfaces/(s · processor) in a system with 22 planar surfaces, two paraboloid reflectors, and one hyperboloid refractor. The need for load-balancing software was obviated by the use of a ... specified for each type of optical surface (planar, spherical, paraboloid, hyperboloid, aspheric) and whether it applies for reflection or refraction. The

  10. Sharing Memory Robustly in Message-Passing Systems

    Science.gov (United States)

    1990-02-16

    ... a very restricted form of communication. Chor and Moscovici ([20]) present a hierarchy of resiliency for problems in shared-memory systems and ... 1985. [20] B. Chor and L. Moscovici, Solvability in Asynchronous Environments, Proc. 30th Symp. on Foundations of Computer Science, pp. 422-427, 1989.

  11. S-AMP: Approximate Message Passing for General Matrix Ensembles

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2014-01-01

    We propose a novel iterative estimation algorithm for linear observation models called S-AMP. The fixed points of S-AMP are the stationary points of the exact Gibbs free energy under a set of (first- and second-) moment consistency constraints in the large system limit. S-AMP extends...

  12. Track-stitching using graphical models and message passing

    CSIR Research Space (South Africa)

    Van der Merwe, LJ

    2013-07-01

    Full Text Available ... Multiple crossing targets, with fragmented tracks, are simulated. It is then shown that the algorithm successfully stitches track fragments together, even in the presence of false tracks caused by noisy observations....

  13. NetDecoder: a network biology platform that decodes context-specific biological networks and gene activities.

    Science.gov (United States)

    da Rocha, Edroaldo Lummertz; Ung, Choong Yong; McGehee, Cordelia D; Correia, Cristina; Li, Hu

    2016-06-02

    The sequential chain of interactions altering the binary state of a biomolecule represents the 'information flow' within a cellular network that determines phenotypic properties. Given the lack of computational tools to dissect context-dependent networks and gene activities, we developed NetDecoder, a network biology platform that models context-dependent information flows using pairwise phenotypic comparative analyses of protein-protein interactions. Using breast cancer, dyslipidemia and Alzheimer's disease as case studies, we demonstrate that NetDecoder dissects subnetworks to identify key players significantly impacting cell behaviour specific to a given disease context. We further show that genes residing in disease-specific subnetworks are enriched in disease-related signalling pathways and information flow profiles, which drive the resulting disease phenotypes. We also devise a novel scoring scheme to quantify key genes: network routers, which influence many genes; key targets, which are influenced by many genes; and high-impact genes, which experience a significant change in regulation. We show the robustness of our results against parameter changes. Our network biology platform includes freely available source code (http://www.NetDecoder.org) for researchers to explore genome-wide context-dependent information flow profiles and key genes, given a set of genes of particular interest and transcriptome data. More importantly, NetDecoder will enable researchers to uncover context-dependent drug targets.

  14. Pipelined Viterbi Decoder Using FPGA

    Directory of Open Access Journals (Sweden)

    Nayel Al-Zubi

    2013-02-01

    Full Text Available Convolutional encoding is used in almost all digital communication systems to improve BER (Bit Error Rate) performance, and most applications demand a high throughput rate. The Viterbi algorithm is the standard solution for the decoding process, but the nonlinear, feedback nature of the Viterbi decoder makes its high-speed implementation difficult. One promising approach to achieving high throughput in the Viterbi decoder is pipelining. This work applies a carry-save technique, whose advantage is that the critical path in the add-compare-select (ACS) feedback loop runs in one direction, eliminating carry ripple in the "Add" part of the ACS unit. Simulation and implementation show how this technique improves the throughput of the Viterbi decoder. The design complexities of the bit-pipelined architecture are evaluated and demonstrated using Verilog HDL simulation, and a general software simulator of a Viterbi decoder was developed. Our research is concerned with implementing Viterbi decoders on Field Programmable Gate Arrays (FPGAs). Although FPGAs are generally slower than custom integrated circuits, they can be configured in the lab in a few hours, whereas fabrication takes months. The design was implemented in Verilog HDL and synthesized for Xilinx FPGAs.
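As background to the ACS recursion being pipelined here, a plain hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 code with generators (7,5) octal fits in a few lines. This is a generic textbook sketch, not the paper's carry-save hardware design:

```python
def conv_encode(bits):
    """Rate-1/2, constraint length 3 encoder, generators (7,5) octal."""
    s1 = s0 = 0                      # the two previous input bits
    out = []
    for b in bits + [0, 0]:          # two tail bits flush the trellis
        out += [b ^ s1 ^ s0, b ^ s0]
        s1, s0 = b, s1
    return out

def viterbi_decode(symbols):
    """Hard-decision Viterbi decoding of the (7,5) code above."""
    INF = float("inf")
    metric = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {s: [] for s in metric}
    for i in range(0, len(symbols), 2):
        r = symbols[i:i + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for (s1, s0), m in metric.items():
            if m == INF:
                continue
            for b in (0, 1):         # add-compare-select over both branches
                exp = [b ^ s1 ^ s0, b ^ s0]
                cost = m + (exp[0] != r[0]) + (exp[1] != r[1])
                ns = (b, s1)
                if cost < new_metric[ns]:
                    new_metric[ns] = cost
                    new_paths[ns] = paths[(s1, s0)] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best][:-2]          # drop the two tail bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                        # inject a single channel bit error
assert viterbi_decode(coded) == msg  # free distance 5 corrects it
```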

  15. Soft decoding a self-dual (48, 24; 12) code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding algorithm using these properties and approximating maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code, which promises a 7.7-dB theoretical coding gain under maximum likelihood decoding.

  16. Two-Bit Bit Flipping Decoding of LDPC Codes

    CERN Document Server

    Nguyen, Dung Viet; Marcellin, Michael W

    2011-01-01

    In this paper, we propose a new class of bit flipping algorithms for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC). Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit increases the guaranteed error correction capability by a factor of at least 2. An additional bit can also be employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis of the proposed algorithms is described. These algorithms outperform the Gallager A/B algorithm and the min-sum algorithm at much lower complexity. Concatenation of two-bit bit flipping algorithms shows a potential to approach the performance of belief propagation (BP) decoding in the error floor region, also at lower complexity.
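For context, the one-bit baseline that these two-bit algorithms refine is the classic parallel bit-flipping rule: flip every variable node all of whose checks fail. A minimal sketch on a toy parity-check matrix (chosen so that any single BSC error is corrected; it is not a matrix from the paper):

```python
# Toy parity-check matrix: 6 variable nodes of degree 2, 4 check nodes.
# Each column is a distinct pair of checks, so a single error is always
# the unique variable with all of its checks unsatisfied.
H = [
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def bit_flip_decode(word, max_iters=10):
    """Parallel bit flipping over the BSC on the toy matrix H above."""
    word = list(word)
    n, m = len(word), len(H)
    for _ in range(max_iters):
        unsat = [sum(H[c][v] & word[v] for v in range(n)) % 2
                 for c in range(m)]
        if not any(unsat):
            return word                          # all checks satisfied
        for v in range(n):
            deg = sum(H[c][v] for c in range(m))
            bad = sum(H[c][v] & unsat[c] for c in range(m))
            if bad == deg:                       # every check of v fails
                word[v] ^= 1
    return word

received = [0, 0, 0, 0, 0, 0]
received[3] ^= 1                                 # one BSC bit error
assert bit_flip_decode(received) == [0, 0, 0, 0, 0, 0]
```

The two-bit variants in the paper keep an extra "strength" state per variable instead of flipping immediately, which is what buys the larger guaranteed correction radius.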

  17. On encoding symbol degrees of array BP-XOR codes

    OpenAIRE

    Paterson, Maura B.; Stinson, D.R.; Wang, Y.

    2016-01-01

    Low density parity check (LDPC) codes, LT codes and digital fountain techniques have received significant attention from both academics and industry in the past few years. By employing the underlying ideas of efficient Belief Propagation (BP) decoding process (also called iterative message passing decoding process) on binary erasure channels (BEC) in LDPC codes, Wang has recently introduced the concept of array BP-XOR codes and showed the necessary and sufficient conditions for MDS [k + 2,k] ...

  18. Simple Low-Rate Non-Binary LDPC Coding for Relay Channels

    CERN Document Server

    Suthisopapan, Puripong; Meesomboon, Anupap; Imtawil, Virasit; Sakaniwa, Kohichi

    2011-01-01

    Binary LDPC coded relay systems have been well studied previously under the assumption of infinite codeword length. In this paper, we deal with non-binary LDPC codes, which can outperform their binary counterparts especially at practical codeword lengths. We utilize non-binary LDPC codes and a recently invented non-binary coding technique known as multiplicative repetition to design a low-rate coding strategy for the decode-and-forward half-duplex relay channel. The proposed strategy is simple in that the destination and the relay can decode with almost the same computational complexity by sharing the same decoder structure. Numerical experiments show that the performance of non-binary LDPC coded relay systems surpasses the capacity of direct transmission and approaches to within less than 1.5 dB of the achievable rate of the relay channel.

  19. Orientation decoding: Sense in spirals?

    Science.gov (United States)

    Clifford, Colin W G; Mannion, Damien J

    2015-04-15

    The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias-the radial bias toward local orientations that are collinear with the centre of gaze-and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense.

  20. Decoding Dyslexia, a Common Learning Disability

    Science.gov (United States)

    Feature: Decoding Dyslexia, a Common Learning Disability. Past Issues / Winter 2016 ... Articles: In Their Own Words: Dealing with Dyslexia / Decoding Dyslexia, a Common Learning Disability / What is ...

  1. Decoding, Semantic Processing, and Reading Comprehension Skill

    Science.gov (United States)

    Golinkoff, Roberta Michnick; Rosinski, Richard R.

    1976-01-01

    A set of decoding tests and picture-word interference tasks was administered to third and fifth graders to explore the relationship between single-word decoding, single-word semantic processing, and text comprehension skill. (BRT)

  2. Modular VLSI Reed-Solomon Decoder

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie

    1991-01-01

    Proposed Reed-Solomon decoder contains multiple very-large-scale integrated (VLSI) circuit chips of the same type. Each chip contains sets of logic cells and subcells performing functions from all stages of the decoding process. The full decoder is assembled by concatenating chips, with selective utilization of cells in particular chips, reducing development cost by a factor of 5. In addition, the decoder is programmable in the field and can be switched between 8-bit and 10-bit symbol sizes.

  4. Interference Decoding for Deterministic Channels

    CERN Document Server

    Bandemer, Bernd

    2010-01-01

    An inner bound to the capacity region of a class of three user pair deterministic interference channels is presented. The key idea is to simultaneously decode the combined interference signal and the intended message at each receiver. It is shown that this interference decoding inner bound is strictly larger than the inner bound obtained by treating interference as noise, which includes interference alignment for deterministic channels. The gain comes from judicious analysis of the number of combined interference sequences in different regimes of input distributions and message rates.

  5. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGA's. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  6. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more...

  7. TURBO DECODER USING LOCAL SUBSIDIARY MAXIMUM LIKELIHOOD DECODING IN PRIOR ESTIMATION OF THE EXTRINSIC INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Yang Fengfan

    2004-01-01

    A new technique for turbo decoders is proposed, using local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that the iterative decoder with the new technique outperforms the decoder with the traditional Gaussian approach to the extrinsic information under the same conditions.

  8. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    Science.gov (United States)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision and reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. The approximation of the source distribution leads to the introduction of redundancy that can be exploited for robust decoding in the presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and, at the same time, the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performance compared to Huffman and optimal arithmetic codes.

  9. Behavioral approach to list decoding

    NARCIS (Netherlands)

    Polderman, Jan Willem; Kuijper, Margreta

    2002-01-01

    List decoding may be translated into a bivariate interpolation problem. The interpolation problem is to find a bivariate polynomial of minimal weighted degree that interpolates a given set of pairs taken from a finite field. We present a behavioral approach to this interpolation problem. With the da

  10. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

    Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during the development of intention and action. Several brain-computer interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  11. Decoding the TV Remote Control.

    Science.gov (United States)

    O'Connell, James

    2000-01-01

    Describes how to observe the pulse structure of the infrared signals from the light-emitting diode in a TV remote control. This exercise in decoding infrared digital signals provides an opportunity to discuss semiconductors, photonics technology, cryptology, and the physics of how things work. (WRM)

  12. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both...

  13. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm to decoding a single CR code using the idea of "Power" decoding, suggested for Reed-Solomon codes. A combination of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes.
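As background, a Chinese Remainder code encodes an integer x as its residues modulo pairwise-coprime moduli m_1 < ... < m_n; any k intact residues recover x whenever x < m_1 * ... * m_k. The sketch below shows only this erasure-style recovery via the CRT (the paper's lattice-based algorithm for correcting errors is far more involved); the moduli are illustrative:

```python
from math import prod

# Illustrative pairwise-coprime moduli; any 3 residues suffice for
# messages below 11 * 13 * 17 = 2431.
MODULI = [11, 13, 17, 19, 23]

def cr_encode(x):
    return [x % m for m in MODULI]

def cr_decode(residues):
    """residues: {position: value} for the surviving positions.
    Standard CRT reconstruction (erasure recovery only, no errors)."""
    ms = [MODULI[i] for i in residues]
    M = prod(ms)
    x = 0
    for i, r in residues.items():
        Mi = M // MODULI[i]
        # pow(Mi, -1, m) is the modular inverse (Python 3.8+).
        x += r * Mi * pow(Mi, -1, MODULI[i])
    return x % M

x = 1234
code = cr_encode(x)
# Positions 1 and 3 erased; the remaining three residues recover x.
assert cr_decode({0: code[0], 2: code[2], 4: code[4]}) == x
```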

  14. Evaluate the Word Error Rate of Binary Block Codes with Square Radius Probability Density Function

    CERN Document Server

    Chen, Xiaogang; Gu, Jian; Yang, Hongkui

    2007-01-01

    The word error rate (WER) of soft-decision-decoded binary block codes rarely has a closed form. Bounding techniques are widely used to evaluate the performance of maximum-likelihood (ML) decoding algorithms, but the existing bounds are not tight enough, especially at low signal-to-noise ratios, and become looser when a suboptimum decoding algorithm is used. This paper proposes a new concept, the square radius probability density function (SR-PDF) of the decision region, to evaluate the WER. Based on the SR-PDF, the WER of binary block codes can be calculated precisely for ML and suboptimum decoders. Furthermore, for a long binary block code, the SR-PDF can be approximated by a Gamma distribution with only two parameters that can be measured easily. Using this property, two closed-form approximate expressions are proposed that are very close to the simulated WER of the codes of interest.
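A two-parameter Gamma approximation of this kind can be fitted by elementary moment matching: shape k = mean^2/var and scale theta = var/mean. This is a generic method-of-moments sketch, not necessarily the estimator used in the paper:

```python
def fit_gamma(samples):
    """Method-of-moments Gamma fit: returns (shape k, scale theta).
    Generic sketch; the paper's measurement procedure may differ."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean * mean / var, var / mean

# Sanity check: by construction k * theta equals the sample mean.
sq_radii = [0.8, 1.1, 0.9, 1.4, 1.2, 0.6, 1.0, 1.0]   # toy measurements
k, theta = fit_gamma(sq_radii)
assert abs(k * theta - 1.0) < 1e-9                     # sample mean is 1.0
```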

  15. Reducing the memory for iteration-exchanged information and border future metrics in the HomePlug AV turbo decoder implementation

    OpenAIRE

    Masera, Guido; Martina, Maurizio

    2012-01-01

    HomePlug AV is the most successful standard for in home power line communications. To combat non-ideality of the power line channel it includes a double binary turbo forward error correcting scheme. Unfortunately, it is known that the memory required by double binary turbo decoders for iteration-exchanged information is roughly three times the memory required for binary turbo codes. Moreover, high throughput implementations based on border state metric inheritance, require additional memories...

  16. Analysis of Quasi-Cyclic LDPC codes under ML decoding over the erasure channel

    CERN Document Server

    Cunche, Mathieu; Roca, Vincent

    2010-01-01

    In this paper, we show that Quasi-Cyclic LDPC codes can efficiently accommodate hybrid iterative/ML decoding over the binary erasure channel. We demonstrate that the quasi-cyclic structure of the parity-check matrix can be advantageously used to significantly reduce the complexity of ML decoding. This is achieved by a simple row/column permutation that transforms a QC matrix into a pseudo-band form. Based on this approach, we propose a class of QC-LDPC codes with almost ideal error correction performance under ML decoding, while the required number of row/symbol operations scales as $k\sqrt{k}$, where $k$ is the number of source symbols.
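Over the BEC, ML decoding amounts to solving the parity checks for the erased positions by Gaussian elimination over GF(2); the paper's point is that the QC permutation makes this elimination run on a banded matrix. A dense, unoptimized sketch (illustrative H, no QC permutation):

```python
def bec_ml_decode(H, word):
    """word: list of 0/1 values with None for erasures.
    Solves the checks for the erased bits over GF(2); returns the
    completed word, or None on ML failure (rank deficiency)."""
    erased = [i for i, v in enumerate(word) if v is None]
    # Build the augmented system A * e = b over the erased positions.
    rows = []
    for check in H:
        b = sum(check[i] & word[i] for i in range(len(word))
                if word[i] is not None) % 2
        rows.append([check[i] for i in erased] + [b])
    # Gauss-Jordan elimination over GF(2).
    piv = 0
    for col in range(len(erased)):
        for r in range(piv, len(rows)):
            if rows[r][col]:
                rows[piv], rows[r] = rows[r], rows[piv]
                break
        else:
            continue                     # no pivot in this column
        for r in range(len(rows)):
            if r != piv and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[piv])]
        piv += 1
    if piv < len(erased):
        return None                      # erased bits not determined
    sol = {}
    for r in range(piv):                 # each pivot row fixes one bit
        col = next(c for c in range(len(erased)) if rows[r][c])
        sol[col] = rows[r][-1]
    out = list(word)
    for j, i in enumerate(erased):
        out[i] = sol[j]
    return out

# Toy 4x6 parity-check matrix; [1,1,0,1,0,0] is a codeword of it.
H = [[1, 1, 1, 0, 0, 0],
     [1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]
assert bec_ml_decode(H, [None, 1, 0, None, 0, 0]) == [1, 1, 0, 1, 0, 0]
```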

  17. Polytope of Correct (Linear Programming) Decoding and Low-Weight Pseudo-Codewords

    CERN Document Server

    Chertkov, Michael

    2011-01-01

    We analyze Linear Programming (LP) decoding of graphical binary codes operating over soft-output, symmetric and log-concave channels. We show that the error surface, separating the domain of correct decoding from the domain of erroneous decoding, is a polytope. We formulate the problem of finding the lowest-weight pseudo-codeword as a non-convex optimization (maximization of a convex function) over a polytope, with the cost function defined by the channel and the polytope defined by the structure of the code. This formulation suggests new heuristics for finding the lowest-weight pseudo-codewords, which are advantageous compared with those discussed previously, in particular because they are provably monotonic and discover pseudo-codewords with the lowest weights more frequently. The algorithm's performance is tested on the example of the Tanner [155, 64, 20] code over the Additive White Gaussian Noise (AWGN) channel.

  18. Coupled Receiver/Decoders for Low-Rate Turbo Codes

    Science.gov (United States)

    Hamkins, Jon; Divsalar, Dariush

    2005-01-01

    Coupled receiver/decoders have been proposed for receiving weak single-channel phase-modulated radio signals bearing low-rate-turbo-coded binary data. Originally intended for use in receiving telemetry signals from distant spacecraft, the proposed receiver/decoders may also provide enhanced reception in mobile radiotelephone systems. A radio signal of the type to which the proposal applies comprises a residual carrier signal and a phase-modulated data signal. The residual carrier signal is needed as a phase reference for demodulation as a prerequisite to decoding. Low-rate turbo codes afford high coding gains and thereby enable the extraction of data from arriving radio signals that might otherwise be too weak. In the case of a conventional receiver, if the signal-to-noise ratio (specifically, the symbol energy to one-sided noise power spectral density) of the arriving signal is below approximately 0 dB, then there may not be enough energy per symbol to enable the receiver to recover the carrier phase properly. One could solve the problem at the transmitter by diverting some power from the data signal to the residual carrier. A better solution, a coupled receiver/decoder according to the proposal, could reduce the needed amount of residual carrier power. In all that follows, it is to be understood that all processing would be digital and the incoming signals to be processed would be, more precisely, outputs of analog-to-digital converters that preprocess the residual carrier and data signals at a rate of multiple samples per symbol. The upper part of the figure depicts a conventional receiving system, in which the receiver and decoder are uncoupled; this is also called a non-data-aided system because output data from the decoder are not used in the receiver to aid in recovering the carrier phase. The receiver tracks the carrier phase from the residual carrier signal and uses the carrier phase to wipe phase noise off the data signal. The receiver typically includes a phase-locked loop

  19. Multichannel Error Correction Code Decoder

    Science.gov (United States)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerance issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose-Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  20. Fingerprinting with Minimum Distance Decoding

    CERN Document Server

    Lin, Shih-Chun; Gamal, Hesham El

    2007-01-01

    This work adopts an information theoretic framework for the design of collusion-resistant coding/decoding schemes for digital fingerprinting. More specifically, the minimum distance decision rule is used to identify 1 out of t pirates. Achievable rates, under this detection rule, are characterized in two distinct scenarios. First, we consider the averaging attack where a random coding argument is used to show that the rate 1/2 is achievable with t=2 pirates. Our study is then extended to the general case of arbitrary $t$ highlighting the underlying complexity-performance tradeoff. Overall, these results establish the significant performance gains offered by minimum distance decoding as compared to other approaches based on orthogonal codes and correlation detectors. In the second scenario, we characterize the achievable rates, with minimum distance decoding, under any collusion attack that satisfies the marking assumption. For t=2 pirates, we show that the rate $1-H(0.25)\\approx 0.188$ is achievable using an ...

  1. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

    Full Text Available A flexible software LDPC decoder that exploits data parallelism for simultaneous multi-codeword decoding on a mobile device is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and the private memory on the GPU, and by properly adjusting the code capacity each time, our implementation on a mobile phone achieves throughputs above 100 Mbps with a decoding delay below 1.6 milliseconds, making high-speed communication such as video calling possible. To realize efficient software LDPC decoding on a mobile device, the LDPC decoding function of the communication baseband chip can be replaced, saving cost and making it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  2. Towards joint decoding of Tardos fingerprinting codes

    CERN Document Server

    Meerwald, Peter

    2011-01-01

    The class of joint decoders for probabilistic fingerprinting codes is of utmost importance in theoretical papers for establishing the concept of fingerprint capacity. However, no implementation supporting a large user base is known to date. This paper presents an iterative decoder which is, as far as we are aware, the first practical attempt at joint decoding. The discriminative power of the scores benefits, on one hand, from the side information of previously accused users and, on the other hand, from recently introduced universal linear decoders for compound channels. Neither the code construction nor the decoder makes precise assumptions about the collusion (size or strategy). The extension to incorporate soft outputs from the watermarking layer is straightforward. Extensive experimental work demonstrates the very good performance and offers a clear comparison with previous state-of-the-art decoders.
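
    The single-user scoring step that such joint decoders build upon can be illustrated with the classic symmetric Tardos score (Škorić et al.); this is a hedged sketch of that standard accusation score, not the paper's iterative joint decoder, and the user names are made up.

    ```python
    import math

    def tardos_scores(y, users, p):
        """Symmetric Tardos accusation scores.

        y:     pirated copy, list of bits
        users: dict name -> that user's fingerprint (list of bits)
        p:     per-position bias probabilities in (0, 1)
        A user whose fingerprint correlates with the pirated copy
        accumulates a large positive score.
        """
        scores = {}
        for name, x in users.items():
            s = 0.0
            for yi, xi, pi in zip(y, x, p):
                g1 = math.sqrt((1 - pi) / pi)   # score term when x_i = 1
                g0 = -math.sqrt(pi / (1 - pi))  # score term when x_i = 0
                u = g1 if xi == 1 else g0
                s += u if yi == 1 else -u       # flip the sign when y_i = 0
            scores[name] = s
        return scores
    ```

    A joint decoder would score tuples of users instead of individuals, which is exactly the complexity barrier the paper addresses.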

  3. Reduced-Latency SC Polar Decoder Architectures

    CERN Document Server

    Zhang, Chuan; Parhi, Keshab K

    2011-01-01

    Polar codes have become one of the most favorable capacity-achieving error correction codes (ECC), thanks in part to their simple encoding method. However, among the very few prior successive cancellation (SC) polar decoder designs, the long code lengths required make the decoding latency high. In this paper, the conventional decoding algorithm is transformed with look-ahead techniques, which reduces the decoding latency by 50%. With pipelining and parallel processing schemes, a parallel SC polar decoder is proposed. A sub-structure sharing approach is employed to design the merged processing element (PE). Moreover, inspired by real-FFT architectures, this paper presents a novel input generating circuit (ICG) block that can generate additional input signals for the merged PEs on the fly. Gate-level analysis demonstrates that the proposed design achieves 50% lower decoding latency and twice the throughput of the conventional one with similar hardware cost.

  4. Analysis of Error Floors of Non-Binary LDPC Codes over MBIOS Channel

    CERN Document Server

    Nozaki, Takayuki; Sakaniwa, Kohichi

    2011-01-01

    In this paper, we investigate the error floors of non-binary low-density parity-check (LDPC) codes transmitted over memoryless binary-input output-symmetric (MBIOS) channels. We provide a necessary and sufficient condition for successful decoding of zigzag cycle codes over the MBIOS channel by the belief propagation decoder. Using this condition, we consider an expurgated ensemble of non-binary LDPC codes, which exhibits lower error floors. Finally, we derive lower bounds on the error floors for the expurgated LDPC code ensembles over the MBIOS channel.

  5. Just-in-time adaptive decoder engine: a universal video decoder based on MPEG RVC

    OpenAIRE

    Gorin, Jérôme; Yviquel, Hervé; Prêteux, Françoise; Raulet, Mickaël

    2011-01-01

    In this paper, we introduce the Just-In-Time Adaptive Decoder Engine (Jade) project, which is shipped as part of the Open RVC-CAL Compiler (Orcc) project. Orcc provides a set of open-source software tools for managing decoders standardized within MPEG by the Reconfigurable Video Coding (RVC) experts. In this framework, Jade acts as a Virtual Machine for any decoder description that uses the MPEG RVC paradigm. Jade dynamically generates a native decoder representation s...

  6. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.

  7. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm termed the instanton search algorithm (ISA) which, given a random input, generates a set of flips called the BSC-instanton, and we prove that: (a) the LP decoder fails for any set of flips whose support vector includes an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of BSC transition probabilities.
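
    The search loop described above — draw random failing inputs, then shrink the flip set while the decoder still fails — can be sketched as follows. The `decoder_fails` callable is a stand-in for running the LP decoder and reporting failure; the real ISA also uses the resulting pseudo-codeword to guide the reduction, which this simplified skeleton omits.

    ```python
    import random

    def instanton_search(n, decoder_fails, rng=None):
        """Simplified skeleton of the Instanton Search Algorithm (ISA).

        `decoder_fails(flips)` stands in for running the LP decoder on an
        input with the given set of flipped bit positions and reporting
        failure.  Starting from a random failing input, flips are removed
        one at a time as long as the decoder still fails, leaving a
        minimal failing set -- an instanton.
        """
        rng = rng or random.Random()
        # draw random inputs until the decoder fails
        while True:
            flips = {i for i in range(n) if rng.random() < 0.5}
            if decoder_fails(flips):
                break
        # greedily shrink the flip set while failure persists
        changed = True
        while changed:
            changed = False
            for bit in sorted(flips):
                if bit in flips and decoder_fails(flips - {bit}):
                    flips = flips - {bit}
                    changed = True
        return flips
    ```

    With a toy failure predicate the minimality property is easy to check: the returned set fails, but every proper subset one bit smaller succeeds.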

  8. Provably Efficient Instanton Search Algorithm for LP decoding of LDPC codes over the BSC

    CERN Document Server

    Chilappagari, Shashi Kiran

    2008-01-01

    We consider Linear Programming (LP) decoding of a Low-Density Parity-Check (LDPC) code operating over the Binary Symmetric Channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. Following the approach of [1], we design an efficient algorithm termed the Instanton Search Algorithm (ISA) which, given a random input, generates a set of flips, called the BSC-instanton, such that the LP decoder decodes the instanton into a pseudo-codeword distinct from the all-zero codeword, while any reduction of the instanton leads to the all-zero codeword. We prove that (a) the LP decoder fails for any set of flips whose support vector includes an instanton; (b) for any original input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of actual flips in the input. Repeated a sufficient number of times, the ISA yields the number of unique instantons of different sizes. We illustrate the performance of the algorithm on the [155,64,20] Tanner code and sho...

  9. Four-Dimensional Coded Modulation with Bit-wise Decoders for Future Optical Communications

    CERN Document Server

    Alvarado, Alex

    2014-01-01

    Coded modulation (CM) is the combination of forward error correction (FEC) and multilevel constellations. Coherent optical communication systems result in a four-dimensional (4D) signal space, which naturally leads to 4D-CM transceivers. A practically attractive design paradigm is to use a bit-wise decoder, where the detection process is (suboptimally) separated into two steps: soft-decision demapping followed by binary decoding. In this paper, bit-wise decoders are studied from an information-theoretic viewpoint. 4D constellations with up to 4096 constellation points are considered. Metrics to predict the post-FEC bit-error rate (BER) of bit-wise decoders are analyzed. The mutual information is shown to fail at predicting the post-FEC BER of bit-wise decoders and the so-called generalized mutual information is shown to be a much more robust metric. It is also shown that constellations that transmit and receive information in each polarization and quadrature independently (e.g., PM-QPSK, PM-16QAM, and PM-64QA...

  10. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    We study long linear codes constructed from toric varieties over finite fields, their multiplicative structure, and their decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes, and aligns with decoding by error-correcting pairs. We have used the multiplicative structure on toric codes to construct linear secret sharing schemes with strong multiplication via Massey's construction, generalizing the Shamir linear secret sharing schemes constructed from Reed-Solomon codes. We have also constructed quantum error-correcting codes from toric surfaces by the Calderbank-Shor-Steane method.

  11. Improved decoding for a concatenated coding system

    OpenAIRE

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures, based on repeated decoding trials and an exchange of information between the two decoders and the deinterleaver, are proposed. In the first one, where the improvement is 0.3-0.4 dB, onl...

  12. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José;

    2013-01-01

    This paper studies the application of the beyond bound decoding method for high speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  13. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  14. Interacting binaries

    CERN Document Server

    Shore, S N; van den Heuvel, EPJ

    1994-01-01

    This volume contains lecture notes presented at the 22nd Advanced Course of the Swiss Society for Astrophysics and Astronomy. The contributors deal with symbiotic stars, cataclysmic variables, massive binaries and X-ray binaries, in an attempt to provide a better understanding of stellar evolution.

  15. Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Schemes

    OpenAIRE

    Wang, Yongge

    2017-01-01

    This paper presents a survey of generalized Reed-Solomon codes and various decoding algorithms: the Berlekamp-Massey, Berlekamp-Welch, Euclidean, and discrete Fourier decoding algorithms, together with Chien's search algorithm and Forney's algorithm.

  16. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a delay metric for IDNC allows a more equitable distribution of the delays among the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive expressions for the probability distributions of the maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the maximum decoding delay experienced when applying the policy that minimizes the sum decoding delay and our policy that reduces the maximum decoding delay. Simulation results show that our policy achieves a good balance among all the delay aspects in all situations, and even outperforms the sum-decoding-delay policy at minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
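
    A greedy maximum-weight-clique heuristic of the kind described above can be sketched in a few lines: repeatedly add the heaviest vertex that is adjacent to everything already chosen. The vertex names, weights, and graph below are made-up illustrations; in an IDNC graph a vertex is a (receiver, packet) request and an edge means the two requests can be served by one coded transmission.

    ```python
    def greedy_max_weight_clique(weights, adj):
        """Greedy heuristic for maximum-weight-clique packet selection.

        weights: dict vertex -> weight (e.g. expected delay reduction)
        adj:     dict vertex -> set of neighbours in the conflict-free
                 (IDNC) graph
        """
        clique = []
        candidates = set(weights)
        while candidates:
            v = max(candidates, key=lambda u: weights[u])  # heaviest vertex
            clique.append(v)
            candidates &= adj[v]  # keep only vertices adjacent to all chosen
        return clique
    ```

    The greedy choice is what makes the selection tractable despite the underlying problem being NP-hard; the paper's contribution lies in how the weights are derived from the decoding-delay distributions.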

  17. Decoder for Nonbinary CWS Quantum Codes

    CERN Document Server

    Melo, Nolmar; Portugal, Renato

    2012-01-01

    We present a decoder for nonbinary CWS quantum codes using the structure of union codes. The decoder runs in two steps: first we use a union of stabilizer codes to detect a sequence of errors, and second we build a new code, called the union code, that allows the errors to be corrected.

  18. Overview of Decoding across the Disciplines

    Science.gov (United States)

    Boman, Jennifer; Currie, Genevieve; MacDonald, Ron; Miller-Young, Janice; Yeo, Michelle; Zettel, Stephanie

    2017-01-01

    In this chapter we describe the Decoding the Disciplines Faculty Learning Community at Mount Royal University and how Decoding has been used in new and multidisciplinary ways in the various teaching, curriculum, and research projects that are presented in detail in subsequent chapters.

  19. Design Space of Flexible Multigigabit LDPC Decoders

    Directory of Open Access Journals (Sweden)

    Philipp Schläfer

    2012-01-01

    Full Text Available Multigigabit LDPC decoders are demanded by standards like IEEE 802.15.3c and IEEE 802.11ad. To achieve such high throughput while supporting the needed flexibility, sophisticated architectures are mandatory. This paper comprehensively presents the design space for flexible multigigabit LDPC applications for the first time. The influence of various design parameters on the hardware is investigated in depth. Two new decoder architectures in a 65 nm CMOS technology are presented to further explore the design space. In the past, memory domination was the bottleneck for throughputs of up to 1 Gbit/s. Our systematic investigation of column- versus row-based partially parallel decoders shows that this is no longer a bottleneck for multigigabit architectures. The evolutionary progress in flexible multigigabit LDPC decoder design is highlighted in an extensive comparison of state-of-the-art decoders.

  20. CHANNEL ESTIMATION FOR ITERATIVE DECODING OVER FADING CHANNELS

    Institute of Scientific and Technical Information of China (English)

    K. H. Sayhood; Wu Lenan

    2002-01-01

    A method of coherent detection and channel estimation for punctured convolutionally coded binary Quadrature Amplitude Modulation (QAM) signals transmitted over a frequency-flat Rayleigh fading channel, as used in digital radio broadcasting, is presented. Known symbols are inserted in the encoded data stream to enhance the channel estimation process. The pilot symbols replace existing parity symbols, so no bandwidth expansion is required. An iterative algorithm that uses the decoding information as well as the information contained in the known symbols improves the channel parameter estimate. The complexity of the scheme grows exponentially with the length of the channel estimation filter. The performance of the system is compared, for a normalized fading rate, with both perfect coherent detection (corresponding to perfect knowledge of the fading process and noise variance) and differential detection of Differential Amplitude Phase Shift Keying (DAPSK). The tradeoff between simplicity of implementation and bit-error-rate performance of the different techniques is also discussed.
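
    The non-iterative first step of a pilot-symbol-aided scheme can be sketched as a least-squares estimate at the pilot positions followed by interpolation across the frame. This is an illustrative simplification (linear interpolation, no decision-directed iteration), not the paper's full algorithm; the function name and argument layout are assumptions.

    ```python
    def estimate_channel(rx, tx_pilots, pilot_pos, n):
        """LS channel estimation at pilot positions + linear interpolation.

        rx:        received complex samples (length n)
        tx_pilots: known transmitted pilot symbols
        pilot_pos: indices of the pilots within the frame
        """
        # LS estimate at each pilot: h = y / x
        est_p = [rx[i] / s for i, s in zip(pilot_pos, tx_pilots)]
        h = [0j] * n
        for k in range(len(pilot_pos) - 1):
            i0, i1 = pilot_pos[k], pilot_pos[k + 1]
            for i in range(i0, i1 + 1):
                t = (i - i0) / (i1 - i0)
                h[i] = (1 - t) * est_p[k] + t * est_p[k + 1]
        # hold the edges flat outside the first/last pilot
        for i in range(pilot_pos[0]):
            h[i] = est_p[0]
        for i in range(pilot_pos[-1] + 1, n):
            h[i] = est_p[-1]
        return h
    ```

    The iterative refinement in the paper then re-estimates the channel using tentative decoder decisions in place of additional pilots.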

  1. Exact performance analysis of decode-and-forward opportunistic relaying

    KAUST Repository

    Tourki, Kamel

    2010-06-01

    In this paper, we investigate a dual-hop decode-and-forward opportunistic relaying scheme where the source may or may not be able to communicate directly with the destination. In our study, we consider a regenerative relaying scheme in which the decision to cooperate takes into account the effect of possibly erroneously detected and transmitted data at the best relay. We derive an exact closed-form expression for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation based on the exact statistics of each hop. Unlike existing works, where the analysis focused on the high signal-to-noise ratio (SNR) regime, such results are important for enabling designers to make decisions regarding practical systems that operate in the low SNR regime. We show that the performance simulation results coincide with our analytical results.
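
    As background for closed-form BER results of this kind, the classic single-hop average BER of BPSK over a flat Rayleigh fading channel has the well-known closed form below. This is the standard single-hop building block, not the paper's dual-hop end-to-end expression.

    ```python
    import math

    def bpsk_rayleigh_ber(avg_snr):
        """Average BER of BPSK over flat Rayleigh fading:
        Pb = (1/2) * (1 - sqrt(g / (1 + g))), with g the average SNR.
        For large g this behaves like 1 / (4 g), i.e. diversity order 1.
        """
        return 0.5 * (1.0 - math.sqrt(avg_snr / (1.0 + avg_snr)))
    ```

    Exact expressions of this type, valid at all SNRs rather than only asymptotically, are what the paper derives for the opportunistic dual-hop case.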

  2. Turbo decoding using two soft output values

    Institute of Scientific and Technical Information of China (English)

    李建平; 潘申富; 梁庆林

    2004-01-01

    It is well known that turbo decoding always begins from the first component decoder and assumes that the a priori information is "0" at the first iteration. By alternatively starting decoding at each of the two component decoders, we can obtain two soft output values for the received observation of an input bit. Two soft output values clearly comprise more extrinsic information than the single output value obtained in the conventional scheme, since different starting points of decoding result in different combinations of the a priori information and input codewords with different symbol orders, due to the permutation of the interleaver. By summing the two soft output values for every bit before making hard decisions, we can correct more errors because the two values complement each other. Consequently, turbo codes can achieve better error-correcting performance in this way. Simulation results show that, compared to the conventional scheme, the performance improvement of the proposed decoding scheme generally grows with SNR. At a bit error probability of 10^-5, the proposed scheme achieves roughly 0.5 dB of asymptotic coding gain under the given simulation conditions.
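
    The combining step described above — sum the two soft outputs, then take hard decisions on the sum — can be sketched as follows. This is only the final decision stage, under the common convention that a positive LLR maps to bit 0; producing the two soft outputs requires running the full turbo iterations twice with different starting decoders.

    ```python
    def decide_from_two_llrs(llr_a, llr_b):
        """Combine two per-bit soft outputs (LLRs) obtained by starting
        turbo decoding at either component decoder, then make hard
        decisions on the sum.  Positive combined LLR -> bit 0."""
        return [0 if a + b >= 0 else 1 for a, b in zip(llr_a, llr_b)]
    ```

    When the two runs disagree, the more confident one (larger |LLR|) dominates the sum, which is the complementarity the abstract refers to.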

  3. Application of RS Codes in Decoding QR Code

    Institute of Scientific and Technical Information of China (English)

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan

    2003-01-01

    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several virtues of the QR Code, analyzes the RS decoding algorithm, and gives a software flow chart for decoding the QR Code with the RS decoding algorithm.
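
    The first stage of any RS decoder — syndrome computation over GF(256) with the QR Code's primitive polynomial 0x11d — can be sketched as below. This is a minimal non-systematic illustration (codeword = m(x)·g(x), polynomials stored highest degree first), not the full error-locating decoder the paper charts.

    ```python
    # GF(256) log/antilog tables for the primitive polynomial 0x11d.
    EXP = [0] * 512
    LOG = [0] * 256
    _x = 1
    for _i in range(255):
        EXP[_i] = _x
        LOG[_x] = _i
        _x <<= 1
        if _x & 0x100:
            _x ^= 0x11d
    for _i in range(255, 512):
        EXP[_i] = EXP[_i - 255]

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return EXP[LOG[a] + LOG[b]]

    def poly_mul(p, q):
        # coefficient lists, highest degree first
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] ^= gf_mul(a, b)
        return r

    def rs_generator(nsym):
        # g(x) = (x - a^0)(x - a^1)...(x - a^(nsym-1))
        g = [1]
        for i in range(nsym):
            g = poly_mul(g, [1, EXP[i]])
        return g

    def syndromes(received, nsym):
        # evaluate the received polynomial at a^0 .. a^(nsym-1) by Horner;
        # all-zero syndromes mean no detectable error
        out = []
        for i in range(nsym):
            s = 0
            for c in received:
                s = gf_mul(s, EXP[i]) ^ c
            out.append(s)
        return out
    ```

    Nonzero syndromes feed the error-locator stage (e.g. Berlekamp-Massey plus Chien search), which the paper's flow chart covers.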

  4. Three phase full wave dc motor decoder

    Science.gov (United States)

    Studer, P. A. (Inventor)

    1977-01-01

    A three-phase decoder for dc motors is disclosed which employs an extremely simple six-transistor circuit to derive six properly phased output signals for full-wave operation of dc motors. Six decoding transistors are coupled at their base-emitter junctions across a resistor network arranged in a delta configuration. Each point of the delta configuration is coupled to one of three position sensors which sense the rotational position of the motor. A second embodiment of the invention is disclosed in which photo-optical isolators are used in place of the decoding transistors.

  5. An Encoder/Decoder Scheme of OCDMA Based on Waveguide

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new encoder/decoder scheme for OCDMA based on waveguides is proposed in this paper. The principle as well as the structure of the waveguide encoder/decoder is given. It can be seen that an all-optical OCDMA encoder/decoder can be realized by the proposed waveguide scheme, which also makes the OCDMA encoder/decoder easy to integrate and the access easy to control. A system based on this scheme can work under entirely asynchronous conditions.

  6. Decoding the Disciplines as a Hermeneutic Practice

    Science.gov (United States)

    Yeo, Michelle

    2017-01-01

    This chapter argues that expert practice is an inquiry that surfaces a hermeneutic relationship between theory, practice, and the world, with implications for new lines of questioning in the Decoding interview.

  7. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    Science.gov (United States)

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding optical signals through the use of polarization is described. In this technique, the concept of coding is translated to polarization; that is, coding is done in such a way that each code represents a unique polarization. This is achieved by implementing a binary pattern on a spatial light modulator such that the reflected light has the required polarization. Decoding is done by detecting the polarization of the received beam. By linking the concept of coding to polarization, we can use each of these concepts to measure the other, attaining some gains. In this paper, the construction of a simple point-to-point communication link in which coding and decoding are done through polarization is discussed.

  8. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  10. Facial age affects emotional expression decoding

    Directory of Open Access Journals (Sweden)

    Mara eFölster

    2014-02-01

    Full Text Available Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  11. Facial age affects emotional expression decoding.

    Science.gov (United States)

    Fölster, Mara; Hess, Ursula; Werheid, Katja

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  12. Coding and decoding with dendrites.

    Science.gov (United States)

    Papoutsi, Athanasia; Kastellakis, George; Psarrou, Maria; Anastasakis, Stelios; Poirazi, Panayiota

    2014-02-01

    Since the discovery of complex, voltage-dependent mechanisms in the dendrites of multiple neuron types, great effort has been devoted to the search for a direct link between dendritic properties and specific neuronal functions. Over the last few years, new experimental techniques have allowed the visualization and probing of dendritic anatomy, plasticity and integrative schemes with unprecedented detail. This vast amount of information has caused a paradigm shift in the study of memory, one of the most important pursuits in Neuroscience, and calls for the development of novel theories and models that will unify the available data according to some basic principles. Traditional models of memory considered neural cells the fundamental processing units in the brain. Recent studies, however, propose new theories in which memory is formed not only by modifying the synaptic connections between neurons, but also by modifications of intrinsic and anatomical dendritic properties as well as fine-tuning of the wiring diagram. In this review paper we present previous studies along with recent findings from our group that support a key role of dendrites in information processing, including the encoding and decoding of new memories, both at the single cell and the network level. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. On Decoding Irregular Tanner Codes

    CERN Document Server

    Even, Guy

    2011-01-01

    We present a new combinatorial characterization for local-optimality of a codeword in irregular Tanner codes. This characterization is a generalization of [Arora, Daskalakis, Steurer; 2009] and [Vontobel; 2010]. The main novelty in this characterization is that it is based on a conical combination of subtrees in the computation trees. These subtrees may have any degree in the local-code nodes and may have any height (even greater than the girth). We prove that local-optimality in this new characterization implies Maximum-Likelihood (ML) optimality and LP-optimality. We also show that it is possible to compute efficiently a certificate for the local-optimality of a codeword given the channel output. We apply this characterization to regular Tanner codes. We prove a lower bound on the noise threshold in channels such as BSC and AWGNC. When the noise is below this lower bound, the probability that LP decoding fails diminishes doubly exponentially in the girth of the Tanner graph. We use local optimality also to ...

  14. Sphere decoding complexity exponent for decoding full rate codes over the quasi-static MIMO channel

    CERN Document Server

    Jalden, Joakim

    2011-01-01

    In the setting of quasi-static multiple-input multiple-output (MIMO) channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic division algebra (CDA) based codes -- the only currently known explicit designs that are uniformly optimal with respect to the diversity multiplexing tradeoff (DMT) -- the SD complexity exponent is shown to take a particularly...
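
    The run-time fluctuations that the complexity exponent captures come from the data-dependent pruning of a depth-first tree search. A toy real-valued sphere decoder, assuming the channel matrix has already been reduced to upper-triangular form by a QR decomposition and using a fixed ±1 alphabet, can be sketched as follows; it is an illustration of the algorithm's structure, not a full MIMO detector.

    ```python
    import math

    def sphere_decode(R, y, symbols=(-1, 1)):
        """Depth-first sphere decoder for y = R x + n, R upper triangular.

        A branch is pruned as soon as its partial Euclidean distance
        exceeds the best metric found so far; how much gets pruned
        depends on the channel and noise realization, which is exactly
        what makes SD complexity random.
        """
        n = len(y)
        best = [math.inf, None]  # [best metric, best symbol vector]

        def search(level, x, pdist):
            if pdist >= best[0]:
                return                      # prune: outside current sphere
            if level < 0:
                best[0], best[1] = pdist, x[:]
                return
            for s in symbols:
                x[level] = s
                # residual of row `level`, using symbols fixed so far
                r = y[level] - sum(R[level][j] * x[j] for j in range(level, n))
                search(level - 1, x, pdist + r * r)

        search(n - 1, [0] * n, 0.0)
        return best[1]
    ```

    Because pruning never discards the true minimum, the output matches exhaustive maximum-likelihood search; only the visited-node count varies.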

  15. Non-Binary Polar Codes using Reed-Solomon Codes and Algebraic Geometry Codes

    CERN Document Server

    Mori, Ryuhei

    2010-01-01

    Polar codes, introduced by Arikan, achieve the symmetric capacity of any discrete memoryless channel with low encoding and decoding complexity. Recently, non-binary polar codes have been investigated. In this paper, we calculate the error probability of non-binary polar codes constructed on the basis of Reed-Solomon matrices by numerical simulation. It is confirmed that 4-ary polar codes have significantly better performance than binary polar codes on the binary-input AWGN channel. We also discuss an interpretation of polar codes in terms of algebraic geometry codes, and further show that polar codes using Hermitian codes have asymptotically good performance.
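
    The binary baseline that the non-binary construction generalizes is the Arikan transform x = u·G_N, with G_N the n-fold Kronecker power of the kernel [[1, 0], [1, 1]]. A minimal recursive sketch (ignoring the bit-reversal permutation, which only reorders outputs):

    ```python
    def polar_transform(u):
        """Binary polar (Arikan) transform x = u * G_N.

        Uses G_2N = [[G_N, 0], [G_N, G_N]], so
        x = [(u_first XOR u_second) * G_N, u_second * G_N].
        The non-binary construction replaces this binary kernel with
        a Reed-Solomon matrix.
        """
        if len(u) == 1:
            return u[:]
        half = len(u) // 2
        top = polar_transform([a ^ b for a, b in zip(u[:half], u[half:])])
        return top + polar_transform(u[half:])
    ```

    For length 2 this reduces to the familiar x = [u0 ^ u1, u1], the single polarizing butterfly.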

  16. Decoding of digital magnetic recording with longitudinal magnetization of a tape from a magneto-optical image of stray fields

    Science.gov (United States)

    Lisovskii, F. V.; Mansvetova, E. G.

    2017-05-01

    For digital magnetic recording of encoded information with longitudinal magnetization of the tape, the connection between the domain structure of the storage medium and the magneto-optical image of its stray fields, obtained using a magnetic film with perpendicular anisotropy and a large Faraday rotation, has been studied. For a two-frequency binary code without return to zero, an algorithm is developed that allows unique decoding of the information recorded on the tape based on analysis of the stray-field image.

  17. Joint decoding algorithm of LDPC codes

    Institute of Scientific and Technical Information of China (English)

    方毅; 张建文; 王琳

    2011-01-01

    Designing a realizable maximum likelihood (ML) decoder for low-density parity-check (LDPC) codes over an additive white Gaussian noise (AWGN) channel is a challenging task. Although the Maxwell decoder is well known for its excellent performance over the binary erasure channel (BEC), generalizing the algorithm to other channels is difficult. This paper introduces the idea of channel transformation, which realizes a conversion between two different channels, and uses it to apply the Maxwell decoder to the AWGN channel. A joint decoder, the BP-Maxwell (BM) decoder, is proposed, which combines a belief propagation (BP) decoder with a Maxwell decoder to reduce the performance gap to the ML decoder. Simulation results show that the BM decoding algorithm can break most small trapping sets and thus achieve a lower frame error rate (FER) than the BP decoder; moreover, it can eliminate most of the small-scale errors remaining after BP decoding.

  18. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and their overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and a lower mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.
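The "instant decodability" constraint underlying IDNC can be illustrated with a few XOR-coded packets: a user decodes a transmission immediately if and only if it already holds all but one of the packets in the combination. The example below is a hypothetical toy, not the paper's completion-time heuristic.

```python
from functools import reduce

packets = {1: 0b1010, 2: 0b0111, 3: 0b1100}      # toy source payloads

def transmit(ids):
    # network-coded transmission: bitwise XOR of the selected packets
    return reduce(lambda a, b: a ^ b, (packets[i] for i in ids))

def try_decode(ids, coded, known):
    """Instantly decodable iff the user misses exactly one packet in the mix."""
    missing = [i for i in ids if i not in known]
    if len(missing) == 1:
        val = coded
        for i in ids:
            if i in known:
                val ^= known[i]          # XOR out everything already held
        known[missing[0]] = val
    return known

known = {1: packets[1]}                                    # user holds packet 1
known = try_decode([1, 2], transmit([1, 2]), known)        # decodes packet 2
known = try_decode([1, 2, 3], transmit([1, 2, 3]), known)  # now decodes packet 3
print(known == packets)   # -> True
```

A transmission mixing two packets the user lacks is simply discarded, which is why scheduling the combinations (the subject of the paper) matters.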

  19. Error Exponents of Optimum Decoding for the Interference Channel

    CERN Document Server

    Etkin, Raul; Ordentlich, Erik

    2008-01-01

    Exponential error bounds for the finite-alphabet interference channel (IFC) with two transmitter-receiver pairs are investigated under the random coding regime. Our focus is on optimum decoding, as opposed to heuristic decoding rules that have been used in previous works, such as joint typicality decoding, decoding based on interference cancellation, and decoding that considers the interference as additional noise. Indeed, the fact that the actual interfering signal is a codeword and not an i.i.d. noise process complicates the application of conventional techniques to the performance analysis of the optimum decoder. Using analytical tools rooted in statistical physics, we derive a single letter expression for error exponents achievable under optimum decoding and demonstrate strict improvement over error exponents obtainable using suboptimal decoding rules, which are nevertheless amenable to more conventional analysis.

  20. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes...

  1. VLSI Design of a Turbo Decoder

    Science.gov (United States)

    Fang, Wai-Chi

    2007-01-01

    A very-large-scale-integrated-circuit (VLSI) turbo decoder has been designed to serve as a compact, high-throughput, low-power, lightweight decoder core of a receiver in a data-communication system. In a typical contemplated application, such a decoder core would be part of a single integrated circuit that would include the rest of the receiver circuitry and possibly some or all of the transmitter circuitry, all designed and fabricated together according to an advanced communication-system-on-a-chip design concept. Turbo codes are forward-error-correction (FEC) codes. Relative to older FEC codes, turbo codes enable communication at lower signal-to-noise ratios and offer greater coding gain. In addition, turbo codes can be implemented by relatively simple hardware. Therefore, turbo codes have been adopted as standard for some advanced broadband communication systems.

  2. Online Testable Decoder using Reversible Logic

    Directory of Open Access Journals (Sweden)

    Hemalatha K. N.; Manjula B. B.; Girija S.

    2012-02-01

    Full Text Available The project proposes to design and test a 2-to-4 reversible decoder circuit with an arbitrary number of gates, and to convert a decoder circuit designed with reversible gates into an online-testable reversible decoder circuit, independent of the type of reversible gate used. The constructed circuit can detect any single-bit error. Conventional digital circuits dissipate a significant amount of energy because bits of information are erased during logic operations. If logic gates are designed so that the information bits are not destroyed, the power consumption can be reduced. No information bits are lost in a reversible computation, and reversible logic can be used to implement any Boolean logic function.
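As an aside, the following sketch shows one way a 2-to-4 decoder can be realized with reversible gates: three Fredkin (controlled-swap) gates and a single ancilla set to 1. Because Fredkin gates conserve the number of 1s, exactly one output line is ever high, so a parity check over the outputs can flag any single-bit error, which is the intuition behind online testability. This construction is illustrative and not necessarily the gate arrangement of the paper.

```python
def fredkin(s, c, t1, t2):
    # controlled swap: reversible, conserves the number of 1s
    if s[c]:
        s[t1], s[t2] = s[t2], s[t1]

def decode_2to4(a, b):
    """2-to-4 decoder built from three Fredkin gates (illustrative)."""
    s = [a, b, 1, 0, 0, 0]        # inputs a, b plus ancillae 1, 0, 0, 0
    fredkin(s, 0, 2, 3)           # line2 = NOT a, line3 = a
    fredkin(s, 1, 2, 4)           # line2 = !a & !b (d0), line4 = !a & b (d1)
    fredkin(s, 1, 3, 5)           # line3 =  a & !b (d2), line5 =  a & b (d3)
    return [s[2], s[4], s[3], s[5]]   # one-hot outputs d0..d3

for a in (0, 1):
    for b in (0, 1):
        print(a, b, decode_2to4(a, b))   # one-hot at index 2*a + b
```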

  3. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

    Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger, can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
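The DAgger-style loop described above can be sketched in a one-dimensional toy setting: run the current decoder, label every visited state with the oracle's intended velocity (target minus position), aggregate all labels, and refit the decoder. All model choices below (linear decoder, noise level, gains) are hypothetical simplifications.

```python
import random

random.seed(0)

TRUE_GAIN = 2.5   # hypothetical "true" mapping from intent to neural activity

def neural_activity(intended_v):
    # toy user model: activity proportional to intended velocity, plus noise
    return intended_v / TRUE_GAIN + random.gauss(0, 0.01)

def dagger_train(rounds=5, steps=50):
    """DAgger-style decoder training: aggregate oracle labels, refit."""
    w = 0.5                   # initial (mis-calibrated) decoder gain
    data = []                 # aggregated (activity, oracle velocity) pairs
    for _ in range(rounds):
        pos, target = 0.0, 1.0
        for _ in range(steps):
            v_star = target - pos            # oracle: intended velocity
            x = neural_activity(v_star)
            data.append((x, v_star))
            pos += 0.1 * (w * x)             # cursor moves under *current* decoder
        # least-squares refit on the whole aggregated dataset
        w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)
    return w

print(round(dagger_train(), 2))   # settles near the true gain 2.5
```

The key DAgger ingredient is that states visited under the learner's own (initially poor) decoder are labeled by the oracle, so the training distribution matches the deployment distribution.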

  4. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    2007-01-01

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than the original list decoding without multiplicity.

  6. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ^3 n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  7. Bounds on List Decoding Gabidulin Codes

    CERN Document Server

    Wachter-Zeh, Antonia

    2012-01-01

    An open question about Gabidulin codes is whether polynomial-time list decoding beyond half the minimum distance is possible or not. In this contribution, we give a lower and an upper bound on the list size, i.e., the number of codewords in a ball around the received word. The lower bound shows that if the radius of this ball is greater than the Johnson radius, this list size can be exponential and hence, no polynomial-time list decoding is possible. The upper bound on the list size uses subspace properties.

  8. MAP decoding of variable length codes over noisy channels

    Science.gov (United States)

    Yao, Lei; Cao, Lei; Chen, Chang Wen

    2005-10-01

    In this paper, we discuss the maximum a-posteriori probability (MAP) decoding of variable length codes (VLCs) and propose a novel decoding scheme for Huffman VLC coded data in the presence of noise. First, we provide some simulation results of VLC MAP decoding and highlight some features that have not yet been discussed in existing work. We show that the improvement of MAP decoding over conventional VLC decoding comes mostly from the memory information in the source, and we give some observations regarding the advantage of soft VLC MAP decoding over hard VLC MAP decoding when an AWGN channel is considered. Second, recognizing that the difficulty in VLC MAP decoding is the lack of synchronization between the symbol sequence and the coded bit sequence, which makes the parsing from the latter to the former extremely complex, we propose a new MAP decoding algorithm that integrates the information of self-synchronization strings (SSSs), an important feature of the codeword structure, into conventional MAP decoding. A consistent performance improvement and decoding complexity reduction over conventional VLC MAP decoding is achieved with the new scheme.
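The synchronization problem referred to above is easy to demonstrate: with a prefix-free VLC, one bit error shifts the symbol boundaries, yet decoding often falls back into step a few codewords later. The toy Huffman code below is chosen purely for illustration; the paper's MAP decoder exploits such self-synchronization structure probabilistically.

```python
# Hypothetical prefix-free Huffman code, used only to show (de)synchronization
CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
DECODE = {v: k for k, v in CODE.items()}

def encode(symbols):
    return ''.join(CODE[s] for s in symbols)

def decode(bitstring):
    out, cur = [], ''
    for bit in bitstring:
        cur += bit
        if cur in DECODE:            # prefix-free: first match is a codeword
            out.append(DECODE[cur])
            cur = ''
    return out

msg = list('abacadabba')
bits = encode(msg)
corrupted = bits[:3] + ('1' if bits[3] == '0' else '0') + bits[4:]  # flip bit 3
print(decode(bits))       # -> ['a', 'b', 'a', 'c', 'a', 'd', 'a', 'b', 'b', 'a']
print(decode(corrupted))  # boundaries shift after the error, then realign
```

The tail of the corrupted decode matches the original again ("...a, b, b, a"): the stream has resynchronized, and this codeword-structure information is what the proposed MAP decoder feeds into its metric.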

  9. TC81201F MPEG2 decoder LSI; MPEG2 decoder LSI TC81201F

    Energy Technology Data Exchange (ETDEWEB)

    Kitagaki, K. [Toshiba Corp., Tokyo (Japan)

    1996-04-01

    The moving picture experts group 2 (MPEG2) decoder LSI series have been developed to meet the needs of diversifying multi-media systems. MPEG2 is an international standard for coding moving pictures, capable of compressing large quantities of moving picture data; the decoder LSI for MPEG2 signals is therefore a key device for realizing multi-media systems. System needs are diversifying, as seen in the different audio codes for DVDs and digital satellite broadcasting systems (DBSs). Based on the decoder LSI T9556 announced in 1994, the company has developed the TC81200F for mass production, the TC81201F optimized for the DVD system, and the TC81211F as a one-chip MPEG1 decoder. The chip cost and system cost of the TC81201F are reduced by optimizing functions and circuits, and by reducing external memories. 4 refs., 4 figs., 1 tab.

  10. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    achieve a high coding throughput, and reduce energy consumption.We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline, and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number...

  11. Older Adults Have Difficulty in Decoding Sarcasm

    Science.gov (United States)

    Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-01-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…

  12. A chemical system that mimics decoding operations.

    Science.gov (United States)

    Giansante, Carlo; Ceroni, Paola; Venturi, Margherita; Sakamoto, Junji; Schlüter, A Dieter

    2009-02-23

    The chemical information stored in equilibrium mixtures of molecular species is larger than the sum of information carried by the individual molecules. Protonation equilibria in dilute dichloromethane solution of a shape-persistent macrocycle bearing two 2,2'-bipyridine units and two Coumarin 2 moieties (see figure) can be exploited to mimic decoding operations.

  13. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme...

  14. BCS-18A command decoder-selector

    Science.gov (United States)

    Laping, H.

    1980-08-01

    This report describes an 18-channel command decoder-selector which operates in conjunction with an HF command receiver to allow secure and reliable radio control of high altitude balloon payloads. A detailed technical description and test results are also included.

  15. Joint Source-Channel Decoder Based on CABAC Entropy Coding

    Institute of Scientific and Technical Information of China (English)

    Wang, Yue; Xie, Rong

    2013-01-01

    H.264 has been widely used in recent years. The entropy coding method used in H.264 is context-based adaptive binary arithmetic coding (CABAC). Although CABAC achieves high compression, it is very sensitive to channel errors. In this paper, a novel joint arithmetic-code and variable-length-code decoding algorithm based on CABAC is proposed. The transmitted bits are first decoded by a joint source-channel arithmetic decoder, whose output is then decoded by a joint variable-length decoder (JVLD); a trellis graph is used to search for the best symbol sequence. Furthermore, during joint arithmetic decoding, a search path is pruned if the decoded sequence does not conform to the VLC codeword structure, which improves decoding performance. Experimental results indicate that the proposed joint iterative decoding algorithm is clearly superior to the traditional separate decoder.

  16. Deconstructing multivariate decoding for the study of brain function.

    Science.gov (United States)

    Hebart, Martin N; Baker, Chris I

    2017-08-04

    Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.
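The prediction-oriented use of multivariate decoding that the authors contrast with interpretation typically reduces to cross-validated classification of condition labels from activity patterns. Below is a deliberately simple sketch with synthetic data and a nearest-centroid classifier; all numbers are made up for illustration.

```python
import random

random.seed(1)

def sample(condition, n_vox=20):
    # two conditions differ only in the first 5 "voxels" (synthetic data)
    shift = 0.3 if condition else -0.3
    return [random.gauss(shift if v < 5 else 0.0, 1.0) for v in range(n_vox)]

X = [sample(c) for c in range(2) for _ in range(40)]
y = [c for c in range(2) for _ in range(40)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def nearest_centroid_loocv(X, y):
    """Leave-one-out cross-validated decoding accuracy."""
    correct = 0
    for i in range(len(X)):
        train = [(x, t) for j, (x, t) in enumerate(zip(X, y)) if j != i]
        c0 = centroid([x for x, t in train if t == 0])
        c1 = centroid([x for x, t in train if t == 1])
        d0 = sum((a - b) ** 2 for a, b in zip(X[i], c0))
        d1 = sum((a - b) ** 2 for a, b in zip(X[i], c1))
        correct += (d1 < d0) == (y[i] == 1)
    return correct / len(X)

acc = nearest_centroid_loocv(X, y)
print(f"cross-validated decoding accuracy: {acc:.2f}")  # above the 0.5 chance level
```

Above-chance accuracy licenses the claim that condition information is present in the pattern; as the paper stresses, it does not by itself say how that information is used by the brain.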

  17. Unsupervised adaptation of brain machine interface decoders

    Directory of Open Access Journals (Sweden)

    Tayfun eGürel

    2012-11-01

    Full Text Available The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI, and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be for the BMI decoder to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e., without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a two-dimensional computer screen. For initially unknown and nonstationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also work with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.

  18. Belief propagation decoding of quantum channels by passing quantum messages

    Science.gov (United States)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.

  19. Binary effectivity rules

    DEFF Research Database (Denmark)

    Keiding, Hans; Peleg, Bezalel

    2006-01-01

    is binary if it is rationalized by an acyclic binary relation. The foregoing result motivates our definition of a binary effectivity rule as the effectivity rule of some binary SCR. A binary SCR is regular if it satisfies unanimity, monotonicity, and independence of infeasible alternatives. A binary effectivity rule is regular if it is the effectivity rule of some regular binary SCR. We characterize completely the family of regular binary effectivity rules. Quite surprisingly, intrinsically defined von Neumann-Morgenstern solutions play an important role in this characterization...

  20. CHANNEL ESTIMATION FOR ITERATIVE DECODING OVER FADING CHANNELS

    Institute of Scientific and Technical Information of China (English)

    K. H. Sayhood; Wu Lenan

    2002-01-01

    A method of coherent detection and channel estimation for punctured convolutionally coded binary Quadrature Amplitude Modulation (QAM) signals transmitted over a frequency-flat Rayleigh fading channel, as used for digital radio broadcasting transmission, is presented. Known symbols are inserted in the encoded data stream to enhance the channel estimation process. The pilot symbols replace existing parity symbols, so no bandwidth expansion is required. An iterative algorithm that uses decoding information as well as the information contained in the known symbols improves the channel parameter estimate. The scheme's complexity grows exponentially with the channel estimation filter length. The performance of the system is compared, for a normalized fading rate, with both perfect coherent detection (corresponding to perfect knowledge of the fading process and noise variance) and differential detection of Differential Amplitude Phase Shift Keying (DAPSK). The tradeoff between simplicity of implementation and bit-error-rate performance of the different techniques is also compared.
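The pilot-symbol idea can be sketched as follows: known symbols are inserted periodically, the flat-fading channel is estimated at those positions, and the estimate is interpolated in between before equalization. Parameters and the channel model below are illustrative; the paper's iterative scheme additionally feeds decoding information back into the estimator.

```python
import cmath
import math
import random

random.seed(2)

P = 10                                          # pilot spacing (illustrative)
QPSK = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

n = 200
data = [random.choice(QPSK) for _ in range(n)]
tx = [1 + 0j if i % P == 0 else data[i] for i in range(n)]  # pilots overwrite data

# slowly varying flat fading: rotating phase with a gentle amplitude ripple
h = [(0.8 + 0.2 * math.sin(0.01 * i)) * cmath.exp(1j * 0.02 * i) for i in range(n)]
rx = [h[i] * tx[i] + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
      for i in range(n)]

# estimate the channel at pilot positions, interpolate linearly in between
pilots = [i for i in range(n) if i % P == 0]
h_hat = [0j] * n
for i in pilots:
    h_hat[i] = rx[i] / tx[i]                    # pilot symbol is known
for a, b in zip(pilots, pilots[1:]):
    for i in range(a + 1, b):
        t = (i - a) / (b - a)
        h_hat[i] = (1 - t) * h_hat[a] + t * h_hat[b]
for i in range(pilots[-1] + 1, n):
    h_hat[i] = h_hat[pilots[-1]]                # hold the last estimate

def nearest(z):
    return min(QPSK, key=lambda s: abs(z - s))

errs = sum(1 for i in range(n) if i % P and nearest(rx[i] / h_hat[i]) != data[i])
print("symbol errors after pilot-aided equalization:", errs)
```

Without the channel estimate, the rotating fading phase alone would push most symbols across the QPSK decision boundaries.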

  1. Competitive minimax universal decoding for several ensembles of random codes

    CERN Document Server

    Akirav, Yaniv

    2007-01-01

    Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMCs)), and various ensembles of random codes, are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with Chernoff bound and Gallager's techniques for the analysis of error exponents. In particular, we derive a single-letter expression for the largest, universally achievable fraction $\xi$ of the optimum error exponent pertaining to the optimum ML decoding. Moreover, a simpler single-letter expression for a lower bound to $\xi$ is presented. To demonstrate the tightness of this lower bound, we use it to show that $\xi=1$, for the binary symmetric channel (BSC), when the random coding distribution is uniform over: (i) all codes (of a given rate), and (ii) all linear codes, in agreement with well-known results. We also show that $\xi=1$ for the uniform ensemble of systematic linear codes, and for that of time-varying convolutional codes in the bit...

  2. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3: The MAP and Related Decoding Algorithms

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
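The difference between MAP bit decisions and ML codeword decisions is visible even on a toy code. Below, per-bit a-posteriori L-values are computed by brute-force enumeration of a length-3 even-parity codebook; the MAP/BCJR algorithm of the chapter obtains the same posteriors efficiently on a trellis instead of by enumeration.

```python
import math

# Toy even-parity codebook of length 3 (equiprobable codewords)
CODEBOOK = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def cw_metric(cw, llr):
    # codeword log-likelihood (up to a constant) from channel L-values
    return sum(l / 2 if c == 0 else -l / 2 for c, l in zip(cw, llr))

def map_bit_llrs(llr):
    """A-posteriori L-value of every bit, by enumerating the codebook."""
    post = [math.exp(cw_metric(cw, llr)) for cw in CODEBOOK]
    out = []
    for j in range(len(llr)):
        p0 = sum(p for cw, p in zip(CODEBOOK, post) if cw[j] == 0)
        p1 = sum(p for cw, p in zip(CODEBOOK, post) if cw[j] == 1)
        out.append(math.log(p0 / p1))
    return out

llr = [0.5, -0.2, 2.0]      # channel L-values; bit 1 weakly suggests a '1'
print([round(x, 2) for x in map_bit_llrs(llr)])           # -> [0.35, 0.18, 1.95]
print(max(CODEBOOK, key=lambda cw: cw_metric(cw, llr)))   # ML codeword (0, 0, 0)
```

Note how the channel L-value of bit 1 is negative (leaning toward '1'), but the code constraints flip its posterior L-value positive; this soft output is exactly what concatenated and iterative schemes consume.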

  3. The brain's silent messenger: using selective attention to decode human thought for brain-based communication.

    Science.gov (United States)

    Naci, Lorina; Cusack, Rhodri; Jia, Vivian Z; Owen, Adrian M

    2013-05-29

    The interpretation of human thought from brain activity, without recourse to speech or action, is one of the most provoking and challenging frontiers of modern neuroscience. In particular, patients who are fully conscious and awake, yet, due to brain damage, are unable to show any behavioral responsivity, expose the limits of the neuromuscular system and the necessity for alternate forms of communication. Although it is well established that selective attention can significantly enhance the neural representation of attended sounds, it remains, thus far, untested as a response modality for brain-based communication. We asked whether its effect could be reliably used to decode answers to binary (yes/no) questions. Fifteen healthy volunteers answered questions (e.g., "Do you have brothers or sisters?") in the fMRI scanner, by selectively attending to the appropriate word ("yes" or "no"). Ninety percent of the answers were decoded correctly based on activity changes within the attention network. The majority of volunteers conveyed their answers with less than 3 min of scanning, suggesting that this technique is suited for communication in a reasonable amount of time. Formal comparison with the current best-established fMRI technique for binary communication revealed improved individual success rates and scanning times required to detect responses. This novel fMRI technique is intuitive, easy to use in untrained participants, and reliably robust within brief scanning times. Possible applications include communication with behaviorally nonresponsive patients.

  4. Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging?

    Directory of Open Access Journals (Sweden)

    Liana C L Portugal

    Full Text Available High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioral and emotional dysregulation at two different time points. A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. A permutation test was applied to estimate significance levels. Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. Neural regions with the highest contribution to the pattern regression model

  5. Extended Non-Binary Low-Density Parity-Check Codes over Erasure Channels

    CERN Document Server

    Sy, Lam Pham; Declercq, David

    2011-01-01

    Based on the extended binary image of non-binary LDPC codes, we propose a method for generating extra redundant bits so as to decrease the coding rate of a mother code. The proposed method allows the same decoder to be used regardless of how many extra redundant bits have been produced, which considerably increases the flexibility of the system without significantly increasing its complexity. Extended codes are also optimized for the binary erasure channel by using density evolution methods. Nevertheless, the results presented in this paper can easily be extrapolated to more general channel models.

  6. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter, the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  7. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  8. Space vehicle Viterbi decoder. [data converters, algorithms]

    Science.gov (United States)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.

  9. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific...... separated by a sync marker and protected by error-correcting codes. We first give a survey of trends within the area of space modulation systems. We then discuss and define the interfaces and operating modes of the relevant system components. We present a list of system configurations that we find...... potentially useful.Algorithms for frame synchronization are described and analyzed. Further, the high level architecture of units that contain frame synchronization and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder...

  10. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also......, the systems use some form of QPSK modulation and transmit data in frames separated by a sync marker and protected by error-correcting codes. We first give a survey of trends within the area of space modulation systems. We then discuss and define the interfaces and operating modes of the relevant system...... components. Node synchronization performed within a Viterbi decoder is discussed, and algorithms for frame synchronization are described and analyzed. We present a list of system configurations that we find potentially useful. Further, the high level architecture of units that contain frame synchronization...

  11. Hardware Implementation of Serially Concatenated PPM Decoder

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in
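Two ideas from the abstract above — log-domain arithmetic (products become sums) and the extrinsic-information exchange between SISO modules — can be illustrated with a tiny sketch. The numeric values are hypothetical; this is not the FPGA implementation.

```python
import math

# Working in the log domain turns products of probabilities into sums,
# which reduces complexity and fixed-point roundoff sensitivity.
p1, p2 = 0.5, 0.25
log_sum = math.log(p1) + math.log(p2)
assert abs(math.exp(log_sum) - p1 * p2) < 1e-12

# Extrinsic information passed between the SISO modules is the difference
# between the a-posteriori and a-priori LLRs (hypothetical values).
llr_app, llr_apriori = 2.4, 0.9
llr_extrinsic = llr_app - llr_apriori
assert abs(llr_extrinsic - 1.5) < 1e-9
```

Exchanging only the extrinsic part, rather than the full a-posteriori LLR, is what prevents each module's own prior from feeding back on itself across iterations.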

  12. Olfactory Decoding Method Using Neural Spike Signals

    Institute of Scientific and Technical Information of China (English)

    Kyung-jin YOU; Hyun-chool SHIN

    2010-01-01

    This paper presents a novel method for inferring the odor based on neural activities observed from rats' main olfactory bulbs. Multi-channel extracellular single unit recordings are done by microwire electrodes (tungsten, 50 μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of the anesthetized rats to obtain neural responses to various odors. Neural responses as a key feature are measured by subtracting the firing rates before stimulus from those after. For odor inference, a decoding method is developed based on the ML estimation. The results show that the average decoding accuracy is about 100.0%, 96.0%, and 80.0% with three rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.

  13. Simplified Digital Subband Coders And Decoders

    Science.gov (United States)

    Glover, Daniel R.

    1994-01-01

    Simplified digital subband coders and decoders were developed for use in converting digitized samples of source signals into compressed and encoded forms that maintain the integrity of the source signals while enabling transmission at low data rates. Examples of coding methods used in the subbands include coarse quantization in high-frequency subbands, differential coding, predictive coding, vector quantization, and entropy or statistical coding. The encoders are simpler, less expensive, and operate rapidly enough to process video signals.
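Differential coding, one of the subband coding methods listed above, can be sketched in a few lines: transmit the first sample, then only the successive differences, which are typically small for correlated signals and therefore cheaper to encode. This is a generic illustration, not the coder from the record.

```python
def diff_encode(samples):
    """Differential coding: keep the first sample, then successive differences."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def diff_decode(deltas):
    """Invert differential coding by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

signal = [100, 102, 105, 104, 104, 101]
encoded = diff_encode(signal)        # [100, 2, 3, -1, 0, -3]: small values compress well
assert diff_decode(encoded) == signal
```

In a real subband coder the differences would additionally be quantized, trading exact reconstruction for a lower bit rate.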

  14. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial......, and a reduction of the problem of factoring the Q-polynomial to the problem of factoring a univariate polynomial over a large finite field....

  15. DSP Specific Optimized Implementation of Viterbi Decoder

    Directory of Open Access Journals (Sweden)

    Yame Asfia

    2010-04-01

    Full Text Available Due to the rapid change and flexibility of wireless communication protocols, there is a desire to move from hardware to software/firmware implementation in DSPs. High data rate requirements call for a highly optimized firmware implementation in terms of execution speed and memory usage. This paper suggests optimization levels for the implementation of a viable Viterbi decoding algorithm (rate ½) on a commercial off-the-shelf DSP.

  16. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    Full Text Available We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.
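The core idea of kernel temporal difference learning — representing the value function as a weighted sum of kernels centered on visited states and growing that dictionary with TD updates — can be sketched on a toy chain task. This is a heavily simplified TD(0) variant for illustration only (no eligibility traces, no dictionary sparsification); all parameters are assumptions, not those of KTD(λ) in the record.

```python
import math

def gauss_kernel(x, c, width=1.0):
    return math.exp(-((x - c) ** 2) / width)

class KernelTD:
    """Toy kernel-based TD(0) value estimator: V(x) = sum_i w_i * k(x, c_i)."""
    def __init__(self, eta=0.2, gamma=0.9):
        self.centers, self.weights = [], []
        self.eta, self.gamma = eta, gamma

    def value(self, x):
        return sum(w * gauss_kernel(x, c) for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next, terminal):
        v_next = 0.0 if terminal else self.value(x_next)
        delta = reward + self.gamma * v_next - self.value(x)   # TD error
        self.centers.append(x)          # grow the kernel dictionary at the visited state
        self.weights.append(self.eta * delta)

td = KernelTD()
for _ in range(30):                     # episodes on a 6-state chain; reward on reaching state 5
    for s in range(5):
        td.update(s, 1.0 if s == 4 else 0.0, s + 1, terminal=(s == 4))

assert td.value(4) > td.value(0)        # states nearer the reward earn higher value
```

The kernel representation is what gives the method its nonlinear functional approximation capability: value estimates generalize smoothly between nearby states.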

  17. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.
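The role of the bias term in sequential decoding can be sketched with a toy best-first (stack) search over a binary tree. Each branch carries one BPSK symbol; adding a bias per expanded edge favors deeper nodes, pruning the search at the risk of missing the best path. This uncoded toy is an assumption for illustration — in the MIMO setting of the record the tree comes from the lattice structure and the trade-off is far more pronounced.

```python
import heapq

def stack_decode(received, depth, bias):
    """Best-first (stack) sequential decoding over a toy binary tree.
    Branch metric: correlation with the received sample plus a per-edge bias."""
    heap = [(0.0, ())]                  # (negated path metric, path)
    visits = 0
    while True:
        neg_m, path = heapq.heappop(heap)
        visits += 1
        if len(path) == depth:
            return list(path), visits
        for bit in (0, 1):
            sym = 2 * bit - 1           # BPSK mapping: 0 -> -1, 1 -> +1
            metric = -neg_m + received[len(path)] * sym + bias
            heapq.heappush(heap, (-metric, path + (bit,)))

received = [0.8, -1.1, 0.9, 1.2, -0.7]  # hypothetical noisy observations
bits_lo, visits_lo = stack_decode(received, 5, bias=0.0)
bits_hi, visits_hi = stack_decode(received, 5, bias=2.0)
assert bits_lo == [1, 0, 1, 1, 0]       # matches the sign decisions here
assert visits_hi <= visits_lo           # larger bias never expands more nodes here
```

With coded (dependent) branch symbols, a low bias forces frequent backtracking toward ML performance, while a high bias keeps the search nearly depth-first and cheap — exactly the performance-complexity dial the abstract describes.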

  18. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  19. FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes

    KAUST Repository

    Lin, Sian-Jheng

    2016-08-15

    Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm so that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z^+, with n-k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to the decoding procedure. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n-k) + (n-k) lg^2(n-k)), improving upon the best currently available decoding complexity O(n lg^2(n) lg lg(n)), and reaching the best known complexity bound that was established by Justesen in 1976. However, Justesen's approach is only for codes over some specific fields, which can apply Cooley-Tukey FFTs. As revealed by computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.

  20. Performance Analysis of Viterbi Decoder for Wireless Applications

    Directory of Open Access Journals (Sweden)

    G.Sivasankar

    2014-07-01

    Full Text Available The Viterbi decoder is employed in wireless communication to decode convolutional codes; such codes are used in many robust digital communication systems. Convolutional encoding and Viterbi decoding form a powerful method for forward error correction. This paper deals with the synthesis and implementation of a Viterbi decoder with constraint lengths of three and seven and a code rate of ½ on an FPGA (Field Programmable Gate Array). The performance of the Viterbi decoder is analyzed in terms of resource utilization. The design of the Viterbi decoder is simulated using Verilog HDL. It is synthesized and implemented using Xilinx ISE 9.1 and a Spartan-3E kit. It is compatible with many common standards such as 3GPP, IEEE 802.16, and LTE.
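A constraint-length-3, rate-½ convolutional code like the one mentioned above can be encoded and Viterbi-decoded in a few dozen lines. This reference-style software sketch (assuming the standard (7, 5) octal generator polynomials, which the record does not specify) shows the add-compare-select recursion and the trace-back step; the hardware designs in the record implement the same logic in parallel.

```python
G = (0b111, 0b101)  # assumed generator polynomials (7, 5) octal: K = 3, rate 1/2

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder; two tail zeros flush the trellis."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state
        out += [parity(reg & G[0]), parity(reg & G[1])]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with trace-back over the full block."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]        # encoder starts in state 0
    history = []                        # survivor (previous state, input bit) per step
    for i in range(0, len(received), 2):
        new, back = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):            # add-compare-select for each branch
                reg = (b << 2) | s
                ns = reg >> 1
                m = metrics[s] \
                    + (parity(reg & G[0]) != received[i]) \
                    + (parity(reg & G[1]) != received[i + 1])
                if m < new[ns]:
                    new[ns], back[ns] = m, (s, b)
        history.append(back)
        metrics = new
    s = metrics.index(min(metrics))     # trace back from the best end state
    bits = []
    for back in reversed(history):
        s, b = back[s]
        bits.append(b)
    return list(reversed(bits))[:-2]    # drop the two tail bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                           # inject one channel bit error
assert viterbi_decode(coded) == msg     # the single error is corrected
```

Since the (7, 5) code has free distance 5, any isolated single bit error in the block is corrected, as the final assertion checks.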

  1. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    Directory of Open Access Journals (Sweden)

    Jun Jin Kong

    2003-12-01

    Full Text Available We propose an area-efficient high-speed interleaved Viterbi decoder architecture, based on the state-parallel architecture with a register-exchange path memory structure, for interleaved convolutional codes. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with I delays, an interleaved Viterbi decoder is obtained, where I is the interleaving degree. The decoding speed of this decoder architecture is as fast as the operating clock speed. The latency of the proposed interleaved Viterbi decoder is “decoding depth (DD) × interleaving degree (I) + extra delays (A)”, which increases linearly with the interleaving degree I.

  2. A Modified max-log-MAP Decoding Algorithm for Turbo Decoding

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Turbo decoding is iterative decoding, and the MAP algorithm is optimal in terms of performance in turbo decoding. The log-MAP algorithm is the MAP algorithm executed in the logarithmic domain, so it is also optimal. Both the MAP and the log-MAP algorithm are complicated to implement. The max-log-MAP algorithm is derived from the log-MAP with an approximation; it is simple compared with the log-MAP algorithm but is suboptimal in terms of performance. A modified max-log-MAP algorithm is presented in this paper, based on the Taylor series of the logarithm and exponent. Analysis and simulation results show that the modified max-log-MAP algorithm outperforms the max-log-MAP algorithm with almost the same complexity.
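The difference between log-MAP and max-log-MAP comes down to the Jacobian logarithm: log-MAP evaluates ln(e^a + e^b) exactly, while max-log-MAP drops the correction term ln(1 + e^{-|a-b|}). A minimal numeric sketch (not the paper's specific Taylor-series modification):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by log-MAP: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """max-log-MAP approximation: drops the correction term."""
    return max(a, b)

a, b = 1.2, 0.7
exact = math.log(math.exp(a) + math.exp(b))
assert abs(max_star(a, b) - exact) < 1e-12
# The approximation error equals the dropped term ln(1 + e^{-|a-b|}), at most ln 2:
assert 0 < exact - max_log(a, b) < math.log(2) + 1e-12
```

Refining the correction term — for instance with a Taylor expansion, as the paper proposes — recovers most of the log-MAP performance at close to max-log-MAP cost.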

  3. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to anticipate accurately the decoding delay increase at each user. The paper formulates the problem in this environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time than the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the erasure probability of the channel increases.
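The defining property of IDNC is that a transmitted XOR combination is useful to a user only if it is *instantly* decodable, i.e. it combines exactly one packet the user is missing with packets the user already holds. A minimal sketch of that decodability test (packet labels are hypothetical):

```python
def instantly_decodable(combination, has):
    """An XOR combination of source packets is instantly decodable for a user
    iff it covers exactly one packet the user does not already hold."""
    missing = [p for p in combination if p not in has]
    return len(missing) == 1

# Hypothetical example: source packets are labelled 1..4.
user_has = {1, 3}
assert instantly_decodable({2, 3}, user_has)        # recovers packet 2 instantly
assert not instantly_decodable({2, 4}, user_has)    # two unknowns: not decodable now
assert not instantly_decodable({1, 3}, user_has)    # nothing new for this user
```

A user who cannot decode a transmission incurs decoding delay, while the sender's choice of combinations drives the overall completion time — the two metrics whose joint control the paper studies.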

  4. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method the constructed codes performance are presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach a whole VLSI decoder is designed and characterized.

  5. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    Science.gov (United States)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
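The context-tree weighting technique that underpins the improved probability estimation is built on the Krichevsky-Trofimov (KT) estimator, which adapts a binary symbol probability from the counts observed in a context. A minimal sketch of the KT estimate (standard CTW building block; CABAC itself uses table-driven finite-state estimators):

```python
def kt_probability(ones, zeros):
    """Krichevsky-Trofimov estimate of P(next bit = 1) given the counts seen
    so far in a context; the building block of context-tree weighting."""
    return (ones + 0.5) / (ones + zeros + 1.0)

# The estimate adapts toward the empirical bit statistics of the context:
assert kt_probability(0, 0) == 0.5            # no evidence yet: uniform
assert kt_probability(7, 1) == 7.5 / 9.0      # a strongly one-biased context
assert kt_probability(1, 7) == 1.5 / 9.0
```

Feeding the arithmetic coder a sharper probability estimate is what yields the bitrate savings reported above, at the cost of extra per-symbol work in the decoder.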

  6. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Chen Huijun

    2008-01-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both STTC and VLEC to improve overall decoding performance. Issues such as the inseparable systematic information at the symbol level, the asymmetric trellis structure of VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between STTC and VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of STTC, the joint decoding yields better performance, though with increased complexity.

  7. Joint Decoding of Concatenated VLEC and STTC System

    Directory of Open Access Journals (Sweden)

    Huijun Chen

    2008-07-01

    Full Text Available We consider the decoding of wireless communication systems with both source coding in the application layer and channel coding in the physical layer for high-performance transmission over fading channels. Variable length error correcting codes (VLECs) and space time trellis codes (STTCs) are used to provide bandwidth efficient data compression as well as coding and diversity gains. At the receiver, an iterative joint source and space time decoding scheme is developed to utilize redundancy in both STTC and VLEC to improve overall decoding performance. Issues such as the inseparable systematic information at the symbol level, the asymmetric trellis structure of VLEC, and information exchange between bit and symbol domains have been considered in the maximum a posteriori probability (MAP) decoding algorithm. Simulation results indicate that the developed joint decoding scheme achieves a significant decoding gain over separate decoding in fading channels, whether or not the channel information is perfectly known at the receiver. Furthermore, how rate allocation between STTC and VLEC affects the performance of the joint source and space-time decoder is investigated. Different systems with a fixed overall information rate are studied. It is shown that for a system with more redundancy dedicated to the source code and a higher order modulation of STTC, the joint decoding yields better performance, though with increased complexity.

  8. Grasp movement decoding from premotor and parietal cortex.

    Science.gov (United States)

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.

  9. Efficient Decoding of Partial Unit Memory Codes of Arbitrary Rate

    CERN Document Server

    Wachter-Zeh, Antonia; Bossert, Martin

    2012-01-01

    Partial Unit Memory (PUM) codes are a special class of convolutional codes, which are often constructed by means of block codes. Decoding of PUM codes may take advantage of existing decoders for the block code. The Dettmar-Sorger algorithm is an efficient decoding algorithm for PUM codes, but allows only low code rates. The same restriction holds for several known PUM code constructions. In this paper, an arbitrary-rate construction, the analysis of its distance parameters and a generalized decoding algorithm for PUM codes of arbitrary rate are provided. The correctness of the algorithm is proven and it is shown that its complexity is cubic in the length.

  10. Oriented modulation for watermarking in direct binary search halftone images.

    Science.gov (United States)

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.
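The final classification step described above — a naïve Bayes decision over extracted features — can be sketched with a one-dimensional Gaussian model per watermark symbol. The feature statistics below are hypothetical stand-ins for the filter outputs in the record, not the actual trained models.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def naive_bayes_decode(feature, stats):
    """Pick the watermark symbol whose (assumed Gaussian) feature model gives
    the highest likelihood. `stats` maps symbol -> (mean, variance)."""
    return max(stats, key=lambda s: gaussian_logpdf(feature, *stats[s]))

# Hypothetical per-symbol feature statistics learned from training halftones:
stats = {0: (0.2, 0.05), 1: (0.8, 0.05)}
assert naive_bayes_decode(0.25, stats) == 0
assert naive_bayes_decode(0.9, stats) == 1
```

With several features, naïve Bayes simply sums the per-feature log likelihoods under its independence assumption before taking the maximum.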

  11. Optimization decoding algorithm for LDPC codes

    Institute of Scientific and Technical Information of China (English)

    林志国; 彭卫东; 林晋福; 檀蕊莲; 宋晓鸥

    2014-01-01

    An improved optimization decoding algorithm is proposed for binary low-density parity-check (LDPC) codes under any discrete Gaussian channel. First, through theoretical analysis and mathematical derivation, a mathematical model is constructed for the decoding problem. Then, the optimization decoding algorithm is demonstrated for the model. Finally, several simulations are carried out on VC6.0 to evaluate the algorithm's decoding performance and efficiency, and comparisons with other algorithms are made. The results show that the algorithm outperforms the former algorithm in both bit-error rate and decoding efficiency, and also outperforms the common min-sum algorithm in bit-error rate. The results are in good agreement with the analysis.
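The min-sum algorithm used as the baseline above approximates the LDPC check-node update: each outgoing message takes the product of the signs and the minimum magnitude of the incoming LLRs on the *other* edges. A minimal sketch of that update rule (illustrative, not the paper's optimization algorithm):

```python
def min_sum_check_node(llrs):
    """Min-sum check-node update: for each edge, the outgoing message carries
    the sign product and the minimum magnitude of the other incoming LLRs."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            sign *= 1 if v >= 0 else -1
        out.append(sign * min(abs(v) for v in others))
    return out

msgs = min_sum_check_node([2.0, -1.5, 0.5])
assert msgs == [-0.5, 0.5, -1.5]
```

Because it replaces the exact tanh-domain computation with signs and minima, min-sum is cheap but slightly overestimates reliability, which is why more refined formulations can beat it on bit-error rate.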

  12. A primer on equalization, decoding and non-iterative joint equalization and decoding

    Science.gov (United States)

    Myburgh, Hermanus C.; Olivier, Jan C.

    2013-12-01

    In this article, a general model for non-iterative joint equalization and decoding is systematically derived for use in systems transmitting convolutionally encoded BPSK-modulated information through a multipath channel, with and without interleaving. Optimal equalization and decoding are discussed first by presenting the maximum likelihood sequence estimation and maximum a posteriori probability algorithms and relating them to equalization in single-carrier channels with memory, and to the decoding of convolutional codes. The non-iterative joint equalizer/decoder (NI-JED) is then derived for the case where no interleaver is used, as well as for the case when block interleavers of varying depths are used, and complexity analyses are performed in each case. Simulations are performed to compare the performance of the NI-JED to that of a conventional turbo equalizer (CTE), and it is shown that the NI-JED outperforms the CTE, although at much higher computational cost. This article serves to explain the state of the art to students and professionals in the field of wireless communication systems, presenting these fundamental topics clearly and concisely.

  13. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  14. GLRT-Optimal Noncoherent Lattice Decoding

    CERN Document Server

    Ryan, Daniel J; Clarkson, I Vaughan L

    2007-01-01

    This paper presents new low-complexity lattice-decoding algorithms for noncoherent block detection of QAM and PAM signals over complex-valued fading channels. The algorithms are optimal in terms of the generalized likelihood ratio test (GLRT). The computational complexity is polynomial in the block length, making GLRT-optimal noncoherent detection feasible for implementation. We also provide even lower complexity suboptimal algorithms. Simulations show that the suboptimal algorithms have performance indistinguishable from the optimal algorithms. Finally, we consider block based transmission, and propose to use noncoherent detection as an alternative to pilot assisted transmission (PAT). The new technique is shown to outperform PAT.

  15. Eclipsing binaries in open clusters

    DEFF Research Database (Denmark)

    Southworth, John; Clausen, J.V.

    2006-01-01

    Stars: fundamental parameters - Stars : binaries : eclipsing - Stars: Binaries: spectroscopic - Open clusters and ass. : general Udgivelsesdato: 5 August

  16. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

    Yap, Regina; Van Der Leu, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  17. A Method of Coding and Decoding in Underwater Image Transmission

    Institute of Scientific and Technical Information of China (English)

    程恩

    2001-01-01

    A new method of coding and decoding in the system of underwater image transmission is introduced, including the rapid digital frequency synthesizer in multiple frequency shift keying, the image data generator, the image grayscale decoder with an intelligent fuzzy algorithm, and image restoration and display on a microcomputer.

  18. Interim Manual for the DST: Decoding Skills Test.

    Science.gov (United States)

    Richardson, Ellis; And Others

    The Decoding Skills Test (DST) was developed to provide a detailed measurement of decoding skills which could be used in research on developmental dyslexia. Another purpose of the test is to provide a diagnostic-prescriptive instrument to be used in the evaluation of, and program planning for, children needing remedial reading. The test is…

  19. A VLSI design for a trace-back Viterbi decoder

    Science.gov (United States)

    Truong, T. K.; Shih, Ming-Tang; Reed, Irving S.; Satorius, E. H.

    1992-01-01

    A systolic Viterbi decoder for convolutional codes is developed which uses the trace-back method to reduce the amount of data needed to be stored in registers. It is shown that this new algorithm requires a smaller chip size and achieves a faster decoding time than other existing methods.
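    The trace-back idea can be illustrated in software: the forward pass stores only the winning (predecessor, input bit) pair per state per step, and the decoded bits are recovered by walking those pointers backwards from the best final state. Below is a minimal hard-decision sketch for the standard rate-1/2, K=3 code with generators (7,5) octal; the example code and all names are assumptions, not the paper's VLSI design.

```python
# Hard-decision Viterbi decoding with trace-back for the rate-1/2, K=3
# convolutional code with generators (7,5) octal.

def encode(bits):
    s = 0  # two-bit shift-register state; bit 1 holds the most recent input
    out = []
    for b in bits:
        out += [b ^ (s >> 1) ^ (s & 1),  # g0 = 1 + D + D^2
                b ^ (s & 1)]             # g1 = 1 + D^2
        s = 2 * b + (s >> 1)
    return out

def viterbi_decode(rx, n):
    INF = 10 ** 9
    metric = [0, INF, INF, INF]  # encoder starts in state 0
    history = []                 # per step: back[state] = (prev_state, bit)
    for t in range(n):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new, back = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                o0, o1 = b ^ (s >> 1) ^ (s & 1), b ^ (s & 1)
                ns = 2 * b + (s >> 1)
                m = metric[s] + (o0 != r0) + (o1 != r1)  # Hamming branch metric
                if m < new[ns]:
                    new[ns], back[ns] = m, (s, b)
        metric = new
        history.append(back)
    s = metric.index(min(metric))  # trace back from the best final state
    bits = []
    for back in reversed(history):
        s, b = back[s]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0]
coded = encode(msg + [0, 0])   # two zero tail bits terminate the trellis
rx = coded[:]
rx[2] ^= 1                     # one channel bit error
print(viterbi_decode(rx, len(msg) + 2)[:len(msg)])  # -> [1, 0, 1, 1, 0, 0]
```

    Since only one survivor bit per state per step must be kept, trace-back needs far less storage than keeping full register-exchange paths, which is the memory saving the abstract refers to.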

  20. Socialization Processes in Encoding and Decoding: Learning Effective Nonverbal Behavior.

    Science.gov (United States)

    Feldman, Robert S.; Coats, Erik

    This study examined the relationship of nonverbal encoding and decoding skills to the level of exposure to television. Subjects were children in second through sixth grade. Three nonverbal skills (decoding, spontaneous encoding, and posed encoding) were assessed for each of five emotions: anger, disgust, fear or surprise, happiness, and sadness.…

  1. Decoding Information in the Human Hippocampus: A User's Guide

    Science.gov (United States)

    Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.

    2012-01-01

    Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…

  2. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes....

  3. Building Bridges from the Decoding Interview to Teaching Practice

    Science.gov (United States)

    Pettit, Jennifer; Rathburn, Melanie; Calvert, Victoria; Lexier, Roberta; Underwood, Margot; Gleeson, Judy; Dean, Yasmin

    2017-01-01

    This chapter describes a multidisciplinary faculty self-study about reciprocity in service-learning. The study began with each coauthor participating in a Decoding interview. We describe how Decoding combined with collaborative self-study had a positive impact on our teaching practice.

  4. Decoding Technique of Concatenated Hadamard Codes and Its Performance

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The decoding technique of concatenated Hadamard codes and its performance are studied. Efficient soft-in/soft-out decoding algorithms based on the fast Hadamard transform are developed. Performance required by CDMA mobile or PCS speech services, e.g., BER = 10^-3, can be achieved at Eb/N0 = 0.9 dB using a short interleaving length of 192 bits.
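    The fast-Hadamard-transform decoding step can be sketched generically: correlating a received +/-1 vector against all rows of an order-N Hadamard matrix at once costs only N log N additions via the fast Walsh-Hadamard transform, and the largest-magnitude output identifies the codeword (its sign tells whether the row was complemented). This is a generic Hadamard/first-order Reed-Muller decoding sketch, not the paper's concatenated scheme.

```python
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform (length must be a power of 2);
    returns a new array in natural (Sylvester) ordering."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def hadamard_decode(soft):
    """One FWHT replaces 2^m separate correlations: the largest-magnitude
    coefficient names the best-matching Hadamard row, its sign the polarity."""
    t = fwht(soft)
    k = int(np.argmax(np.abs(t)))
    return k, (1 if t[k] > 0 else -1)

# Row 5 of the order-8 Sylvester-Hadamard matrix, with one chip flipped.
cw = np.array([(-1.0) ** bin(5 & j).count("1") for j in range(8)])
rx = cw.copy()
rx[0] = -rx[0]
print(hadamard_decode(rx))  # -> (5, 1)
```

    Feeding the transform soft channel values instead of hard +/-1 decisions gives the soft-in behaviour mentioned in the abstract; the argmax step is unchanged.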

  5. Latent state-space models for neural decoding.

    Science.gov (United States)

    Aghagolzadeh, Mehdi; Truccolo, Wilson

    2014-01-01

    Ensembles of single neurons in motor cortex can show strong low-dimensional collective dynamics. In this study, we explore an approach where neural decoding is applied to estimated low-dimensional dynamics instead of to the full recorded neuronal population. A latent state-space model (SSM) approach is used to estimate the low-dimensional neural dynamics from the measured spiking activity in a population of neurons. A second state-space model representation is then used to decode kinematics, via a Kalman filter, from the estimated low-dimensional dynamics. The latent SSM-based decoding approach is illustrated on neuronal activity recorded from primary motor cortex in a monkey performing naturalistic 3-D reach and grasp movements. Our analysis shows that 3-D reach decoding performance based on estimated low-dimensional dynamics is comparable to the decoding performance based on the full recorded neuronal population.
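    The Kalman-filter decoding stage can be sketched generically: given linear dynamics and a linear observation model, the filter alternates a prediction step with a measurement update. The toy setup below (a constant-velocity latent state observed through noisy 1-D position measurements, all parameters hypothetical) stands in for decoding kinematics from low-dimensional neural dynamics.

```python
import numpy as np

def kalman_filter(zs, A, C, Q, R, x0, P0):
    """Textbook Kalman filter: alternate a dynamics prediction (A, Q) with a
    measurement update through observation matrix C and noise covariance R."""
    x, P, I = x0.copy(), P0.copy(), np.eye(len(x0))
    est = []
    for z in zs:
        x, P = A @ x, A @ P @ A.T + Q                 # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # Kalman gain
        x, P = x + K @ (z - C @ x), (I - K @ C) @ P   # update
        est.append(x.copy())
    return np.array(est)

# Hypothetical toy stand-in for kinematic decoding.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # position += velocity
C = np.array([[1.0, 0.0]])               # observe position only
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
rng = np.random.default_rng(0)
zs = [np.array([t + rng.normal(0.0, 0.05)]) for t in range(1, 21)]
est = kalman_filter(zs, A, C, Q, R, np.array([0.0, 1.0]), np.eye(2))
```

    On this synthetic run the final position estimate lands near 20 and the velocity estimate near 1; in the paper's setting the observation would be the estimated latent neural state rather than a raw position measurement.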

  6. An efficient VLSI implementation of H.264/AVC entropy decoder

    Institute of Scientific and Technical Information of China (English)

    PARK Jongsik; MOON Jeonhak; LEE Seongsoo

    2010-01-01

    This paper proposes an efficient H.264/AVC entropy decoder. It requires no ROM/RAM fabrication process, which decreases fabrication cost and increases operation speed. This was achieved by optimizing lookup tables and internal buffers, which significantly improves area, speed, and power. The proposed entropy decoder does not rely on an embedded processor for bitstream manipulation, which also improves area, speed, and power. Its gate count and maximum operating frequency are 77515 gates and 175 MHz in a 0.18 um fabrication process, respectively. The proposed entropy decoder needs 2303 cycles on average for one macroblock decoding. It can run at 28 MHz to meet the real-time processing requirement for CIF-format video decoding in mobile applications.

  7. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    EEG based brain state decoding has numerous applications. State of the art decoding is based on processing of the multivariate sensor space signal; however, evidence is mounting that EEG source reconstruction can assist decoding. EEG source imaging leads to high-dimensional representations...... of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single-trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...... imaging does not lead to an improved decoding. We design a distributed pipeline in which the classifier has access to brain-wide features, which in turn does lead to a 15% reduction in the error rate using source-space features. Hence, our work presents supporting evidence for the hypothesis that source...

  8. FPGA Prototyping of RNN Decoder for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Salcic Zoran

    2006-01-01

    Full Text Available This paper presents prototyping of a recurrent-type neural network (RNN) convolutional decoder using system-level design specification and a design flow that enables easy mapping to the target FPGA architecture. Implementation and performance measurement results have shown that an RNN decoder for hard-decision decoding, coupled with a simple hard-limiting neuron activation function, results in very low complexity which easily fits into a standard Altera FPGA. Moreover, the design methodology allowed modeling of a complete testbed for prototyping RNN decoders in simulation and in a real-time environment (the same FPGA), thus enabling evaluation of the BER performance characteristics of the decoder for various communication channel conditions in real time.

  9. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  10. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  11. Improved decoding of limb-state feedback from natural sensors.

    Science.gov (United States)

    Wagenaar, J B; Ventura, V; Weber, D J

    2009-01-01

    Limb state feedback is of great importance for achieving stable and adaptive control of FES neuroprostheses. A natural way to determine limb state is to measure and decode the activity of primary afferent neurons in the limb. The feasibility of doing so has been demonstrated by [1] and [2]. Despite positive results, some drawbacks in these works are associated with the application of reverse regression techniques for decoding the afferent neuronal signals. Decoding methods that are based on direct regression are now favored over reverse regression for decoding neural responses in higher regions in the central nervous system [3]. In this paper, we apply a direct regression approach to decode the movement of the hind limb of a cat from a population of primary afferent neurons. We show that this approach is more principled, more efficient, and more generalizable than reverse regression.

  12. rMPI : increasing fault resiliency in a message-passing environment.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Riesen, Rolf (IBM Research, Ireland); Brightwell, Ronald Brian

    2011-04-01

    As High-End Computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are unsuitable at these scales due to excessive overheads predicted to more than double an application's time to solution. Redundant computation, long used in distributed and mission-critical systems, has been suggested as an alternative to checkpoint-restart on its own. In this paper we describe the rMPI library, which enables portable and transparent redundant computation for MPI applications. We detail the design of the library as well as two replica consistency protocols, outline the overheads of this library at scale on a number of real-world applications, and finally outline the significant increase in an application's time to solution at extreme scale as well as the scenarios in which redundant computation makes sense.

  13. Ein effizientes Message-Passing-Interface (MPI) für HiPPI

    OpenAIRE

    Beisel, Thomas

    1996-01-01

    This program description discusses the internal structure of MPI and explains why an extension of its functionality is required. The proposed approach for extending MPI is then presented.

  14. Scalable detection of statistically significant communities and hierarchies: message-passing for modularity

    CERN Document Server

    Zhang, Pan

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions with almost the same modularity that are poorly correlated to each other; it can also overfit, producing illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian, and computing the marginals of the resulting Gibbs distribution. If we assign each node to its most-likely community under these marginals, we claim that, unlike the ground state, the resulting partition is a good measure of statistically-significant community structure. We propose an efficient Belief Propagation (BP) algorithm to compute these marginals. In random networks with no true communities, the system has two phases as we vary the temperature: a paramagnetic phase where all marginals are equal, and a spin glass phase where BP fails to converge. In networks with real community structure, there is an additional retrieval phase where BP converges, and ...

  15. A model based message passing approach for flexible and scalable home automation controllers

    Energy Technology Data Exchange (ETDEWEB)

    Bienhaus, D. [INNIAS GmbH und Co. KG, Frankenberg (Germany); David, K.; Klein, N.; Kroll, D. [ComTec Kassel Univ., SE Kassel Univ. (Germany); Heerdegen, F.; Jubeh, R.; Zuendorf, A. [Kassel Univ. (Germany). FG Software Engineering; Hofmann, J. [BSC Computer GmbH, Allendorf (Germany)

    2012-07-01

    There is a large variety of home automation systems, which are largely proprietary systems from different vendors. In addition, the configuration and administration of home automation systems is frequently a very complex task, especially if more complex functionality is to be achieved. Therefore, an open model for home automation was developed that is especially designed for easy integration of various home automation systems. This solution also provides a simple modeling approach that is inspired by typical home automation components like switches, timers, etc. In addition, a model-based technology to achieve rich functionality and usability was implemented. (orig.)

  16. Scalable High Performance Message Passing over InfiniBand for Open MPI

    Energy Technology Data Exchange (ETDEWEB)

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale, but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.

  18. An Efficient Clustering Technique for Message Passing between Data Points using Affinity Propagation

    Directory of Open Access Journals (Sweden)

    D. NAPOLEON,

    2011-01-01

    Full Text Available A wide range of clustering algorithms is available in the literature, and clustering remains an open area for researchers. The k-means algorithm, given by MacQueen in 1967, is one of the most basic and simple partitioning clustering techniques. The clustering algorithm used in this paper is affinity propagation. Whereas the number of clusters k must be supplied by the user in k-means, affinity propagation finds clusters with much lower error than the other methods, and it does so in less than one-hundredth the amount of time, by passing messages between data points. In this paper we analyze the k-means, efficient k-means, and affinity propagation clustering algorithms on a colon dataset. The results show that affinity propagation gives much lower error than the other algorithms, with good average accuracy.
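    Affinity propagation is itself a message-passing algorithm: "responsibility" messages r(i,k) and "availability" messages a(i,k) are exchanged between data points until exemplars emerge as the argmax of a + r per row. Below is a plain-numpy sketch of the standard damped updates; the preference value placed on the diagonal of the similarity matrix is an assumed parameter, not one from the paper.

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Exchange responsibility (R) and availability (A) messages over the
    similarity matrix S; each point's exemplar is argmax_k (A + R)[i, k]."""
    n = S.shape[0]
    R, A = np.zeros((n, n)), np.zeros((n, n))
    rows = np.arange(n)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        top = AS.argmax(axis=1)
        first = AS[rows, top]
        AS[rows, top] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[rows, top] = S[rows, top] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' != i,k} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0, keepdims=True) - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)

# Two well-separated 1-D clusters; the diagonal "preference" of -10 controls
# how readily points become exemplars (an assumed value).
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
np.fill_diagonal(S, -10.0)
labels = affinity_propagation(S)
print(labels)
```

    With this preference the three left points agree on one exemplar and the three right points on another, illustrating that the cluster count is a by-product of the messages rather than an input as in k-means.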

  19. Shared memory and message passing revisited in the many-core era

    CERN Document Server

    CERN. Geneva

    2016-01-01

    In the 70s, Edsger Dijkstra, Per Brinch Hansen and C.A.R. Hoare introduced the fundamental concepts for concurrent computing. It was clear that concrete communication mechanisms were required in order to achieve effective concurrency. Whether you're developing a multithreaded program running on a single node or a distributed system spanning hundreds of thousands of cores, the choice of the communication mechanism for your system must be made intelligently because of the implicit programmability, performance and scalability trade-offs. With the emergence of many-core computing architectures, many assumptions may not be true anymore. In this talk we will try to provide insight into the characteristics of these communication models by providing basic theoretical background and then focus on concrete practical examples based on indicative use case scenarios. The case studies of this presentation cover popular programming models, operating systems and concurrency frameworks in the context of many-core processors.

  20. Techniques for Enabling Highly Efficient Message Passing on Many-Core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Si, Min; Balaji, Pavan; Ishikawa, Yutaka

    2015-01-01

    Many-core architecture provides a massively parallel environment with dozens of cores and hundreds of hardware threads. Scientific application programmers are increasingly looking at ways to utilize such large numbers of lightweight cores for various programming models. Efficiently executing these models on massively parallel many-core environments is not easy, however, and performance may be degraded in various ways. The first author's doctoral research focuses on exploiting the capabilities of many-core architectures on widely used MPI implementations. While application programmers have studied several approaches to achieve better parallelism and resource sharing, many of those approaches still face communication problems that degrade performance. In the thesis, we investigate the characteristics of MPI on such massively threaded architectures and propose two efficient strategies -- a multi-threaded MPI approach and a process-based asynchronous model -- to optimize MPI communication for modern scientific applications.

  1. Run-time scheduling and execution of loops on message passing machines

    Science.gov (United States)

    Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry

    1990-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
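    The precompute-then-execute strategy described here is commonly called the inspector/executor pattern. In the toy sketch below (hypothetical names; a real implementation would schedule actual interprocessor communication), the inspector scans the loop's indirection data once to learn which off-processor values are needed, and the executor replays that schedule to run the local part of an irregular loop.

```python
# Inspector/executor sketch: precompute the communication schedule once,
# then reuse it every time the irregular loop executes.

def inspect(edges, owner, me):
    """Precompute the sorted set of remote indices this worker must fetch."""
    return sorted({j for i, j in edges if owner[i] == me and owner[j] != me})

def execute(edges, owner, me, x, fetched):
    """Run the local part of the irregular loop y[i] += x[j], reading
    off-worker values from the prefetched buffer."""
    y = {}
    for i, j in edges:
        if owner[i] != me:
            continue  # another worker owns this row
        xj = x[j] if owner[j] == me else fetched[j]
        y[i] = y.get(i, 0.0) + xj
    return y

owner = [0, 0, 1, 1]                      # which worker owns each index
x = [1.0, 2.0, 3.0, 4.0]
edges = [(0, 1), (0, 2), (1, 3), (2, 0), (3, 2)]
need = inspect(edges, owner, 0)           # -> [2, 3]
fetched = {j: x[j] for j in need}         # stands in for the communication step
print(execute(edges, owner, 0, x, fetched))  # -> {0: 5.0, 1: 4.0}
```

    The payoff is that the (expensive) inspection runs once, while the (cheap) execution template is reused across the many iterations typical of sparse solvers.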

  2. Message-Passing Receiver for OFDM Systems over Highly Delay-Dispersive Channels

    DEFF Research Database (Denmark)

    Barbu, Oana-Elena; Manchón, Carles Navarro; Rom, Christian

    2017-01-01

    Propagation channels with maximum excess delay exceeding the duration of the cyclic prefix (CP) in OFDM systems cause intercarrier and intersymbol interference which, unless accounted for, degrade the receiver performance. Using tools from Bayesian inference and sparse signal reconstruction, we...... and future wireless communications systems. By enabling the OFDM receiver experiencing these harsh conditions to locally cancel the interference, our design circumvents the spectral efficiency loss incurred by extending the CP duration, otherwise a straightforward solution. Furthermore, it sets the premises...

  3. Lattice Sequential Decoder for Coded MIMO Channel: Performance and Complexity Analysis

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    In this paper, the performance limits of the lattice sequential decoder for the lattice space-time coded MIMO channel are analysed. We determine the rates achievable by lattice coding and sequential decoding applied to such a channel. The diversity-multiplexing tradeoff (DMT) under lattice sequential decoding is derived as a function of the decoder's parameter, the bias term. The bias parameter is critical for controlling the amount of computation required at the decoding stage. Achieving low decoding complexity requires increasing the value of the bias term. However, this is done at the expense of losing the optimal tradeoff of the channel. We show how such a decoder can bridge the gap between the lattice decoder and low-complexity decoders. Moreover, the computational complexity of the lattice sequential decoder is analysed. Specifically, we derive the tail distribution of the decoder's computational complexity in the high signal-to-noise ratio regime. Similar to the conventional sequential decoder used in discrete memoryless channels,...

  4. Implementation of Huffman Decoder on Fpga

    Directory of Open Access Journals (Sweden)

    Safia Amir Dahri

    2016-01-01

    Full Text Available Lossless data compression algorithms are among the most widely used algorithms in data transmission, reception, and storage systems, as they increase the data rate and speed and save space on storage devices. Nowadays, different algorithms are implemented in hardware to obtain the benefits of hardware realization. Hardware implementation of algorithms, digital signal processing algorithms, and filter realization is done on programmable devices, i.e., FPGAs. Among lossless data compression algorithms, the Huffman algorithm is the most widely used because of its variable-length coding and many other benefits. Huffman algorithms are used in many applications in software form, e.g., Zip and Unzip, communication, etc. In this paper, the Huffman algorithm is implemented on a Xilinx Spartan 3E board, programmed with the Xilinx tool ISE 8.2i. The program is written in VHDL, and text data previously encoded by the Huffman algorithm is decoded by the Huffman algorithm on the hardware board. In order to visualize the output clearly in waveforms, the same code is simulated in ModelSim v6.4. The Huffman decoder is also implemented in MATLAB for verification of its operation. The FPGA is a configurable device which is efficient in all these respects; Huffman algorithms are also applied in text applications, image processing, video streaming, and many other applications.
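    The core of any Huffman decoder, in hardware or software, is a walk over a prefix-free code table: bits are accumulated until they match a codeword, which can never be a prefix of another. A minimal software sketch with a hypothetical four-symbol code (the FPGA design above implements the same logic in VHDL):

```python
def huffman_decode(bits, code):
    """Stream the bits through an inverted prefix-code table; because no
    codeword is a prefix of another, the first match is always correct."""
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    if buf:
        raise ValueError("bitstream ends mid-codeword")
    return "".join(out)

# A hypothetical four-symbol prefix code (not the paper's table).
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
print(huffman_decode("010110111", code))  # -> abcd
```

    A hardware realization replaces the string buffer with a walk down a stored code tree, one node per clock cycle, but the decision structure is identical.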

  5. SDRAM bus schedule of HDTV video decoder

    Science.gov (United States)

    Wang, Hui; He, Yan L.; Yu, Lu

    2001-12-01

    In this paper, a time division multiplexed task scheduling (TDM) is designed for HDTV video decoder is proposed. There are three tasks: to fetch decoded data from SDRAM for displaying (DIS), read the reference data from SDRAM for motion compensating (REF) and write the motion compensated data back to SDRAM (WB) on the bus. The proposed schedule is based on the novel 4 banks interlaced SDRAM storage structure which results in less overhead on read/write time. Two SDRAM of 64M bits (4Bank×512K×32bit) are used. Compared with two banks, the four banks storage strategy read/write data with 45% less time. Therefore the process data rates for those three tasks are reduced. TDM is developed by round robin scheduling and fixed slot allocating. There are both MB slot and task slot. As a result the conflicts on bus are avoided, and the buffer size is reduced 48% compared with the priority bus scheduling. Moreover, there is a compacted bus schedule for the worst case of stuffing owning to the reduced executing time on tasks. The size of buffer is reduced and the control logic is simplified.

  6. Decoding Humor Experiences from Brain Activity of People Viewing Comedy Movies

    Science.gov (United States)

    Sawahata, Yasuhito; Komine, Kazuteru; Morita, Toshiya; Hiruma, Nobuyuki

    2013-01-01

    Humans naturally have a sense of humor. Experiencing humor not only encourages social interactions, but also produces positive physiological effects on the human body, such as lowering blood pressure. Recent neuro-imaging studies have shown evidence for distinct mental state changes at work in people experiencing humor. However, the temporal characteristics of these changes remain elusive. In this paper, we objectively measured humor-related mental states from single-trial functional magnetic resonance imaging (fMRI) data obtained while subjects viewed comedy TV programs. Measured fMRI data were labeled on the basis of the lag before or after the viewer’s perception of humor (humor onset) determined by the viewer-reported humor experiences during the fMRI scans. We trained multiple binary classifiers, or decoders, to distinguish between fMRI data obtained at each lag from ones obtained during a neutral state in which subjects were not experiencing humor. As a result, in the right dorsolateral prefrontal cortex and the right temporal area, the decoders showed significant classification accuracies even at two seconds ahead of the humor onsets. Furthermore, given a time series of fMRI data obtained during movie viewing, we found that the decoders with significant performance were also able to predict the upcoming humor events on a volume-by-volume basis. Taking into account the hemodynamic delay, our results suggest that the upcoming humor events are encoded in specific brain areas up to about five seconds before the awareness of experiencing humor. Our results provide evidence that there exists a mental state lasting for a few seconds before actual humor perception, as if a viewer is expecting the future humorous events. PMID:24324656

  7. Decoding humor experiences from brain activity of people viewing comedy movies.

    Directory of Open Access Journals (Sweden)

    Yasuhito Sawahata

    Full Text Available Humans naturally have a sense of humor. Experiencing humor not only encourages social interactions, but also produces positive physiological effects on the human body, such as lowering blood pressure. Recent neuro-imaging studies have shown evidence for distinct mental state changes at work in people experiencing humor. However, the temporal characteristics of these changes remain elusive. In this paper, we objectively measured humor-related mental states from single-trial functional magnetic resonance imaging (fMRI) data obtained while subjects viewed comedy TV programs. Measured fMRI data were labeled on the basis of the lag before or after the viewer's perception of humor (humor onset), determined by the viewer-reported humor experiences during the fMRI scans. We trained multiple binary classifiers, or decoders, to distinguish between fMRI data obtained at each lag from ones obtained during a neutral state in which subjects were not experiencing humor. As a result, in the right dorsolateral prefrontal cortex and the right temporal area, the decoders showed significant classification accuracies even at two seconds ahead of the humor onsets. Furthermore, given a time series of fMRI data obtained during movie viewing, we found that the decoders with significant performance were also able to predict the upcoming humor events on a volume-by-volume basis. Taking into account the hemodynamic delay, our results suggest that the upcoming humor events are encoded in specific brain areas up to about five seconds before the awareness of experiencing humor. Our results provide evidence that there exists a mental state lasting for a few seconds before actual humor perception, as if a viewer is expecting the future humorous events.

  8. The Differential Contributions of Auditory-Verbal and Visuospatial Working Memory on Decoding Skills in Children Who Are Poor Decoders

    Science.gov (United States)

    Squires, Katie Ellen

    2013-01-01

    This study investigated the differential contribution of auditory-verbal and visuospatial working memory (WM) on decoding skills in second- and fifth-grade children identified with poor decoding. Thirty-two second-grade students and 22 fifth-grade students completed measures that assessed simple and complex auditory-verbal and visuospatial memory,…

  9. The Modified Minsum Decoding Algorithm of LDPC Code in the Simplified Difference-domain

    Institute of Scientific and Technical Information of China (English)

    高兴龙; 王中训; 颜飞; 殷熔煌; 陈明阳

    2013-01-01

    A modified min-sum decoding algorithm for LDPC (low-density parity-check) codes in the simplified difference domain is proposed, based on a detailed analysis of difference-domain LDPC decoding. Simulations show that, on an AWGN (additive white Gaussian noise) channel with BPSK (binary phase shift keying) modulation, the proposed algorithm suffers almost no performance degradation compared with the log-domain BP (belief propagation) decoding algorithm and the difference-domain decoding algorithm, performs better than the log-domain min-sum algorithm, and greatly reduces the computational complexity.
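    For reference, the log-domain normalized min-sum rule that such papers modify approximates the belief-propagation check-node update by the product of incoming signs times a scaled minimum of the incoming extrinsic magnitudes. The numpy sketch below uses a (7,4) Hamming parity-check matrix as a small stand-in code; the correction factor alpha and the example are assumptions, not the paper's difference-domain algorithm.

```python
import numpy as np

def minsum_decode(H, llr, iters=20, alpha=0.8):
    """Normalized min-sum LDPC decoding: each check node returns the product
    of the signs times alpha * min of the incoming extrinsic magnitudes."""
    m, n = H.shape
    M = H * llr                      # variable-to-check messages (channel init)
    for _ in range(iters):
        E = np.zeros_like(M, dtype=float)
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = M[c, idx]
            for t, v in enumerate(idx):
                others = np.delete(msgs, t)
                E[c, v] = alpha * np.prod(np.sign(others)) * np.abs(others).min()
        total = llr + E.sum(axis=0)  # posterior LLRs
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():
            break                    # all parity checks satisfied
        M = H * (total - E)          # extrinsic: subtract each check's own message
    return hard

# (7,4) Hamming parity-check matrix as a toy stand-in for an LDPC code.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 2.0, -1.0, 2.0, 2.0, 2.0, 2.0])  # all-zero word, bit 2 hit
print(minsum_decode(H, llr))  # -> [0 0 0 0 0 0 0]
```

    Replacing the min with the exact tanh-rule recovers full BP; the scaling by alpha is the usual normalization that recovers most of the lost performance at negligible cost.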

  10. Binary mask programmable hologram.

    Science.gov (United States)

    Tsang, P W M; Poon, T-C; Zhou, Changhe; Cheung, K W K

    2012-11-19

    We report, for the first time, the concept and generation of a novel Fresnel hologram called the digital binary mask programmable hologram (BMPH). A BMPH comprises a static, high-resolution binary grating that is overlaid with a lower-resolution binary mask. The reconstructed image of the BMPH can be programmed to approximate a target image (including both intensity and depth information) by configuring the pattern of the binary mask with a simple genetic algorithm (SGA). As the low-resolution binary mask can be realized with less stringent display technology, our method enables the development of simple and economical holographic video display.

  11. Performance comparison of binary modulation schemes for visible light communication

    KAUST Repository

    Park, Kihong

    2015-09-11

    In this paper, we investigate the power spectral density of several binary modulation schemes including variable on-off keying, variable pulse position modulation, and pulse dual slope modulation, which were previously proposed for visible light communication with dimming control. We also propose a novel slope-based modulation called differential chip slope modulation (DCSM) and develop a chip-based hard-decision receiver to demodulate the resulting signal, detect the chip sequence, and decode the input bit sequence. We show that the DCSM scheme can exploit spectral density more efficiently than the reference schemes while providing an error rate performance comparable to them. © 2015 IEEE.

  12. Evaluation of channel coding and decoding algorithms using discrete chaotic maps.

    Science.gov (United States)

    Escribano, Francisco J; López, Luis; Sanjuán, Miguel A F

    2006-03-01

    In this paper we address the design of channel encoding algorithms using one-dimensional nonlinear chaotic maps, starting from the desired invariant probability density function (pdf) of the data sent to the channel. We show that, with some simple changes, it is straightforward to take a known encoding framework based upon the Bernoulli shift map and adapt it to carry the information bit sequence produced by a binary source in a practical way. On the decoder side, we introduce four already known decoding algorithms and compare the resulting performance of the corresponding transmitters. The performance in terms of the bit error rate shows that the most important design clue is related not only to the pdf of the data produced by the chosen discrete map: the dynamics of the maps themselves are also of the highest importance and have to be taken into account when designing the whole transmitting and receiving system. We also show that good performance in such systems requires the extensive use of all the evidence stored in the whole chaotic sequence.
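The Bernoulli shift map mentioned in this record makes the encoding idea concrete: a bit sequence can be embedded in the binary expansion of the map's initial condition, and iterating the map reads the bits back out. A minimal sketch (noiseless; the paper's decoders must additionally handle channel noise):

```python
def bernoulli_encode(bits):
    """Embed a bit sequence in the initial condition of the Bernoulli
    shift map x -> 2x mod 1: the bits become its binary expansion."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))

def bernoulli_decode(x0, n):
    """Recover n bits by iterating the map and thresholding at 1/2,
    i.e. reading off the symbolic dynamics of the trajectory."""
    bits, x = [], x0
    for _ in range(n):
        bits.append(1 if x >= 0.5 else 0)
        x = (2.0 * x) % 1.0
    return bits
```

For short sequences the doublings are exact in binary floating point; a real transmitter works with noisy trajectory samples, which is where the four decoding algorithms compared in the paper come in.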

  13. On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes

    CERN Document Server

    Gidon, Ohad

    2012-01-01

    In this paper, we present an improved union bound on the Linear Programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality in probability theory, and it is minimized by Prim's minimum spanning tree algorithm. The bound calculation needs the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, but involves relatively low computational complexity. It is targeted at high-density parity-check codes, where the number of generators is extremely large and the generators are spread densely in the Euclidean space. We explore the generator density and make a comparison between different parity-check matrix representations. That density affects the improvement of the proposed bound over the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for ...

  14. Design & Implementation of 4 Bit Galois Encoder and Decoder on FPGA

    Directory of Open Access Journals (Sweden)

    Dr.Ravi Shankar Mishra

    2011-07-01

    Full Text Available Galois field theory deals with numbers that are binary in nature, have the properties of a mathematical "field," and are finite in scope. Galois operations match those of regular mathematics: addition (Ex-Or) and multiplication are common Galois operations, and logarithms, particularly, are handy for checking multiplication results. Galois field multipliers have been used both for coding theory and for cryptography. In this paper we present a GF(2^m) Galois field encoder and decoder and its verification on the FPGA Spartan xc3s50-5pq208 using the NIST-chosen irreducible polynomial. A complete verification of the multiplication, simulated in ModelSim 10.0a and implemented on the FPGA xc3s50-5pq208, will be presented to assure its validity.
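As a worked illustration of the field arithmetic involved, here is a minimal GF(2^4) multiplier. The irreducible polynomial x^4 + x + 1 is chosen here purely for illustration; the paper's hardware uses a NIST-chosen polynomial for its GF(2^m):

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply two elements of GF(2^4) represented as 4-bit integers,
    reducing modulo the irreducible polynomial x^4 + x + 1 (0b10011).
    Addition in the field is XOR; multiplication is shift-and-XOR
    (carry-less) with reduction whenever the degree reaches 4."""
    result = 0
    while b:
        if b & 1:          # current bit of b set: add (XOR) a into result
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:    # degree reached 4: subtract (XOR) the polynomial
            a ^= poly
    return result
```

For example, x * (x^3 + 1) reduces to 1 under this polynomial, so 2 and 9 are multiplicative inverses of each other in this field.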

  15. Reed-Solomon Turbo Product Codes for Optical Communications: From Code Optimization to Decoder Design

    Directory of Open Access Journals (Sweden)

    Le Bidan Raphaël

    2008-01-01

    Full Text Available Turbo product codes (TPCs) are an attractive solution to improve link budgets and reduce system costs by relaxing the requirements on expensive optical devices in high-capacity optical transport systems. In this paper, we investigate the use of Reed-Solomon (RS) turbo product codes for 40 Gbps transmission over optical transport networks and 10 Gbps transmission over passive optical networks. An algorithmic study is first performed in order to design RS TPCs that are compatible with the performance requirements imposed by the two applications. Then, a novel ultrahigh-speed parallel architecture for turbo decoding of product codes is described. A comparison with binary Bose-Chaudhuri-Hocquenghem (BCH) TPCs is performed. The results show that high-rate RS TPCs offer a better complexity/performance tradeoff than BCH TPCs for low-cost Gbps fiber optic communications.

  16. Reed-Solomon Turbo Product Codes for Optical Communications: From Code Optimization to Decoder Design

    Directory of Open Access Journals (Sweden)

    Ramesh Pyndiah

    2008-05-01

    Full Text Available Turbo product codes (TPCs) are an attractive solution to improve link budgets and reduce system costs by relaxing the requirements on expensive optical devices in high-capacity optical transport systems. In this paper, we investigate the use of Reed-Solomon (RS) turbo product codes for 40 Gbps transmission over optical transport networks and 10 Gbps transmission over passive optical networks. An algorithmic study is first performed in order to design RS TPCs that are compatible with the performance requirements imposed by the two applications. Then, a novel ultrahigh-speed parallel architecture for turbo decoding of product codes is described. A comparison with binary Bose-Chaudhuri-Hocquenghem (BCH) TPCs is performed. The results show that high-rate RS TPCs offer a better complexity/performance tradeoff than BCH TPCs for low-cost Gbps fiber optic communications.

  17. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

    Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and the decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  18. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.

  19. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  20. Approximate Decoding Approaches for Network Coded Correlated Data

    CERN Document Server

    Park, Hyunggon; Frossard, Pascal

    2011-01-01

    This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad-hoc network topologies. The correlated data are encoded independently at sensors and network coding is employed in the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at the decoder when perfect decoding is not possible due to losses or bandwidth bottlenecks. We first show that the source data similarity can be used at the decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters and in particular the size of finite coding fields on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the perfo...

  1. On Multiple Decoding Attempts for Reed-Solomon Codes

    CERN Document Server

    Nguyen, Phong S; Narayanan, Krishna R

    2010-01-01

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on the idea of using multiple trials of a simple RS decoding algorithm in combination with successively erasing or flipping a set of symbols or bits in each trial. In this paper, we present a framework based on rate-distortion (RD) theory to analyze such multiple-decoding algorithms for RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, it is shown that, for a single errors-and-erasures decoding trial, the condition for successful decoding is equivalent to the condition that the distortion is smaller than a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. We also consider an a...
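The fixed-threshold success condition this record alludes to has a classical concrete form: an RS code with minimum distance d corrects e errors and f erasures in one trial iff 2e + f < d. A small checker for that condition (the paper's distortion-measure machinery is omitted):

```python
def ee_decoding_succeeds(d, error_positions, erasure_positions):
    """Classical errors-and-erasures condition for a code with minimum
    distance d: a single decoding trial succeeds iff 2e + f < d, where
    f is the number of erased positions and e counts errors NOT covered
    by an erasure (erased positions are discarded by the decoder, so an
    error there costs nothing extra)."""
    erasures = set(erasure_positions)
    e = len(set(error_positions) - erasures)
    f = len(erasures)
    return 2 * e + f < d
```

The trade-off behind multiple-trial decoding is visible here: erasing a position that actually holds an error converts a cost of 2 into a cost of 1, while erasing a correct position adds 1 for nothing.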

  2. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decoding. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to the traditional GIDNC scheme. Completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.

  3. Decoding Generalized Concatenated Codes Using Interleaved Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor

    2008-01-01

    Generalized Concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code. As outer codes, we assume the most frequently used Reed-Solomon codes; as inner code, we assume some linear block code which can be decoded up to half its minimum distance. Decoding up to half the minimum distance of Generalized Concatenated codes is classically achieved by the Blokh-Zyablov-Dumer algorithm, which decodes iteratively by first using the inner decoder to get an estimate of the outer code words and then using an outer error/erasure decoder with a varying number of erasures determined by a set of pre-calculated thresholds. In this paper, a modified version of the Blokh-Zyablov-Dumer algorithm is proposed, which exploits the fact that a number of outer Reed-Solomon codes with average minimum distance d can be grouped into one single Interleaved Reed-Solomon code which can be decoded beyond d/2. This makes it possible to skip a number of decoding iterations on the one...

  4. Coding and Decoding for the Dynamic Decode and Forward Relay Protocol

    CERN Document Server

    Kumar, K Raj

    2008-01-01

    We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman...

  5. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  6. Interference Alignment for Clustered Multicell Joint Decoding

    CERN Document Server

    Chatzinotas, Symeon

    2010-01-01

    Multicell joint processing has been proven to be very efficient in overcoming the interference-limited nature of the cellular paradigm. However, for reasons of practical implementation global multicell joint decoding is not feasible and thus clusters of cooperating Base Stations have to be considered. In this context, intercluster interference has to be mitigated in order to harvest the full potential of multicell joint processing. In this paper, four scenarios of intercluster interference are investigated, namely a) global multicell joint processing, b) interference alignment, c) resource division multiple access and d) cochannel interference allowance. Each scenario is modelled and analyzed using the per-cell ergodic sum-rate capacity as a figure of merit. In this process, a number of theorems are derived for analytically expressing the asymptotic eigenvalue distributions of the channel covariance matrices. The analysis is based on principles from Free Probability theory and especially properties in the R a...

  7. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  8. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec
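A standard textbook example of local decodability (used here for illustration, not taken from this particular book) is the Hadamard code, where any single message bit can be recovered from just two codeword queries:

```python
import random

def hadamard_encode(msg_bits):
    """Hadamard code: the codeword entry at index a is the inner
    product <msg, a> mod 2, for every a in {0,1}^k, giving a codeword
    of length 2^k for a k-bit message."""
    k = len(msg_bits)
    def dot(a):
        return sum(msg_bits[i] & ((a >> i) & 1) for i in range(k)) % 2
    return [dot(a) for a in range(2 ** k)]

def locally_decode_bit(codeword, i, k):
    """Recover message bit i with two queries: pick a random index a
    and query positions a and a XOR e_i. By linearity, the XOR of the
    two answers equals <msg, e_i> = bit i. On an uncorrupted codeword
    this is exact; with a small fraction of corrupted positions, both
    queries are still correct with high probability over a."""
    a = random.randrange(2 ** k)
    return codeword[a] ^ codeword[a ^ (1 << i)]
```

This also shows the length penalty the record mentions: the codeword has 2^k bits for a k-bit message, the price paid for constant-query local decoding.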

  9. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond 4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respec

  10. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  11. Fast-Group-Decodable STBCs via Codes over GF(4)

    CERN Document Server

    Prasad, N Lakshmi

    2010-01-01

    In this paper we construct low decoding complexity STBCs by using the Pauli matrices as linear dispersion matrices. In this case the Hurwitz-Radon orthogonality condition is shown to be easily checked by transferring the problem to the $\mathbb{F}_4$ domain. The problem of constructing low decoding complexity STBCs is shown to be equivalent to finding certain codes over $\mathbb{F}_4$. It is shown that almost all known low complexity STBCs can be obtained by this approach. New codes are given that have the least known decoding complexity in particular ranges of rate.
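The transfer to $\mathbb{F}_4$ rests on the standard correspondence between Pauli matrices and bit pairs, under which (anti)commutation of two Pauli strings becomes a symplectic inner product over GF(2). A minimal sketch of that check (the mapping is standard; its use for the Hurwitz-Radon condition follows the paper):

```python
def pauli_commute(p1, p2):
    """Each single-qubit Pauli is a bit pair (x, z): I=(0,0), X=(1,0),
    Z=(0,1), Y=(1,1). Two n-qubit Pauli strings commute iff the
    symplectic inner product sum(x1*z2 + z1*x2) over the qubits is
    even; otherwise they anticommute."""
    s = sum(x1 * z2 + z1 * x2 for (x1, z1), (x2, z2) in zip(p1, p2))
    return s % 2 == 0
```

For example, X and Z anticommute on a single qubit, but the two-qubit strings XX and ZZ commute because the two single-qubit anticommutations cancel.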

  12. Reliable communication over non-binary insertion/deletion channels

    CERN Document Server

    Yazdani, Raman

    2012-01-01

    We consider the problem of reliable communication over non-binary insertion/deletion channels where symbols are randomly deleted from or inserted in the transmitted sequence and all symbols are corrupted by additive white Gaussian noise. To this end, we utilize the inherent redundancy achievable in non-binary symbol sets by first expanding the symbol set and then allocating part of the bits associated with each symbol to watermark symbols. The watermark sequence, known at the receiver, is then used by a forward-backward algorithm to provide soft information for an outer code which decodes the transmitted sequence. Through numerical results and discussions, we evaluate the performance of the proposed solution and show that it leads to significant system ability to detect and correct insertions/deletions. We also provide estimates of the maximum achievable information rates of the system, compare them with the available bounds, and construct practical codes capable of approaching these limits.

  13. Efficient processing of MPEG-21 metadata in the binary domain

    Science.gov (United States)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities and plenty of commercial and open source tools for processing and transforming are available on the market. However, all of these tools have one thing in common: they operate on plain text encoded metadata which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such kind of metadata which are encoded using MPEG's Binary Format for Metadata (BiM) without additional en-/decoding overheads, i.e., within the binary domain. Therefore, we have developed an event-based push parser for BiM encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.

  14. Optimized puncturing distributions for irregular non-binary LDPC codes

    CERN Document Server

    Gorgoglione, Matteo; Declercq, David

    2010-01-01

    In this paper we design non-uniform bit-wise puncturing distributions for irregular non-binary LDPC (NB-LDPC) codes. The puncturing distributions are optimized by minimizing the decoding threshold of the punctured LDPC code, the threshold being computed with a Monte-Carlo implementation of Density Evolution. First, we show that Density Evolution computed with Monte-Carlo simulations provides accurate (very close) and precise (small variance) estimates of NB-LDPC code ensemble thresholds. Based on the proposed method, we analyze several puncturing distributions for regular and semi-regular codes, obtained either by clustering punctured bits, or spreading them over the symbol-nodes of the Tanner graph. Finally, optimized puncturing distributions for non-binary LDPC codes with small maximum degree are presented, which exhibit a gap between 0.2 and 0.5 dB to the channel capacity, for punctured rates varying from 0.5 to 0.9.

  15. An implementation of a CABAC decoder

    Institute of Scientific and Technical Information of China (English)

    朱敏; 刘雷波; 王星; 殷崇勇; 尹首一; 魏少军

    2013-01-01

    CABAC (Context-based Adaptive Binary Arithmetic Coding) is an efficient entropy codec adopted by H.264. It achieves a better compression rate than alternatives such as CAVLC, but is harder to implement. This paper presents an implementation of a one-cycle CABAC decoder based on previous work [1]. The critical paths are improved by table replacement, branch prediction, logic restructuring, and inverter optimization; the area is also reduced by redesigning the registers. The decoder has been taped out; the test chip achieves a decoding rate of 250 Mbit/s at a power consumption of 1.03 mW and reduces area by 46% compared with the previous implementation, meeting the requirements of the next-generation video codec standard (QFHD).

  16. Grid search in stellar parameters: a software for spectrum analysis of single stars and binary systems

    Science.gov (United States)

    Tkachenko, A.

    2015-09-01

    Context. The currently operating space missions, as well as those that will be launched in the near future, will deliver high-quality data for millions of stellar objects. Since the majority of stellar astrophysical applications still (at least partly) rely on spectroscopic data, an efficient tool for the analysis of medium- to high-resolution spectroscopy is needed. Aims: We aim at developing an efficient software package for the analysis of medium- to high-resolution spectroscopy of single stars and those in binary systems. The major requirements are that the code should have a high performance, represent the state-of-the-art analysis tool, and provide accurate determinations of atmospheric parameters and chemical compositions for different types of stars. Methods: We use the method of atmosphere models and spectrum synthesis, which is one of the most commonly used approaches for the analysis of stellar spectra. Our Grid Search in Stellar Parameters (gssp) code makes use of the Message Passing Interface (OpenMPI) implementation, which makes it possible to run in parallel mode. The method is first tested on the simulated data and is then applied to the spectra of real stellar objects. Results: The majority of test runs on the simulated data were successful in that we were able to recover the initially assumed sets of atmospheric parameters. We experimentally find the limits in signal-to-noise ratios of the input spectra, below which the final set of parameters is significantly affected by the noise. Application of the gssp package to the spectra of three Kepler stars, KIC 11285625, KIC 6352430, and KIC 4931738, was also largely successful. We found an overall agreement of the final sets of the fundamental parameters with the original studies. For KIC 6352430, we found that dependence of the light dilution factor on wavelength cannot be ignored, as it has a significant impact on the determination of the atmospheric parameters of this binary system. 
Conclusions: The

  17. Interacting binary stars

    CERN Document Server

    Sahade, Jorge; Ter Haar, D

    1978-01-01

    Interacting Binary Stars deals with the development, ideas, and problems in the study of interacting binary stars. The book consolidates the information that is scattered over many publications and papers and gives an account of important discoveries with relevant historical background. Chapters are devoted to the presentation and discussion of the different facets of the field, such as historical account of the development in the field of study of binary stars; the Roche equipotential surfaces; methods and techniques in space astronomy; and enumeration of binary star systems that are studied

  18. Interpolating and filtering decoding algorithm for convolution codes

    Directory of Open Access Journals (Sweden)

    O. O. Shpylka

    2010-01-01

    Full Text Available An interpolating and filtering decoding algorithm for convolutional codes has been synthesized under the maximum a posteriori probability criterion, in which filtering of the coder state is combined with interpolation of the information symbols over a sliding interval.

  19. Evolution of Neural Computations: Mantis Shrimp and Human Color Decoding

    Directory of Open Access Journals (Sweden)

    Qasim Zaidi

    2014-10-01

    Full Text Available Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding for color identification.
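The "narrowly tuned cells supporting interval decoding" idea can be caricatured as a bank of units, each tuned to a narrow wavelength interval, with the decoded identity read off the single most active unit. The sketch below is a toy illustration with invented numbers, not the authors' model:

```python
def interval_decode(stimulus_nm, centers_nm, width_nm=15.0):
    """Each unit has a narrow triangular tuning curve of half-width
    `width_nm`; the decoded identity is simply the interval (index)
    of the single most active unit."""
    responses = [max(0.0, 1.0 - abs(stimulus_nm - c) / width_nm)
                 for c in centers_nm]
    return max(range(len(centers_nm)), key=responses.__getitem__)

# Twelve hypothetical narrow channels spanning 400-675 nm, loosely
# echoing the 12 photoreceptor classes of the mantis shrimp.
centers = [400 + 25 * i for i in range(12)]
print(interval_decode(530.0, centers))   # → 5 (the 525 nm channel)
```

Because the tuning curves barely overlap, identification needs no population comparison beyond a winner-take-all step, which is the appeal of decoding at the sensory front-end.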

  20. Evolution of neural computations: Mantis shrimp and human color decoding.

    Science.gov (United States)

    Zaidi, Qasim; Marshall, Justin; Thoen, Hanne; Conway, Bevil R

    2014-01-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding for color identification.

  1. Construction and decoding of a class of algebraic geometry codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1989-01-01

    A class of codes derived from algebraic plane curves is constructed. The concepts and results from algebraic geometry that were used are explained in detail; no further knowledge of algebraic geometry is needed. Parameters, generator and parity-check matrices are given. The main result is a decoding algorithm which turns out to be a generalization of the Peterson algorithm for decoding BCH codes.

  2. Decoding sound level in the marmoset primary auditory cortex.

    Science.gov (United States)

    Sun, Wensheng; Marongelli, Ellisha N; Watkins, Paul V; Barbour, Dennis L

    2017-07-12

    Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons. Copyright © 2016, Journal of Neurophysiology.

  3. Multiresolutional encoding and decoding in embedded image and video coders

    Science.gov (United States)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.

  4. On Complexity, Energy- and Implementation-Efficiency of Channel Decoders

    CERN Document Server

    Kienle, Frank; Meyr, Heinrich

    2010-01-01

    Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most of the currently established efficiency metrics are based on counting operations, thus neglecting important issues like data and storage complexity. In this paper we introduce suitable energy and area efficiency metrics which resolve the aforementioned disadvantages. These are decoded information bits per energy unit and throughput per area unit. The efficiency metrics are assessed by various implementations of turbo decoders, LDPC decoders and convolutional decoders. New exploration methodologies are presented, which permit an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. These exploration methodologies are based on efficiency trajectories rather than a single snapshot metric as done in state-of-the-art approaches.

  5. Tracing Precept against Self-Protective Tortious Decoder

    Institute of Scientific and Technical Information of China (English)

    Jie Tian; Xin-Fang Zhang; Yi-Lin Song; Wei Xiang

    2007-01-01

    Traceability precept is a broadcast encryption technique with which content suppliers can trace malicious authorized users who leak the decryption key to an unauthorized user. To protect the data from eavesdropping, the content supplier encrypts the data and broadcasts the ciphertext that only its subscribers can decrypt. However, a traitor may clone his decoder and sell the pirate decoders for profit. The traitor can modify the private key and the decryption program inside the pirate decoder to avoid divulging his identity. Furthermore, some traitors may collude to fabricate a new legal private key that cannot be traced to its creators. So in this paper, a renewed precept is proposed to achieve both revocation at a different level of capacity in each distribution and black-box tracing against self-protective pirate decoders. A rigorous mathematical deduction shows that our algorithm possesses the desired security property.

  6. Improved List Sphere Decoder for Multiple Antenna Systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An improved list sphere decoder (ILSD) is proposed based on the conventional list sphere decoder (LSD) and the reduced-complexity maximum likelihood sphere-decoding algorithm. Unlike the conventional LSD with fixed initial radius, the ILSD adopts an adaptive radius to accelerate the list construction. Characterized by low complexity and radius insensitivity, the proposed algorithm makes iterative joint detection and decoding more realizable in multiple-antenna systems. Simulation results show that the computational savings of ILSD over LSD are more apparent with more transmit antennas or larger constellations, with no performance degradation. Because the complexity of the ILSD algorithm remains almost invariant as the initial radius increases, the BER performance can be improved by selecting a sufficiently large radius.

  7. Low ML Decoding Complexity STBCs via Codes over GF(4)

    CERN Document Server

    Natarajan, Lakshmi Prasad

    2010-01-01

    In this paper, we give a new framework for constructing low ML decoding complexity Space-Time Block Codes (STBCs) using codes over the finite field $\mathbb{F}_4$. Almost all known low ML decoding complexity STBCs can be obtained via this approach. New full-diversity STBCs with low ML decoding complexity and cubic shaping property are constructed, via codes over $\mathbb{F}_4$, for number of transmit antennas $N=2^m$, $m \geq 1$, and rates $R>1$ complex symbols per channel use. When $R=N$, the new STBCs are information-lossless as well. The new class of STBCs have the least known ML decoding complexity among all the codes available in the literature for a large set of $(N,R)$ pairs.

  8. Decoding of visual attention from LFP signals of macaque MT.

    Science.gov (United States)

    Esghaei, Moein; Daliri, Mohammad Reza

    2014-01-01

    The local field potential (LFP) has recently been widely used in brain computer interfaces (BCI). Here we used power of LFP recorded from area MT of a macaque monkey to decode where the animal covertly attended. Support vector machines (SVM) were used to learn the pattern of power at different frequencies for attention to two possible positions. We found that LFP power at both low (<9 Hz) and high (31-120 Hz) frequencies contains sufficient information to decode the focus of attention. Highest decoding performance was found for gamma frequencies (31-120 Hz) and reached 82%. In contrast low frequencies (<9 Hz) could help the classifier reach a higher decoding performance with a smaller amount of training data. Consequently, we suggest that low frequency LFP can provide fast but coarse information regarding the focus of attention, while higher frequencies of the LFP deliver more accurate but less timely information about the focus of attention.
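The decoding pipeline of this record, band-limited power features followed by a trained classifier, can be sketched with synthetic data; a nearest-centroid rule stands in for the SVM, and all signal parameters below are invented:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of signal x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def nearest_centroid(train_feats, train_labels, feat):
    """Stand-in for the SVM of the abstract: assign to the closest class mean."""
    classes = sorted(set(train_labels))
    centroids = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                            axis=0) for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Synthetic "LFP" trials (invented numbers): attention position 0 carries
# strong 40 Hz gamma, position 1 weak gamma, plus white noise.
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

def trial(pos):
    amp = 2.0 if pos == 0 else 0.5
    return amp * np.sin(2 * np.pi * 40 * t) + 0.3 * rng.standard_normal(len(t))

feats, labels = [], []
for pos in (0, 1):
    for _ in range(10):
        feats.append(np.array([band_power(trial(pos), fs, 31, 120)]))
        labels.append(pos)

probe = np.array([band_power(trial(0), fs, 31, 120)])
print(nearest_centroid(feats, labels, probe))   # → 0
```

The same scaffold works for the low-frequency band by changing the `(31, 120)` limits to `(1, 9)`.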

  9. Decoding of visual attention from LFP signals of macaque MT.

    Directory of Open Access Journals (Sweden)

    Moein Esghaei

    Full Text Available The local field potential (LFP) has recently been widely used in brain computer interfaces (BCI). Here we used power of LFP recorded from area MT of a macaque monkey to decode where the animal covertly attended. Support vector machines (SVM) were used to learn the pattern of power at different frequencies for attention to two possible positions. We found that LFP power at both low (<9 Hz) and high (31-120 Hz) frequencies contains sufficient information to decode the focus of attention. Highest decoding performance was found for gamma frequencies (31-120 Hz) and reached 82%. In contrast low frequencies (<9 Hz) could help the classifier reach a higher decoding performance with a smaller amount of training data. Consequently, we suggest that low frequency LFP can provide fast but coarse information regarding the focus of attention, while higher frequencies of the LFP deliver more accurate but less timely information about the focus of attention.

  10. Impact of Decoding Work within a Professional Program

    Science.gov (United States)

    Yeo, Michelle; Lafave, Mark; Westbrook, Khatija; McAllister, Jenelle; Valdez, Dennis; Eubank, Breda

    2017-01-01

    This chapter demonstrates how Decoding work can be used productively within a curriculum change process to help make design decisions based on a more nuanced understanding of student learning and the relationship of a professional program to the field.

  11. Learning from Decoding across Disciplines and within Communities of Practice

    Science.gov (United States)

    Miller-Young, Janice; Boman, Jennifer

    2017-01-01

    This final chapter synthesizes the findings and implications derived from applying the Decoding the Disciplines model across disciplines and within communities of practice. We make practical suggestions for teachers and researchers who wish to apply and extend this work.

  12. VLSI architecture for a Reed-Solomon decoder

    Science.gov (United States)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor)

    1992-01-01

    A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells, each of which contains identical standard-basis finite-field multipliers that are programmable between 10-bit and 8-bit operation. A desired number of basic building blocks may be assembled to provide an RS decoder of any syndrome subcell size that is programmable between 10-bit and 8-bit operation.
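What each syndrome subcell computes can be sketched in software: evaluation of the received polynomial at a power of the field generator via Horner's rule. The GF(2^8) arithmetic below uses the common primitive polynomial 0x11d, an assumption since the record does not specify the field:

```python
# Minimal GF(2^8) arithmetic; the primitive polynomial 0x11d is an
# illustrative choice -- the patent text does not fix the field.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d        # reduce modulo the primitive polynomial
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndrome(received, j):
    """What one syndrome subcell computes: S_j = r(alpha^j), evaluated
    by Horner's rule over the received symbols (highest degree first)."""
    alpha_j = gf_pow(2, j)
    s = 0
    for symbol in received:
        s = gf_mul(s, alpha_j) ^ symbol
    return s

# An error-free all-zero word yields all-zero syndromes...
assert syndrome([0] * 15, 1) == 0
# ...while one nonzero symbol e at degree p gives S_j = e * alpha^(j*p).
word = [0] * 15
word[4] = 0x57                 # p = 15 - 1 - 4 = 10
s1 = syndrome(word, 1)
assert s1 == gf_mul(0x57, gf_pow(2, 10))
```

In the chip, one such multiplier-accumulator runs per syndrome index in parallel, which is why syndrome subcells dominate the first section of the building block.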

  13. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    OpenAIRE

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with register exchange path memory structure, for interleaved convolutional code. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in state metrics memory (or path metrics memory) and path memory (or survival memory) with delays, interleaved Viterbi decoder ...
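A software sketch clarifies the add-compare-select (ACS) recursion and register-exchange survivor tracking that the architecture above parallelizes in hardware. The rate-1/2, constraint-length-3 code with generators (7,5) in octal is an illustrative choice, not necessarily the code of the paper:

```python
G = (0b111, 0b101)   # generators of the standard rate-1/2, K=3 code (7,5 octal)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111   # sliding 3-bit input window
        out += [parity(state & g) for g in G]
    return out

def viterbi(received):
    n_states = 4                              # 2^(K-1) trellis states
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)     # start in state 0
    paths = [[] for _ in range(n_states)]     # register-exchange survivors
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):             # add-compare-select per state
            if metric[s] == INF:
                continue
            for b in (0, 1):
                window = (s << 1) | b
                nxt = window & 0b11
                expected = [parity(window & g) for g in G]
                m = metric[s] + sum(x != y for x, y in zip(expected, r))
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1, 0] + [0, 0]   # message plus two flush bits
noisy = encode(msg)
noisy[4] ^= 1                             # one channel bit error
print(viterbi(noisy) == msg)              # → True
```

A state-parallel decoder instantiates one ACS unit per trellis state (four here, many more for larger constraint lengths), and register exchange copies whole survivor paths each step, exactly what `paths[s] + [b]` models.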

  14. Evolution of neural computations: Mantis shrimp and human color decoding

    OpenAIRE

    Qasim Zaidi; Justin Marshall; Hanne Thoen; Conway, Bevil R.

    2014-01-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hie...

  15. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  16. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore, we calculate an upper bound on the probability of getting more than one codeword as output.

  17. Recent results in the decoding of Algebraic geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  18. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    Science.gov (United States)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  19. Efficient VLSI architecture of CAVLC decoder with power optimized

    Institute of Scientific and Technical Information of China (English)

    CHEN Guang-hua; HU Deng-ji; ZHANG Jin-yi; ZHENG Wei-feng; ZENG Wei-min

    2009-01-01

    This paper presents an efficient VLSI architecture of the context-based adaptive variable length coding (CAVLC) decoder with power optimized for the H.264/advanced video coding (AVC) standard. In the proposed design, according to the regularity of the codewords, a first-one detector is used to solve the low-efficiency and high-power-dissipation problem of the traditional table-searching method. Considering the relevance of the data used in the decoding of the run_before syntax element, arithmetic operation is combined with a finite state machine (FSM), which achieves higher decoding efficiency. Following the CAVLC decoding flow, clock gating is employed at the module level and the register level respectively, which reduces 43% of the overall dynamic power dissipation. The proposed design can decode every syntax element in one clock cycle. When the proposed design is synthesized at a clock constraint of 100 MHz, the synthesis result shows that the design costs 11,300 gates under a 0.25 μm CMOS technology, which meets the demand of real-time decoding in the H.264/AVC standard.

  20. Decoding Schemes for FBMC with Single-Delay STTC

    Science.gov (United States)

    Lélé, Chrislin; Le Ruyet, Didier

    2010-12-01

    Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  1. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Chrislin Lélé

    2010-01-01

    Full Text Available Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  2. Decoding Schemes for FBMC with Single-Delay STTC

    Directory of Open Access Journals (Sweden)

    Lélé Chrislin

    2010-01-01

    Full Text Available Orthogonally multiplexed Quadrature Amplitude Modulation (OQAM) with Filter-Bank-based MultiCarrier modulation (FBMC) is a multicarrier modulation scheme that can be considered an alternative to the conventional orthogonal frequency division multiplexing (OFDM) with cyclic prefix (CP) for transmission over multipath fading channels. However, as OQAM-based FBMC is based on real orthogonality, transmission over a complex-valued channel makes the decoding process more challenging compared to the CP-OFDM case. Moreover, if we apply Multiple Input Multiple Output (MIMO) techniques to OQAM-based FBMC, the decoding schemes are different from the ones used in CP-OFDM. In this paper, we consider the combination of OQAM-based FBMC with single-delay Space-Time Trellis Coding (STTC). We extend the decoding process presented earlier in the case of Nt=2 transmit antennas to greater values of Nt. Then, for Nt≥2, we make an analysis of the theoretical and simulation performance of ML and Viterbi decoding. Finally, to improve the performance of this method, we suggest an iterative decoding method. We show that the OQAM-based FBMC iterative decoding scheme can slightly outperform CP-OFDM.

  3. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such a lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
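The instant-decodability condition at the heart of IDNC is easy to state in code: a transmitted XOR combination is useful to a receiver exactly when it contains one packet that receiver is missing. The sketch below is a generic illustration with invented packet indices, not the paper's partially blind algorithm:

```python
def instantly_decodable(combination, has):
    """An XOR packet combination is instantly decodable for a receiver
    iff exactly one of its packets is still missing at that receiver:
    the known packets are XORed out, revealing the missing one."""
    missing = [p for p in combination if p not in has]
    return len(missing) == 1

# Receiver already holds packets {1, 2}.
print(instantly_decodable({1, 3}, {1, 2}))   # → True  (recovers packet 3)
print(instantly_decodable({3, 4}, {1, 2}))   # → False (two unknowns)
```

Under lossy feedback the sender no longer knows each receiver's `has` set exactly, which is what forces the partially blind, maximum-likelihood treatment studied in the paper.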

  4. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

    While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature with widely varying performance-to-area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding techniques and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
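The K-best trade-off mentioned above, pruning a tree search to a fixed number of surviving candidates per layer, can be sketched in a few lines of Python; the 2x2 channel and BPSK alphabet below are invented for illustration:

```python
import numpy as np

def k_best_detect(H, y, symbols, K=4):
    """Breadth-first K-best tree search for the x (entries drawn from
    `symbols`) minimizing ||y - Hx||^2.  After QR-decomposing H, the
    search runs from the last layer upward, keeping only the K best
    partial candidates per layer instead of a full sphere search."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    cands = [(0.0, [])]   # (accumulated metric, symbols for layers n-1, n-2, ...)
    for L in range(n - 1, -1, -1):
        expanded = []
        for dist, part in cands:
            interf = sum(R[L, c] * part[n - 1 - c] for c in range(L + 1, n))
            for s in symbols:
                e = z[L] - interf - R[L, L] * s
                expanded.append((dist + e * e, part + [s]))
        cands = sorted(expanded, key=lambda t: t[0])[:K]   # the "K-best" prune
    return np.array(cands[0][1][::-1])

# Toy 2x2 real-valued channel with BPSK symbols (illustrative numbers).
H = np.array([[1.0, 0.4],
              [0.3, 1.2]])
x_true = np.array([1.0, -1.0])
y = H @ x_true                      # noiseless reception
print(k_best_detect(H, y, [-1.0, 1.0], K=4))
```

The fixed survivor count K is what makes the hardware data path regular (constant throughput and memory), at the cost of no longer guaranteeing the exact ML solution when K is small relative to the tree width.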

  5. Binary colloidal crystals

    NARCIS (Netherlands)

    Christova-Zdravkova, C.G.

    2005-01-01

    Binary crystals are crystals composed of two types of particles having different properties, like size, mass density, charge, etc. In this thesis several new approaches to make binary crystals of colloidal particles that differ in size, material and charge are reported. We found a variety of crystal st

  6. Iterative Decoding of Parallel Concatenated Block Codes and Coset Based MAP Decoding Algorithm for F24 Code

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A multi-dimensional concatenation scheme for block codes is introduced, in which information symbols are interleaved and re-encoded more than once. It provides a convenient platform to design high performance codes with flexible interleaver size. Coset based MAP soft-in/soft-out decoding algorithms are presented for the F24 code. Simulation results show that the proposed coding scheme can achieve high coding gain with flexible interleaver length and very low decoding complexity.

  7. Singer product apertures-A coded aperture system with a fast decoding algorithm

    Science.gov (United States)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.

  8. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER 10−6 at 2 dB over an AWGN channel while performing a maximum of 18 decoding iterations.
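As a software counterpoint to the hardware architecture, the simplest member of the hard-decision message-passing family, a bit-flipping decoder, fits in a few lines. The toy (7,4) Hamming parity-check matrix below is only a stand-in for a real (3,6)-regular LDPC matrix:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=18):
    """Hard-decision bit-flipping decoding: repeatedly flip the bits
    involved in the largest number of unsatisfied parity checks until
    every check is satisfied (or the iteration budget runs out)."""
    x = r.copy()
    for _ in range(max_iter):
        synd = H @ x % 2
        if not synd.any():
            return x, True            # all parity checks satisfied
        votes = H.T @ synd            # failing checks touching each bit
        x = np.where(votes == votes.max(), x ^ 1, x)
    return x, False

# Toy (7,4) Hamming parity-check matrix (illustrative; far smaller than
# the 9216-bit LDPC code of the paper).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies H c = 0 (mod 2)
assert not (H @ codeword % 2).any()

received = codeword ^ np.array([0, 0, 1, 0, 0, 0, 0])   # one bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```

In a partly parallel decoder, the per-check syndrome updates and per-bit vote counts map onto banks of check-node and variable-node units that are time-multiplexed over slices of the parity-check matrix.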

  9. Efficient blind decoders for additive spread spectrum embedding based data hiding

    Science.gov (United States)

    Valizadeh, Amir; Wang, Z. Jane

    2012-12-01

    This article investigates efficient blind watermark decoding approaches for hidden messages embedded into host images, within the framework of additive spread spectrum (SS) embedding based data hiding. We study SS embedding in both the discrete cosine transform and the discrete Fourier transform (DFT) domains. The contributions of this article are threefold: first, we show that the conventional SS scheme cannot be applied directly to the magnitudes of the DFT, and thus we present a modified SS scheme, and the optimal maximum likelihood (ML) decoder based on the Weibull distribution is derived. Secondly, we investigate improved spread spectrum (ISS) embedding, an improved technique over the traditional additive SS, and propose a modified ISS scheme for information hiding in the magnitudes of the DFT coefficients; the optimal ML decoders for ISS embedding are derived. We also provide thorough theoretical error probability analysis for the aforementioned decoders. Thirdly, sub-optimal decoders, including the local optimum decoder (LOD), the generalized maximum likelihood (GML) decoder, and the linear minimum mean square error (LMMSE) decoder, are investigated to reduce the required prior information at the receiver side, and their theoretical decoding performances are derived. Based on decoding performances and the required prior information for decoding, we discuss the preferred host domain and the preferred decoder for additive SS-based data hiding under different situations. Extensive simulations are conducted to illustrate the decoding performances of the presented decoders.
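The additive SS embedding and blind correlation decoding that these decoders build on can be sketched directly on raw coefficients (here plain Gaussian samples stand in for DCT/DFT coefficients; all numbers are invented):

```python
import numpy as np

def ss_embed(host, bit, pattern, alpha=2.0):
    """Additive spread-spectrum embedding: add +/- alpha * pattern."""
    return host + alpha * (1.0 if bit else -1.0) * pattern

def ss_blind_decode(received, pattern):
    """Blind correlation decoder: the host itself acts as noise; the sign
    of the correlation with the secret zero-mean pattern gives the bit."""
    return int(np.dot(received, pattern) > 0)

rng = np.random.default_rng(7)
host = rng.normal(0.0, 1.0, 4096)         # stand-in for transform coefficients
pattern = rng.choice([-1.0, 1.0], 4096)   # secret spreading sequence

wm1 = ss_embed(host, 1, pattern)
wm0 = ss_embed(host, 0, pattern)
print(ss_blind_decode(wm1, pattern), ss_blind_decode(wm0, pattern))   # → 1 0
```

Because the host-pattern correlation grows like the square root of the length while the embedded term grows linearly, longer spreading sequences (or the ISS host-rejection refinement studied in the article) drive the error probability down.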

  10. Statistical coding and decoding of heartbeat intervals.

    Science.gov (United States)

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  11. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed

    2016-02-26

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider physical-layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process for both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is thus approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms uncoded transmission strategies and the classical IDNC scheme. © 2015 IEEE.
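The maximum-weight-clique selection step above can be illustrated with a generic greedy heuristic (the exact problem is NP-hard); the RA-IDNC graph construction itself is specific to the paper, so the toy graph, weights, and vertex ids below are invented:

```python
# A greedy heuristic for the maximum-weight-clique selection step. In the
# paper's setting each vertex would encode a (user, packet, rate) choice and
# its weight the anticipated completion-time reduction; here everything is a
# toy stand-in.
def greedy_max_weight_clique(vertices, weights, adj):
    """Greedily grow a clique, considering heavier vertices first."""
    clique = []
    for v in sorted(vertices, key=lambda u: -weights[u]):
        if all(v in adj[u] for u in clique):    # v must neighbour every member
            clique.append(v)
    return clique

# Toy conflict graph: vertices 1-3 form a triangle; vertex 4 is isolated.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}
w = {1: 3.0, 2: 2.0, 3: 1.5, 4: 1.0}
print(greedy_max_weight_clique([1, 2, 3, 4], w, adj))
```

A greedy pass like this only approximates the optimum; exact solvers (or the paper's own graph-specific machinery) are needed when the approximation guarantee matters.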

  12. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    Full Text Available The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  13. fNIRS-based online deception decoding

    Science.gov (United States)

    Hu, Xiao-Su; Hong, Keum-Shik; Ge, Shuzhi Sam

    2012-04-01

    Deception involves complex neural processes in the brain. Different techniques have been used to study and understand brain mechanisms during deception, and efforts have been made to develop schemes that can detect and differentiate deception from truth-telling. In this paper, a functional near-infrared spectroscopy (fNIRS)-based online brain deception decoding framework is developed. Deploying dual-wavelength fNIRS, we interrogate 16 locations in the forehead while eight able-bodied adults perform deception and truth-telling scenarios separately. By combining preprocessed oxy-hemoglobin and deoxy-hemoglobin signals, we develop subject-specific classifiers using a support vector machine. Deception and truth-telling states are classified correctly in seven of the eight subjects, with an average classification accuracy of 83.44% across these seven subjects. A control experiment is also conducted to verify the deception-related hemodynamic response. These results suggest that fNIRS is a promising brain imaging technique for online deception detection.

  14. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  15. Fast mental states decoding in mixed reality.

    Science.gov (United States)

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain-state discrimination during the mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
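A minimal sketch of the two-class LDA stage described above, with synthetic Gaussian feature vectors standing in for the per-window EEG features (dimensions, class statistics, and labels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical "mental states" produce per-window feature vectors with
# different means; a two-class Fisher LDA with pooled covariance classifies
# each window. Real EEG features (e.g. band powers per window) would replace
# the synthetic draws below.
def fit_lda(X0, X1):
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)          # discriminant direction
    b = -0.5 * w @ (m0 + m1)                  # threshold midway between means
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)        # 1 = second state (label choice)

d = 8
X0 = rng.normal(0.0, 1.0, (200, d))           # training windows, state 0
X1 = rng.normal(1.5, 1.0, (200, d))           # training windows, state 1
w, b = fit_lda(X0, X1)

test0 = rng.normal(0.0, 1.0, (50, d))
test1 = rng.normal(1.5, 1.0, (50, d))
acc = np.concatenate([1 - predict(w, b, test0), predict(w, b, test1)]).mean()
print(round(float(acc), 2))
```

In the study's setting the classifier would be retrained per subject and applied to a 5 s window sliding every 200 ms; the sketch only shows the per-window decision rule.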

  16. Infinity-Norm Sphere-Decoding

    CERN Document Server

    Seethaler, Dominik

    2008-01-01

    The most promising approaches for efficient detection in multiple-input multiple-output (MIMO) wireless systems are based on sphere-decoding (SD). The conventional (and optimum) norm that is used to conduct the tree traversal step in SD is the l-two norm. It was, however, recently shown that using the l-infinity norm instead significantly reduces the VLSI implementation complexity of SD at only a marginal performance loss. These savings are due to a reduction in the length of the critical path and the silicon area of the circuit, but also, as observed previously through simulation results, a consequence of a reduction in the computational (algorithmic) complexity. The aim of this paper is an analytical performance and computational complexity analysis of l-infinity norm SD. For i.i.d. Rayleigh fading MIMO channels, we show that l-infinity norm SD achieves full diversity order with an asymptotic SNR gap, compared to l-two norm SD, that increases at most linearly in the number of receive antennas. Moreover, we ...
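The effect of swapping the metric can be illustrated on a toy detection problem in which exhaustive search over a BPSK alphabet stands in for the actual sphere-decoding tree traversal; the channel matrix and noise values below are invented:

```python
import numpy as np

# Toy 2x2 real-valued detection problem contrasting the l-two and l-infinity
# metrics. A real sphere decoder prunes a search tree instead of enumerating
# candidates; the decision rule being compared is the same.
H = np.array([[1.0, 0.3],
              [0.2, 0.9]])
x = np.array([1.0, -1.0])                    # transmitted BPSK symbols
y = H @ x + np.array([0.05, -0.03])          # received vector, small noise

candidates = [np.array([a, b]) for a in (-1.0, 1.0) for b in (-1.0, 1.0)]
x_l2 = min(candidates, key=lambda c: np.linalg.norm(y - H @ c, 2))
x_linf = min(candidates, key=lambda c: np.linalg.norm(y - H @ c, np.inf))
print(np.array_equal(x_l2, x), np.array_equal(x_linf, x))
```

At reasonable SNR both metrics usually agree, which is consistent with the paper's finding that the l-infinity metric loses little performance while simplifying the VLSI datapath (a maximum replaces a sum of squares).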

  17. Cognitive Wyner Networks with Clustered Decoding

    CERN Document Server

    Lapidoth, Amos; Shamai, Shlomo; Wigger, Michele

    2012-01-01

    We study the uplink of linear cellular models featuring short range inter-cell interference. That is, we study $K$-transmitter/$K$-receiver interference networks where the transmitters lie on a line and the receivers on a parallel line, each receiver opposite its corresponding transmitter. We assume short-range interference, by which we mean that the signal sent by a given transmitter is interfered either only by the signal sent by its left neighbor (we refer to this setup as the asymmetric network) or by the signals sent by both its neighbors (we refer to this setup as the symmetric network). We assume that each transmitter has side-information consisting of the messages of the $t_\\ell$ transmitters to its left and the $t_r$ transmitters to its right, and that each receiver can decode its message using the signals received at its own antenna, at the $r_\\ell$ receiving antennas to its left, and at the $r_r$ receiving antennas to its right. We provide upper and lower bounds on the multiplexing gain of these ...

  18. Why Hawking Radiation Cannot Be Decoded

    CERN Document Server

    Ong, Yen Chin; Chen, Pisin

    2014-01-01

    One of the great difficulties in the theory of black hole evaporation is that the most decisive phenomena tend to occur when the black hole is extremely hot: that is, when the physics is most poorly understood. Fortunately, a crucial step in the Harlow-Hayden approach to the firewall paradox, concerning the time available for decoding of Hawking radiation emanating from charged AdS black holes, can be made to work without relying on the unknown physics of black holes with extremely high temperatures; in fact, it relies on the properties of cold black holes. Here we clarify this surprising point. The approach is based on ideas borrowed from applications of the AdS/CFT correspondence to the quark-gluon plasma. Firewalls aside, our work presents a detailed analysis of the thermodynamics and evolution of evaporating charged AdS black holes with flat event horizons. We show that, in one way or another, these black holes are always eventually destroyed in a time which, while long by normal standards, is short relat...

  19. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    Full Text Available The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.

  20. PERFORMANCE OF A NEW DECODING METHOD USED IN OPEN-LOOP ALL-OPTICAL CHAOTIC COMMUNICATION SYSTEM

    Institute of Scientific and Technical Information of China (English)

    Liu Huijie; Feng Jiuchao

    2011-01-01

    A new decoding method with a decoder is used in an open-loop all-optical chaotic communication system under strong injection conditions. The performance of the new decoding method is numerically investigated by comparing it with the common decoding method without a decoder. For the new decoding method, two cases are analyzed, depending on whether or not the output of the decoder is adjusted by its input to the receiver. The results indicate that decoding quality can be improved by this adjustment, while the injection strength of the decoder can be restricted to a certain range. The adjusted new decoding method with a decoder achieves better decoding quality than the decoding method without a decoder when the bit rate of the message is under 5 Gb/s; however, a stronger injection for the receiver is needed. Moreover, the new decoding method can broaden the range of injection strength acceptable for good decoding quality. Different message encryption techniques are tested, and the result is similar to that of the common decoding method, indicating that a message encoded using Chaotic Modulation (CM) is best recovered by the new decoding method owing to the essence of this encryption technique.

  1. Reconfigurable and Parallelized Network Coding Decoder for VANETs

    Directory of Open Access Journals (Sweden)

    Sunwoo Kim

    2012-01-01

    Full Text Available Network coding is a promising technique for data communications in wired and wireless networks. However, it places an additional computing overhead on the receiving node in exchange for the improved bandwidth. This paper proposes an FPGA-based reconfigurable and parallelized network coding decoder for embedded systems, especially for vehicular ad hoc networks. In our design, rapid decoding is achieved by exploiting parallelism in the coefficient vector operations. The proposed decoder is implemented on a modern Xilinx Virtex-5 device and its performance is evaluated against software decoding on various embedded processors. The performance on four different sizes of the coefficient matrix is measured, and decoding throughputs of 18.3 Mbps for the 16 × 16 size and 6.5 Mbps for 128 × 128 have been achieved at an operating frequency of 64.5 MHz. Compared to the recent TEGRA 250 processor, the result obtained with the 128 × 128 coefficient matrix reaches a speedup of up to 5.06.
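The receiver-side computation that such a decoder parallelizes is, at its core, Gaussian elimination on the coefficient matrix. A minimal software sketch over GF(2) (practical network coders typically work over GF(2^8); the matrix below is a toy full-rank example):

```python
# Recover source packets from coded packets by Gaussian elimination over GF(2).
# Addition is XOR; the hardware decoder parallelizes exactly these row
# operations across the coefficient vectors.
def gf2_solve(A, y):
    """Solve A x = y over GF(2); A is a full-rank square 0/1 matrix."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]       # augmented matrix
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col])  # assumes full rank
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Toy example: 3 coded packets of one bit each, invertible coefficient matrix.
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
x = [1, 0, 1]                                              # original packet bits
y = [(A[r][0] & x[0]) ^ (A[r][1] & x[1]) ^ (A[r][2] & x[2]) for r in range(3)]
print(gf2_solve(A, y))
```

Each "bit" here stands in for a whole packet symbol; over GF(2^8) the XORs become finite-field multiply-accumulates, but the elimination structure is identical.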

  2. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. A method to reduce decoding complexity, suitable for system-on-chip implementation, is then proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance, and its inclusion in the VSN infrastructure provides an additional level of complexity control functionality.

  3. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e., the intersections of each two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; to this end, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also exhibits strong robustness to surface color and complex textures.

  4. Performance evaluation of H.264 decoder on different processors

    Directory of Open Access Journals (Sweden)

    H.S.Prasantha

    2010-08-01

    Full Text Available H.264/AVC (Advanced Video Coding) is the newest video coding standard of the Moving Picture Experts Group. The decoder is standardized by imposing restrictions on the bit stream and syntax, and by defining the process of decoding syntax elements, such that every decoder conforming to the standard will produce similar output when an encoded bit stream is provided as input. It uses state-of-the-art coding tools and provides enhanced coding efficiency for a wide range of applications, including video telephony, real-time video conferencing, direct-broadcast TV (television), Blu-ray disc, DVB (Digital Video Broadcasting), streaming video and others. The paper proposes to port the H.264/AVC decoder to various processors such as TI DSP (Digital Signal Processor), ARM (Advanced RISC Machines) and P4 (Pentium) processors. The paper also proposes to analyze and compare video quality metrics for different encoded video sequences, and to investigate the decoder performance on different processors with and without the deblocking filter, comparing the performance based on different video quality measures.

  5. Measuring Integrated Information from the Decoding Perspective.

    Directory of Open Access Journals (Sweden)

    Masafumi Oizumi

    2016-01-01

    Full Text Available Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.

  6. Measuring Integrated Information from the Decoding Perspective.

    Science.gov (United States)

    Oizumi, Masafumi; Amari, Shun-ichi; Yanagawa, Toru; Fujii, Naotaka; Tsuchiya, Naotsugu

    2016-01-01

    Accumulating evidence indicates that the capacity to integrate information in the brain is a prerequisite for consciousness. Integrated Information Theory (IIT) of consciousness provides a mathematical approach to quantifying the information integrated in a system, called integrated information, Φ. Integrated information is defined theoretically as the amount of information a system generates as a whole, above and beyond the amount of information its parts independently generate. IIT predicts that the amount of integrated information in the brain should reflect levels of consciousness. Empirical evaluation of this theory requires computing integrated information from neural data acquired from experiments, although difficulties with using the original measure Φ preclude such computations. Although some practical measures have been previously proposed, we found that these measures fail to satisfy the theoretical requirements as a measure of integrated information. Measures of integrated information should satisfy the lower and upper bounds as follows: The lower bound of integrated information should be 0 and is equal to 0 when the system does not generate information (no information) or when the system comprises independent parts (no integration). The upper bound of integrated information is the amount of information generated by the whole system. Here we derive the novel practical measure Φ* by introducing a concept of mismatched decoding developed from information theory. We show that Φ* is properly bounded from below and above, as required, as a measure of integrated information. We derive the analytical expression of Φ* under the Gaussian assumption, which makes it readily applicable to experimental data. Our novel measure Φ* can generally be used as a measure of integrated information in research on consciousness, and also as a tool for network analysis on diverse areas of biology.
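The practicality of the Gaussian assumption mentioned above can be illustrated in the simplest case: for two jointly Gaussian variables with correlation ρ, mutual information has the closed form −0.5 ln(1 − ρ²) nats. The sketch below is only this elementary identity, applied to a hypothetical AR(1) system, and is not the Φ* construction itself:

```python
import math

# Elementary Gaussian identity: for jointly Gaussian X, Y with correlation rho,
# I(X; Y) = -0.5 * ln(1 - rho^2) nats. For a stationary AR(1) system
# X_t = a * X_{t-1} + noise, consecutive states have correlation a, so this
# gives the information the present state carries about the past state.
def gaussian_mi(rho):
    return -0.5 * math.log(1.0 - rho ** 2)

a = 0.9                      # hypothetical AR(1) coefficient
print(gaussian_mi(a))        # whole-system past/present information, in nats
```

Φ* additionally compares this matched quantity with what a decoder mismatched to a partitioned model can extract; that construction involves covariance log-determinants of the full and partitioned models rather than a single correlation.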

  7. Kuiper Binary Object Formation

    CERN Document Server

    Nazzario, R C; Covington, C; Kagan, D; Hyde, T W

    2005-01-01

    It has been observed that binary Kuiper Belt Objects (KBOs) exist contrary to theoretical expectations. Their creation presents problems to most current models. However, the inclusion of a third body (for example, one of the outer planets) may provide the conditions necessary for the formation of these objects. The presence of a third massive body not only helps to clear the primordial Kuiper Belt but can also result in long-lived binary Kuiper Belt objects. The gravitational interaction between the KBOs and the third body causes one of four effects: scattering into the Oort cloud, collision with the growing protoplanets, formation of a binary pair, or creation of a single Kuiper Belt object. Additionally, the initial location of the progenitors of the Kuiper Belt objects also has a significant effect on binary formation.

  8. Kuiper Binary Object Formation

    OpenAIRE

    Nazzario, R. C.; Orr, K.; Covington, C.; Kagan, D.; Hyde, T. W.

    2005-01-01

    It has been observed that binary Kuiper Belt Objects (KBOs) exist contrary to theoretical expectations. Their creation presents problems to most current models. However, the inclusion of a third body (for example, one of the outer planets) may provide the conditions necessary for the formation of these objects. The presence of a third massive body not only helps to clear the primordial Kuiper Belt but can also result in long lived binary Kuiper belt objects. The gravitational interaction betw...

  9. Binary Masking & Speech Intelligibility

    DEFF Research Database (Denmark)

    Boldt, Jesper

    The purpose of this thesis is to examine how binary masking can be used to increase intelligibility in situations where hearing impaired listeners have difficulties understanding what is being said. The major part of the experiments carried out in this thesis can be categorized as either experime...... mask using a directional system and a method for correcting errors in the target binary mask. The last part of the thesis, proposes a new method for objective evaluation of speech intelligibility....

  10. On the Optimality of Successive Decoding in Compress-and-Forward Relay Schemes

    CERN Document Server

    Wu, Xiugang

    2010-01-01

    In the classical compress-and-forward relay scheme developed by (Cover and El Gamal, 1979), the decoding process operates in a successive way: the destination first decodes the compressed observation of the relay, and then decodes the original message of the source. Recently, two modified compress-and-forward relay schemes were proposed, and in both of them, the destination jointly decodes the compressed observation of the relay and the original message, instead of successively. Such a modification on the decoding process was motivated by realizing that it is generally easier to decode the compressed observation jointly with the original message, and more importantly, the original message can be decoded even without completely decoding the compressed observation. However, the question remains whether this freedom of choosing a higher compression rate at the relay improves the achievable rate of the original message. It has been shown in (El Gamal and Kim, 2010) that the answer is negative in the single relay ...

  11. A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

    Directory of Open Access Journals (Sweden)

    Johnny W. H. Kao

    2009-01-01

    Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was also designed to compare the Bit Error Rate (BER) performance of the RNN decoder with that of the conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm achieves the same bit error rate with lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data-rate transmission. These characteristics, inherited from the neural network structure of the decoder and the iterative nature of the algorithm, allow it to outperform the conventional VA.

  12. Map Algorithms for Decoding Linear Block codes Based on Sectionalized Trellis Diagrams

    Science.gov (United States)

    Lin, Shu

    1999-01-01

    The MAP algorithm is a trellis-based maximum a posteriori probability decoding algorithm. It is the heart of the turbo (or iterative) decoding which achieves an error performance near the Shannon limit. Unfortunately, the implementation of this algorithm requires large computation and storage. Furthermore, its forward and backward recursions result in long decoding delay. For practical applications, this decoding algorithm must be simplified and its decoding complexity and delay must be reduced. In this paper, the MAP algorithm and its variations, such as Log-MAP and Max-Log-MAP algorithms, are first applied to sectionalized trellises for linear block codes and carried out as two-stage decodings. Using the structural properties of properly sectionalized trellises, the decoding complexity and delay of the MAP algorithms can be reduced. Computation-wise optimum sectionalizations of a trellis for MAP algorithms are investigated. Also presented in this paper are bi-directional and parallel MAP decodings.
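The relation between the Log-MAP and Max-Log-MAP variants mentioned above rests on the Jacobian logarithm max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a-b|)); Max-Log-MAP simply drops the correction term, trading accuracy for complexity. A minimal numerical check:

```python
import math

# Jacobian logarithm: the core operation of the Log-MAP forward/backward
# recursions. Exact for two terms; applied pairwise for longer sums.
def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):            # Max-Log-MAP approximation
    return max(a, b)

exact = math.log(math.exp(1.0) + math.exp(0.5))
print(max_star(1.0, 0.5) - exact)      # ~0: the two-term identity is exact
print(exact - max_log(1.0, 0.5))       # the dropped correction term
```

The correction term is bounded by ln 2 per pairwise combination, which is why Max-Log-MAP typically costs only a fraction of a dB while removing all exponentials and logarithms from the recursions.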

  13. A Concatenated ML Decoder for ST/SFBC-OFDM Systems in Double Selective Fading Channels

    Institute of Scientific and Technical Information of China (English)

    李明齐; 张文军

    2004-01-01

    This paper presents a concatenated maximum-likelihood (ML) decoder for space-time/space-frequency block coded orthogonal frequency division multiplexing (ST/SFBC-OFDM) systems in doubly selective fading channels. The proposed decoder first detects space-time or space-frequency codeword elements separately. Then, based on the coarsely estimated codeword elements, ML decoding is performed over a smaller constellation element set to search for the final codeword. It is proved that the proposed decoder has optimal performance if and only if the subchannels are constant during a codeword interval. The simulation results show that the performance of the proposed decoder is close to that of the optimal ML decoder in severe Doppler and delay spread channels, while its complexity is much lower than that of the optimal ML decoder.

  14. A Discrete Time Markov Chain Model for High Throughput Bidirectional Fano Decoders

    CERN Document Server

    Xu, Ran; Morris, Kevin; Kocak, Taskin

    2010-01-01

    The bidirectional Fano algorithm (BFA) can achieve at least two times decoding throughput compared to the conventional unidirectional Fano algorithm (UFA). In this paper, bidirectional Fano decoding is examined from the queuing theory perspective. A Discrete Time Markov Chain (DTMC) is employed to model the BFA decoder with a finite input buffer. The relationship between the input data rate, the input buffer size and the clock speed of the BFA decoder is established. The DTMC based modelling can be used in designing a high throughput parallel BFA decoding system. It is shown that there is a tradeoff between the number of BFA decoders and the input buffer size, and an optimal input buffer size can be chosen to minimize the hardware complexity for a target decoding throughput in designing a high throughput parallel BFA decoding system.
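The DTMC-with-finite-buffer idea above can be sketched generically: model buffer occupancy as states and read off the stationary distribution, from which quantities such as overflow probability follow. The three states and transition probabilities below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-state DTMC for a decoder input buffer (empty, half-full,
# full); P[i, j] is the probability of moving from state i to state j in one
# slot. In the paper's model these probabilities would follow from the input
# data rate and the decoder clock speed.
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1,
# normalized to sum to one (pi P = pi).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)
```

With pi in hand, the long-run fraction of slots spent with a full buffer is just its last component, which is the kind of quantity traded off against the number of parallel decoders and the buffer size.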

  15. Multistep Linear Programming Approaches for Decoding Low-Density Parity-Check Codes

    Institute of Scientific and Technical Information of China (English)

    LIU Haiyang; MA Lianrong; CHEN Jie

    2009-01-01

    The problem of improving the performance of linear programming (LP) decoding of low-density parity-check (LDPC) codes is considered in this paper. A multistep linear programming (MLP) algorithm was developed for decoding LDPC codes at the cost of only a slight increase in computational complexity. The MLP decoder adaptively adds new constraints which are compatible with a selected check node to refine the results when an error is reported by the original LP decoder. The MLP decoder result is shown to have the maximum-likelihood (ML) certificate property. Simulations with moderate block length LDPC codes suggest that the MLP decoder gives better performance than both the original LP decoder and the conventional sum-product (SP) decoder.

  16. Eclipsing Binary Pulsars

    CERN Document Server

    Freire, P C C

    2004-01-01

    The first eclipsing binary pulsar, PSR B1957+20, was discovered in 1987. Since then, 13 other eclipsing low-mass binary pulsars have been found; 12 of these are in globular clusters. In this paper we list the known eclipsing binary pulsars and their properties, with special attention to the eclipsing systems in 47 Tuc. We find that there are two fundamentally different groups of eclipsing binary pulsars, separated by their companion masses. The less massive systems (M_c ~ 0.02 M_sun) are a product of predictable stellar evolution in binary pulsars. The systems with more massive companions (M_c ~ 0.2 M_sun) were formed by exchange encounters in globular clusters, and for that reason are exclusive to those environments. This class of systems can be used to learn about the neutron star recycling fraction in the globular clusters actively forming pulsars. We suggest that most of these binary systems are undetectable at radio wavelengths.

  17. A Low Power Viterbi Decoder for Trellis Coded Modulation System

    Directory of Open Access Journals (Sweden)

    M. Jansi Rani

    2014-02-01

    Full Text Available Forward Error Correction (FEC) schemes are an essential component of wireless communication systems. Convolutional codes are employed to implement FEC, but the complexity of the corresponding decoders increases exponentially with the constraint length. Present wireless standards such as Third Generation (3G) systems, GSM, 802.11a, and 802.16 utilize some configuration of convolutional coding. Convolutional encoding with Viterbi decoding is a powerful method for forward error correction. The Viterbi algorithm is the most extensively employed decoding algorithm for convolutional codes. The main aim of this project is to design an FPGA-based Viterbi algorithm which encrypts/decrypts the data. In this project the encryption/decryption algorithm is designed and programmed into the FPGA.
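
Since the record centers on Viterbi decoding of convolutional codes, a minimal software sketch may be useful. It hard-decision decodes the common rate-1/2, constraint length 3 code with octal generators (7, 5); the code parameters and the trellis termination with two flush bits are assumptions of this demo, not details of the paper's FPGA design.

```python
# Rate-1/2, K=3 convolutional code with generators (7, 5) octal.
G = [0b111, 0b101]

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state  # register holds [current, prev1, prev2]
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    # survivors: state -> (path metric, decided input bits)
    survivors = {0: (0, [])}
    for t in range(n_bits):
        nxt = {}
        for state, (metric, path) in survivors.items():
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                dist = sum(e != r for e, r in zip(expected, rx[2*t:2*t+2]))
                cand = (metric + dist, path + [b])
                ns = reg >> 1
                if ns not in nxt or cand[0] < nxt[ns][0]:
                    nxt[ns] = cand  # keep the better of the merging paths
        survivors = nxt
    return min(survivors.values(), key=lambda v: v[0])[1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = encode(msg + [0, 0])        # two flush bits terminate the trellis
rx = list(tx)
rx[5] ^= 1                       # one channel bit error
decoded = viterbi_decode(rx, len(msg) + 2)[:len(msg)]  # recovers msg
```

Since this code has free distance 5, the single injected error is always corrected.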

  18. Analysis of Minimal LDPC Decoder System on a Chip Implementation

    Directory of Open Access Journals (Sweden)

    T. Palenik

    2015-09-01

    Full Text Available This paper presents a practical method of potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC codes with one, with the intention of saving as much memory as required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC. The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for utilization in a Software Defined Radio (SDR platform. Besides the analysis of the effects of necessary variable-node value fixation during the Belief Propagation (BP decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.

  19. SERS decoding of micro gold shells moving in microfluidic systems.

    Science.gov (United States)

    Lee, Saram; Joo, Segyeong; Park, Sejin; Kim, Soyoun; Kim, Hee Chan; Chung, Taek Dong

    2010-05-01

    In this study, in situ surface-enhanced Raman scattering (SERS) decoding was demonstrated in microfluidic chips using novel thin micro gold shells modified with Raman tags. The micro gold shells were fabricated using electroless gold plating on PMMA beads with a diameter of 15 microm. These shells were carefully optimized to produce the maximum SERS intensity, which minimized the exposure time for quick and safe decoding. The shell surfaces produced well-defined SERS spectra even at an extremely short exposure time, 1 ms, for a single micro gold shell combined with Raman tags such as 2-naphthalenethiol and benzenethiol. The consecutive SERS spectra from a variety of combinations of Raman tags were successfully acquired from the micro gold shells moving in 25 microm deep and 75 microm wide channels on a glass microfluidic chip. The proposed functionalized micro gold shells exhibited the potential of an on-chip microfluidic SERS decoding strategy for a micro suspension array.

  20. Modified Suboptimal Iterative Decoding for Regular Repeat-Accumulate Coded Signals

    Directory of Open Access Journals (Sweden)

    Muhammad Thamer Nesr

    2017-05-01

    Full Text Available In this work, two algorithms are suggested in order to improve the performance of systematic Repeat-Accumulate (RA) decoding. The first is accomplished by inserting pilot symbols into the data stream entering the encoder. The positions where pilots should be inserted are chosen so as to improve the minimum Hamming distance and/or to reduce the error coefficients of the code. The second proposed algorithm utilizes the inserted pilots to estimate scaling (correction) factors. A two-dimensional correction factor is suggested in order to enhance the performance of traditional Min-Sum decoding of regular repeat-accumulate codes. The correction factors can be obtained adaptively by calculating the mean square difference between the values of the received pilots and the a-posteriori bit- and check-node data related to them, as produced by the Min-Sum (MS) decoder.
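
The scaling-factor idea can be illustrated with a generic normalized min-sum check-node update. The fixed factor `alpha` below is a placeholder: the paper's contribution is precisely to estimate such correction factors adaptively (and two-dimensionally) from pilots, which is not reproduced here.

```python
# Normalized min-sum check-node update: each outgoing message is the
# product of the signs times the minimum magnitude of the other incoming
# LLRs, scaled by a correction factor alpha < 1 that compensates the
# min-sum overestimate relative to exact sum-product.
def check_node_update(llrs, alpha=0.75):
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(alpha * sign * mag)
    return out

msgs = check_node_update([2.0, -1.5, 0.5, 3.0])
# msgs == [-0.375, 0.375, -1.125, -0.375]
```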

  1. Joint scheduling and resource allocation for multiple video decoding tasks

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2008-01-01

    In this paper we propose a joint resource allocation and scheduling algorithm for video decoding on a resource-constrained system. By decomposing a multimedia task into decoding jobs using quality-driven priority classes, we demonstrate using queuing-theoretic analysis that significant power savings can be achieved with only small video quality degradation, without requiring the encoder to adapt its transmitted bitstream. Based on this scheduling algorithm, we propose an algorithm for maximizing the sum of video qualities in a multiple task environment, while minimizing system energy consumption, without requiring tasks to reveal information about their performance to the system or to other potentially exploitative applications. Importantly, we offer a method to optimize the performance of multiple video decoding tasks on an energy-constrained system, while protecting private information about the system and the applications.

  2. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  3. Algebraic Fast-Decodable Relay Codes for Distributed Communications

    CERN Document Server

    Hollanti, Camilla

    2012-01-01

    In this paper, fast-decodable lattice code constructions are designed for the nonorthogonal amplify-and-forward (NAF) multiple-input multiple-output (MIMO) channel. The constructions are based on different types of algebraic structures, e.g. quaternion division algebras. When satisfying certain properties, these algebras provide us with codes whose structure naturally reduces the decoding complexity. The complexity can be further reduced by shortening the block length, i.e., by considering rectangular codes called less than minimum delay (LMD) codes.

  4. New Iterated Decoding Algorithm Based on Differential Frequency Hopping System

    Institute of Scientific and Technical Information of China (English)

    LIANG Fu-lin; LUO Wei-xiong

    2005-01-01

    A new iterated decoding algorithm is proposed for a differential frequency hopping (DFH) encoder concatenated with a multi-frequency shift-keying (MFSK) modulator. According to the structure of the frequency hopping (FH) pattern trellis produced by the DFH function, maximum a posteriori (MAP) probability theory is applied to realize its iterative decoding. Further, the initial conditions of the new MAP-based iterative algorithm are modified for better performance. Finally, simulation results compared with those of traditional algorithms show good anti-interference performance.

  5. Joint Estimation and Decoding of Space-Time Trellis Codes

    Directory of Open Access Journals (Sweden)

    Zhang Jianqiu

    2002-01-01

    Full Text Available We explore the possibility of using an emerging tool in statistical signal processing, sequential importance sampling (SIS), for joint estimation and decoding of space-time trellis codes (STTC). First, we provide background on SIS, and then we discuss its application to space-time trellis code (STTC) systems. It is shown through simulations that SIS is suitable for joint estimation and decoding of STTC with time-varying flat-fading channels when phase ambiguity is avoided. We use a design criterion for STTCs and temporally correlated channels that combats phase ambiguity without pilot signaling, and we show by simulations that the design is valid.
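
The importance-weighting machinery behind SIS can be shown with a deliberately simple problem: sequentially estimating an unknown constant "channel gain" from noisy observations. All numbers (true gain, noise level, particle count) are invented, and the paper's joint estimation/decoding over time-varying channels is far richer than this sketch.

```python
import random, math

random.seed(7)
theta = 1.0  # true gain (assumed for the demo)
obs = [theta + random.gauss(0, 0.5) for _ in range(30)]

N = 500
particles = [random.gauss(0, 2) for _ in range(N)]  # draws from the prior
weights = [1.0 / N] * N
for y in obs:
    # sequential importance-weight update by the observation likelihood
    weights = [w * math.exp(-0.5 * ((y - p) / 0.5) ** 2)
               for w, p in zip(weights, particles)]
    s = sum(weights)
    weights = [w / s for w in weights]

# posterior-mean estimate from the weighted particle cloud
estimate = sum(w * p for w, p in zip(weights, particles))
```

In practice SIS is paired with resampling to combat the weight degeneracy this plain version suffers from as the horizon grows.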

  6. Decoding Brain States Based on Magnetoencephalography From Prespecified Cortical Regions.

    Science.gov (United States)

    Zhang, Jinyin; Li, Xin; Foldes, Stephen T; Wang, Wei; Collinger, Jennifer L; Weber, Douglas J; Bagić, Anto

    2016-01-01

    Brain state decoding based on whole-head MEG has been extensively studied over the past decade. Recent MEG applications pose an emerging need of decoding brain states based on MEG signals originating from prespecified cortical regions. Toward this goal, we propose a novel region-of-interest-constrained discriminant analysis algorithm (RDA) in this paper. RDA integrates linear classification and beamspace transformation into a unified framework by formulating a constrained optimization problem. Our experimental results based on human subjects demonstrate that RDA can efficiently extract the discriminant pattern from prespecified cortical regions to accurately distinguish different brain states.

  7. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    Science.gov (United States)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-controlled FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by
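
The "train against optimal EMG patterns by gradient descent" step can be mimicked on synthetic data: learn a linear map from feature vectors to target activation patterns by descending the squared error. Every quantity here (dimensions, data, learning rate) is made up; this only illustrates the style of update, not the paper's cortical decoder.

```python
import random

random.seed(0)
n_feat, n_musc, n_samp = 5, 3, 200
# synthetic ground-truth linear map, inputs, and target patterns
W_true = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(n_musc)]
X = [[random.gauss(0, 1) for _ in range(n_feat)] for _ in range(n_samp)]
Y = [[sum(W_true[m][f] * x[f] for f in range(n_feat))
      for m in range(n_musc)] for x in X]

W = [[0.0] * n_feat for _ in range(n_musc)]
lr = 0.05
for _ in range(300):  # full-batch gradient descent on squared error
    grad = [[0.0] * n_feat for _ in range(n_musc)]
    for x, y in zip(X, Y):
        pred = [sum(W[m][f] * x[f] for f in range(n_feat)) for m in range(n_musc)]
        for m in range(n_musc):
            err = pred[m] - y[m]
            for f in range(n_feat):
                grad[m][f] += err * x[f] / n_samp
    for m in range(n_musc):
        for f in range(n_feat):
            W[m][f] -= lr * grad[m][f]

# final mean squared prediction error over the training set
mse = sum((sum(W[m][f] * x[f] for f in range(n_feat)) - y[m]) ** 2
          for x, y in zip(X, Y) for m in range(n_musc)) / (n_samp * n_musc)
```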

  8. Conventional Tanner Graph for Recursive Convolutional Codes and Associated Decoding

    Institute of Scientific and Technical Information of China (English)

    SUN Hong

    2001-01-01

    A different representation of recursive systematic convolutional (RSC) codes is proposed. This representation can be realized by a conventional Tanner graph. The graph becomes a tree by introducing a hidden edge. It is shown that the sum-product algorithm applied to this graph model is equivalent to the BCJR algorithm for turbo decoding, with lower computational complexity. The message-passing chain of the BCJR algorithm is presented more exactly in the graph. In addition, the proposed representation of RSC codes provides an efficient method to set up the trellis, and the conventional Tanner graph for RSC codes directly provides the architecture for decoding.

  9. Optimal encoding and decoding of a spin direction

    CERN Document Server

    Bagán, E; Brey, A; Muñoz-Tàpia, R; Tarrach, Rolf

    2001-01-01

    For a system of N spins 1/2 there are quantum states that can encode a direction in an intrinsic way. Information on this direction can later be decoded by means of a quantum measurement. We present here the optimal encoding and decoding procedure using the fidelity as a figure of merit. We compute the maximal fidelity and prove that it is directly related to the largest zeroes of the Legendre and Jacobi polynomials. We show that this maximal fidelity approaches unity quadratically in 1/N. We also discuss this result in terms of the dimension of the encoding Hilbert space.
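
The quoted connection to Legendre zeros can be probed numerically. The sketch below evaluates P_n by the three-term recurrence, locates its largest zero by a downward scan followed by bisection, and checks that the gap 1 - x_max shrinks roughly quadratically in 1/n; the paper's exact fidelity formula, and which polynomial index corresponds to N spins, are not reproduced here.

```python
def legendre(n, x):
    # three-term recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1 if n > 0 else p0

def largest_zero(n, steps=20000):
    prev = legendre(n, 1.0)          # P_n(1) = 1, positive
    for i in range(steps - 1, -1, -1):
        x = i / steps                # scan downward from 1
        cur = legendre(n, x)
        if prev == 0 or prev * cur < 0:
            lo, hi = x, x + 1 / steps
            for _ in range(60):      # bisection inside the bracket
                mid = (lo + hi) / 2
                if legendre(n, mid) * legendre(n, hi) <= 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2
        prev = cur
    raise ValueError("no zero found")

gaps = {n: 1 - largest_zero(n) for n in (5, 10, 20)}  # shrinks ~ 1/n^2
```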

  10. PERFORMANCE OF THREE STAGE TURBO-EQUALIZATION-DECODING

    Institute of Scientific and Technical Information of China (English)

    Kazi Takpaya

    2003-01-01

    An increasing demand for high data rate transmission and protection over bandlimited channels with severe inter-symbol interference has resulted in a flurry of activity to improve channel equalization. In conjunction with equalization, channel coding-decoding can be employed to improve system performance. In this letter, the performance of three-stage turbo equalization-decoding employing the log maximum a posteriori probability algorithm is experimentally evaluated with a fading simulator. The BER is evaluated using various information sequence and interleaver sizes, taking into account that the communication medium is a noisy inter-symbol interference channel.

  11. Joint source/channel iterative arithmetic decoding with JPEG 2000 image transmission application

    Science.gov (United States)

    Zaibi, Sonia; Zribi, Amin; Pyndiah, Ramesh; Aloui, Nadia

    2012-12-01

    Motivated by recent results in Joint Source/Channel coding and decoding, we consider the decoding problem of Arithmetic Codes (AC). In fact, in this article we provide different approaches which allow one to unify the arithmetic decoding and error correction tasks. A novel length-constrained arithmetic decoding algorithm based on Maximum A Posteriori sequence estimation is proposed. The latter is based on soft-input decoding using a priori knowledge of the source-symbol sequence and the compressed bit-stream lengths. Performance in the case of transmission over an Additive White Gaussian Noise channel is evaluated in terms of Packet Error Rate. Simulation results show that the proposed decoding algorithm leads to significant performance gain while exhibiting very low complexity. The proposed soft input arithmetic decoder can also generate additional information regarding the reliability of the compressed bit-stream components. We consider the serial concatenation of the AC with a Recursive Systematic Convolutional Code, and perform iterative decoding. We show that, compared to tandem and to trellis-based Soft-Input Soft-Output decoding schemes, the proposed decoder exhibits the best performance/complexity tradeoff. Finally, the practical relevance of the presented iterative decoding system is validated under an image transmission scheme based on the JPEG 2000 standard and excellent results in terms of decoded image quality are obtained.

  12. 47 CFR 11.12 - Two-tone Attention Signal encoder and decoder.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Two-tone Attention Signal encoder and decoder... SYSTEM (EAS) General § 11.12 Two-tone Attention Signal encoder and decoder. Existing two-tone Attention... Attention Signal decoder will no longer be required and the two-tone Attention Signal will be used...

  13. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  14. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance.
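
A minimal worked instance (not the paper's codes): over GF(2), with nested codes C2 ⊆ C1 and A = [[1,1],[0,1]], the matrix-product construction reduces to the classical (u | u+v) construction, and its minimum distance can be verified by brute force.

```python
from itertools import product

G1 = [[1, 0], [0, 1]]  # C1: all of GF(2)^2
G2 = [[1, 1]]          # C2: length-2 repetition code, nested in C1
A = [[1, 1], [0, 1]]   # non-singular by columns (upper triangular)

def codewords(G):
    # enumerate all GF(2) combinations of the generator rows
    k, n = len(G), len(G[0])
    return [[sum(b * g[j] for b, g in zip(bits, G)) % 2 for j in range(n)]
            for bits in product([0, 1], repeat=k)]

code = set()
for c1 in codewords(G1):
    for c2 in codewords(G2):
        # block j of the codeword is c1 * A[0][j] + c2 * A[1][j]
        left = [(c1[i] * A[0][0] + c2[i] * A[1][0]) % 2 for i in range(2)]
        right = [(c1[i] * A[0][1] + c2[i] * A[1][1]) % 2 for i in range(2)]
        code.add(tuple(left + right))

dmin = min(sum(w) for w in code if any(w))  # dmin == 2 for this toy code
```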

  15. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    Full Text Available A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  16. Homogeneous Interpolation Problem and Key Equation for Decoding Reed-Solomon Codes

    Institute of Scientific and Technical Information of China (English)

    忻鼎稼

    1994-01-01

    The concept of homogeneous interpolation problem (HIP) over fields is introduced. It is discovered that solving HIP over finite fields is equivalent to decoding Reed-Solomon (RS) codes. The Welch-Berlekamp algorithm for decoding RS codes is derived; besides, by introducing the concept of an incomplete locator of error patterns, an algorithm called incomplete iterative decoding is established.

  17. TC81220F (HAWK) MPEG 2 system decoder LSI; MPEG2 system decoder LSI TC81220F (HAWK)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    Satellite broadcasting, cable broadcasting, and terrestrial broadcasting are gradually being digitized. In video and audio data communication, the MPEG2 (Moving Picture Experts Group 2) compression and decompression technology is important. A system LSI that includes an MPEG2 decoder in a receiving set (Set Top Box) is required for each broadcasting service. To reduce the system cost, Toshiba developed a TX3904 MCU (microcontroller) that controls the system, a transport processor that selects the packet-multiplexed data, and the TC81220F (HAWK), which integrates the MPEG2 audio and video decoders on one chip. (translated by NEDO)

  18. Binary Neutron Star Mergers

    Directory of Open Access Journals (Sweden)

    Joshua A. Faber

    2012-07-01

    Full Text Available We review the current status of studies of the coalescence of binary neutron star systems. We begin with a discussion of the formation channels of merging binaries and we discuss the most recent theoretical predictions for merger rates. Next, we turn to the quasi-equilibrium formalisms that are used to study binaries prior to the merger phase and to generate initial data for fully dynamical simulations. The quasi-equilibrium approximation has played a key role in developing our understanding of the physics of binary coalescence and, in particular, of the orbital instability processes that can drive binaries to merger at the end of their lifetimes. We then turn to the numerical techniques used in dynamical simulations, including relativistic formalisms, (magneto-)hydrodynamics, gravitational-wave extraction techniques, and nuclear microphysics treatments. This is followed by a summary of the simulations performed across the field to date, including the most recent results from both fully relativistic and microphysically detailed simulations. Finally, we discuss the likely directions for the field as we transition from the first to the second generation of gravitational-wave interferometers and while supercomputers reach the petascale frontier.

  19. Skewed Binary Search Trees

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel

    2006-01-01

    It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor over the running time for a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree ... can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, we in this paper study the class of skewed binary search trees. ... For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). In this paper we present an experimental study of various memory layouts of static skewed binary search trees, where each ...
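
The defining property is easy to state in code: a static skewed tree always gives the left subtree a fixed fraction `alpha` of the keys. The sketch below (parameters invented; the paper's memory-layout experiments are not reproduced) builds such trees and compares depths.

```python
# Build a static skewed BST over sorted keys: the root is placed so the
# left subtree holds roughly an alpha fraction of the keys.
def build_skewed(keys, alpha):
    if not keys:
        return None
    i = min(int(alpha * len(keys)), len(keys) - 1)
    return (keys[i], build_skewed(keys[:i], alpha), build_skewed(keys[i+1:], alpha))

def depth(tree):
    if tree is None:
        return 0
    return 1 + max(depth(tree[1]), depth(tree[2]))

keys = list(range(1023))
balanced = build_skewed(keys, 0.5)  # alpha = 1/2: perfectly balanced
skewed = build_skewed(keys, 0.2)    # deeper along the right spine
```

A skewed tree trades a worse comparison count for cheaper branches in the frequent direction, which is the tradeoff the abstract's experiments measure against memory layout.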

  20. Polar Coding with CRC-Aided List Decoding

    Science.gov (United States)

    2015-08-01

    decoder estimates û1, . . . , ûN , one at a time, in order. For conve- nience, we introduce some non-standard notation. For any k ≤ N and any sequence of... recursive formula, but the true probability cannot be computed efficiently. The estimate differs from the true probability because the recursive formula

  1. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model) which is in general much less than. The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.

  2. Complete ML Decoding of the (73,45) PG Code

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Hjaltason, Johan

    2005-01-01

    Our recent proof of the completeness of decoding by list bit flipping is reviewed. The proof is based on an enumeration of all cosets of low weight in terms of their minimum weight and syndrome weight. By using a geometric description of the error patterns we characterize all remaining cosets.

  3. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven differe

  4. Spread codes and spread decoding in network coding

    OpenAIRE

    Manganiello, F; Gorla, E.; Rosenthal, J.

    2008-01-01

    In this paper we introduce the class of spread codes for the use in random network coding. Spread codes are based on the construction of spreads in finite projective geometry. The major contribution of the paper is an efficient decoding algorithm of spread codes up to half the minimum distance.

  5. The Effectiveness of Dictionary Examples in Decoding: The Case of ...

    African Journals Online (AJOL)

    rbr

    COBUILD English Language Dictionary, relies entirely on a corpus for its examples ... corpus examples, while intermediate learners can learn from 'controlled' examples ... learners' dictionaries, a possible indication of their difficulties with them. ... dictionaries are mostly used for decoding rather than encoding linguistic activities.

  6. Relationships between grammatical encoding and decoding : an experimental psycholinguistic study

    NARCIS (Netherlands)

    Olsthoorn, Nomi Maria

    2007-01-01

    Although usually considered distinct processes, grammatical encoding and decoding have many theoretical and empirical commonalities. In two series of experiments relationships between the two processes are explored. The first series uses a dual task (edited reading aloud (ERA)) paradigm to test the

  7. Phonological Awareness and Decoding Skills in Deaf Adolescents

    Science.gov (United States)

    Gravenstede, L.

    2009-01-01

    This study investigated the phonological awareness skills of a group of deaf adolescents and how these skills correlated with decoding skills (single word and non-word reading) and receptive vocabulary. Twenty, congenitally profoundly deaf adolescents with at least average nonverbal cognitive skills were tested on a range of phonological awareness…

  8. Real Time Decoding of Color Symbol for Optical Positioning System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2015-01-01

    Full Text Available This paper presents the design and real-time decoding of a color symbol that can be used as a reference marker for optical navigation. The designed symbol has a circular shape and is printed on paper using two distinct colors. This pair of colors is selected based on the highest achievable signal to noise ratio. The symbol is designed to carry eight bits of information. Real time decoding of this symbol is performed using a heterogeneous combination of a Field Programmable Gate Array (FPGA) and a microcontroller. An image sensor having a resolution of 1600 by 1200 pixels is used to capture images of symbols in complex backgrounds. Dynamic image segmentation, component labeling and feature extraction were performed on the FPGA. The region of interest was further computed from the extracted features. Feature data belonging to the symbol was sent from the FPGA to the microcontroller. Image processing tasks are partitioned between the FPGA and microcontroller based on data intensity. Experiments were performed to verify the rotational independence of the symbols. The maximum distance between camera and symbol allowing for correct detection and decoding was analyzed. Experiments were also performed to analyze the number of generated image components and sub-pixel precision versus different light sources and intensities. The proposed hardware architecture can process up to 55 frames per second for accurate detection and decoding of symbols at two-megapixel resolution. The power consumption of the complete system is 342 mW.
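
Of the pipeline stages named above, component labeling is the easiest to sketch in software. The flood-fill version below is a stand-in: streaming FPGA implementations typically use two-pass label-equivalence schemes instead, and the 4-connectivity and the test image are assumptions of this demo.

```python
from collections import deque

# Label 4-connected components of a binary image by BFS flood fill.
def label_components(img):
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not labels[y][x]:
                count += 1
                labels[y][x] = count
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] \
                                and not labels[ny][nx]:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return count, labels

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
n, lab = label_components(img)  # n == 3 components
```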

  9. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on a gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
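
    A minimal single-bit GDBF sketch is shown below; the parity-check matrix, the bipolar convention, and the stopping rule are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def gdbf_decode(H, y, max_iters=100):
    """Single-bit gradient descent bit flipping (GDBF), bipolar convention.

    H : (m, n) parity-check matrix over {0, 1}
    y : length-n received vector (+1/-1 signaling plus noise)
    Returns a hard-decision bipolar estimate in {+1, -1}^n.
    """
    x = np.where(y >= 0, 1, -1)                  # initial hard decisions
    checks = [np.flatnonzero(row) for row in H]  # bit indices of each check
    for _ in range(max_iters):
        syn = np.array([np.prod(x[c]) for c in checks])  # +1 = satisfied
        if np.all(syn == 1):
            break                                # valid codeword found
        # local objective gradient for every bit: channel term plus the
        # bipolar syndromes of the incident checks
        delta = x * y
        for j, c in enumerate(checks):
            delta[c] += syn[j]
        k = int(np.argmin(delta))                # least reliable bit
        x[k] = -x[k]                             # flip it (plain single-bit
                                                 # GDBF may oscillate; practical
                                                 # variants add escape steps)
    return x
```

    The flipped bit is the one whose local objective term is smallest, i.e. whose flip yields the largest objective increase.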

  10. Name that tune: decoding music from the listening brain.

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.; Blokland, Y.M.; Sadakata, M.; Desain, P.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven different…

  12. Decoding a combined amplitude modulated and frequency modulated signal

    DEFF Research Database (Denmark)

    2015-01-01

    The present disclosure relates to a method for decoding a combined AM/FM encoded signal, comprising the steps of: combining said encoded optical signal with light from a local oscillator configured with a local oscillator frequency; converting the combined local oscillator and encoded optical sig...

  13. Method for Viterbi decoding of large constraint length convolutional codes

    Science.gov (United States)

    Hsu, In-Shek; Truong, Trieu-Kie; Reed, Irving S.; Jing, Sun

    1988-05-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a design parameter and K is the constraint length. The surviving path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, and read out the stored branch decisions of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
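
    As a point of reference, a minimal textbook Viterbi decoder with full survivor-path storage is sketched below for a rate-1/2, K=3 code (generators 7 and 5 octal); the block trace-back described above replaces these unbounded survivor lists with a fixed-size array read out every NK time units. The generator choice and data layout are illustrative assumptions.

```python
def conv_encode(bits):
    """Rate-1/2, K=3 convolutional encoder, generators 7 (111) and 5 (101)."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                       # register [b, s1, s0]
        out.append(bin(reg & 0b111).count('1') & 1)  # parity under generator 111
        out.append(bin(reg & 0b101).count('1') & 1)  # parity under generator 101
        state = reg >> 1                             # shift in the new bit
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi keeping full survivor histories (no block trace-back)."""
    INF = float('inf')
    pm = [0, INF, INF, INF]          # path metrics; encoder starts in state 0
    paths = [[], [], [], []]         # survivor input-bit histories per state
    for t in range(len(rx) // 2):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_pm, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if pm[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                o0 = bin(reg & 0b111).count('1') & 1
                o1 = bin(reg & 0b101).count('1') & 1
                ns = reg >> 1
                m = pm[s] + (o0 != r0) + (o1 != r1)  # Hamming branch metric
                if m < new_pm[ns]:                   # keep the better survivor
                    new_pm[ns] = m
                    new_paths[ns] = paths[s] + [b]
        pm, paths = new_pm, new_paths
    best = min(range(4), key=lambda s: pm[s])
    return paths[best]
```

    Flushing the encoder with K-1 zeros terminates the trellis in a known state; the article's method instead commits decisions after every NK-symbol block.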

  14. Name that tune: Decoding music from the listening brain

    NARCIS (Netherlands)

    Schaefer, R.S.; Farquhar, J.D.R.; Blokland, Y.M.; Sadakata, M.; Desain, P.W.M.

    2011-01-01

    In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the time structure in music makes it especially suitable for decoding perception from EEG signals. While excluding music with vocals, we classified the perception of seven

  16. Error Locked Encoder and Decoder for Nanomemory Application

    Directory of Open Access Journals (Sweden)

    Y. Sharath

    2014-03-01

    Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.

  17. Decoding the Disciplines: An Approach to Scientific Thinking

    Science.gov (United States)

    Pinnow, Eleni

    2016-01-01

    The Decoding the Disciplines methodology aims to teach students to think like experts in discipline-specific tasks. The central aspect of the methodology is to identify a bottleneck in the course content: a particular topic that a substantial number of students struggle to master. The current study compared the efficacy of standard lecture and…

  18. Sub-quadratic decoding of one-point Hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power...

  19. The Fluid Reading Primer: Animated Decoding Support for Emergent Readers.

    Science.gov (United States)

    Zellweger, Polle T.; Mackinlay, Jock D.

    A prototype application called the Fluid Reading Primer was developed to help emergent readers with the process of decoding written words into their spoken forms. The Fluid Reading Primer is part of a larger research project called Fluid Documents, which is exploring the use of interactive animation of typography to show additional information in…

  20. A Novel Decoder for Unknown Diversity Channels Employing Space-Time Codes

    Directory of Open Access Journals (Sweden)

    Erez Elona

    2002-01-01

    Full Text Available We suggest new decoding techniques for diversity channels employing space-time codes (STC) when the channel coefficients are unknown to both transmitter and receiver. Most of the existing decoders for unknown diversity channels employ a training sequence in order to estimate the channel. These decoders use the estimates of the channel coefficients to perform maximum likelihood (ML) decoding. We suggest an efficient implementation of the generalized likelihood ratio test (GLRT) algorithm that improves the performance with only a slight increase in complexity. We also suggest an energy weighted decoder (EWD) that shows additional improvement without further increase in computational complexity.

  1. On Pseudocodewords and Decision Regions of Linear Programming Decoding of HDPC Codes

    CERN Document Server

    Lifshitz, Asi

    2011-01-01

    In this paper we explore the decision regions of Linear Programming (LP) decoding. We compare the decision regions of an LP decoder, a Belief Propagation (BP) decoder and the optimal Maximum Likelihood (ML) decoder. We study the effect of minimal-weight pseudocodewords on LP decoding. We present global optimization as a method for finding the minimal pseudoweight of a given code as well as the number of minimal-weight generators. We present a complete pseudoweight distribution for the [24, 12, 8] extended Golay code, and provide justifications of why the pseudoweight distribution alone cannot be used for obtaining a tight upper bound on the error probability.

  2. New concatenated soft decoding of Reed-Solomon codes with lower complexities

    Institute of Scientific and Technical Information of China (English)

    BIAN Yin-bing; FENG Guang-zeng

    2009-01-01

    To improve error-correcting performance, an iterative concatenated soft decoding algorithm for Reed-Solomon (RS) codes is presented in this article. This algorithm brings advantages in both complexity and performance over presently popular soft decoding algorithms. The proposed algorithm consists of two powerful soft decoding techniques, adaptive belief propagation (ABP) and the box and match algorithm (BMA), which are serially concatenated by the accumulated log-likelihood ratio (ALLR). Simulation results show that, compared with the ABP and ABP-BMA algorithms, the proposed algorithm brings additional decoding gain and a better tradeoff between decoding performance and complexity.

  3. Binary Population Synthesis Study

    Institute of Scientific and Technical Information of China (English)

    HAN Zhanwen

    2011-01-01

    Binary population synthesis (BPS), an approach to evolving millions of stars (including binaries) simultaneously, plays a crucial role in our understanding of stellar physics, the structure and evolution of galaxies, and cosmology. We proposed and developed a BPS approach and used it to investigate the formation of many peculiar stars such as hot subdwarf stars, progenitors of type Ia supernovae, barium stars, CH stars, planetary nebulae, double white dwarfs, blue stragglers, contact binaries, etc. We also established an evolutionary population synthesis (EPS) model, the Yunnan Model, which takes binary interactions into account for the first time. We applied our model to the origin of hot subdwarf stars in the study of elliptical galaxies and explained their far-UV radiation.

  4. Binary and Millisecond Pulsars

    Directory of Open Access Journals (Sweden)

    Lorimer Duncan R.

    2008-11-01

    Full Text Available We review the main properties, demographics and applications of binary and millisecond radio pulsars. Our knowledge of these exciting objects has greatly increased in recent years, mainly due to successful surveys which have brought the known pulsar population to over 1800. There are now 83 binary and millisecond pulsars associated with the disk of our Galaxy, and a further 140 pulsars in 26 of the Galactic globular clusters. Recent highlights include the discovery of the young relativistic binary system PSR J1906+0746, a rejuvenation of globular cluster pulsar research including growing numbers of pulsars with masses in excess of 1.5 M_⊙, a precise measurement of relativistic spin precession in the double pulsar system and a Galactic millisecond pulsar in an eccentric (e = 0.44) orbit around an unevolved companion.

  5. Embedding intensity image in grid-cross down-sampling (GCD) binary holograms based on block truncation coding

    Science.gov (United States)

    Tsang, P. W. M.; Poon, T.-C.; Jiao, A. S. M.

    2013-09-01

    Past research has demonstrated that a three-dimensional (3D) intensity image can be preserved to a reasonable extent with a binary Fresnel hologram called the grid-cross down-sampling (GCD) binary hologram, if the intensity image is first down-sampled with a grid-cross lattice prior to the generation of the hologram. It has also been shown that the binary hologram generated with such means can be embedded with a binary image without causing observable artifact on the reconstructed image. Hence, the method can be further extended to embed an intensity image by binarizing it with error diffusion. Despite the favorable findings, the visual quality of the retrieved embedded intensity image from the hologram is rather poor. In this paper, we propose a method to overcome this problem. First, we employ the block truncation coding (BTC) to convert the intensity image into a binary bit stream. Next, the binary bit stream is embedded into the GCD binary hologram. The embedded image can be recovered with a BTC decoder, as well as a noise suppression scheme if the hologram is partially damaged. Experimental results demonstrate that with our proposed method, the visual quality of the embedded intensity image is superior to that of the existing approach, and the extracted image preserves favorably even if the binary hologram is damaged and contaminated with noise.
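
    Block truncation coding itself is standard; a minimal sketch of the per-block encoder and decoder is shown below (the embedding of the resulting bit stream into the GCD binary hologram is specific to the paper and not reproduced here).

```python
import numpy as np

def btc_encode(block):
    """Classic block truncation coding: 1-bit bitmap plus two levels."""
    m, s = block.mean(), block.std()
    bitmap = block >= m                  # 1 bit per pixel
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                      # flat block: a single level suffices
        return bitmap, m, m
    lo = m - s * np.sqrt(q / (n - q))    # level for pixels below the mean
    hi = m + s * np.sqrt((n - q) / q)    # level for pixels at/above the mean
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    """Reconstruct the block from the bitmap and the two levels."""
    return np.where(bitmap, hi, lo)
```

    The two levels are chosen so that the mean and variance of each block are preserved exactly, which is what makes BTC a compact yet faithful binary representation of an intensity image.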

  6. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend occur at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed: 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing; 2) MMSE-GDFE preprocessing, followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  7. Sums of Spike Waveform Features for Motor Decoding

    Directory of Open Access Journals (Sweden)

    Jie Li

    2017-07-01

    Full Text Available Traditionally, the key step before decoding motor intentions from cortical recordings is spike sorting, the process of identifying which neuron was responsible for an action potential. Recently, researchers have started investigating approaches to decoding which omit the spike sorting step, by directly using information about action potentials' waveform shapes in the decoder, though this approach is not yet widespread. In particular, one recent approach involves computing the moments of waveform features and using these moment values as inputs to decoders. This computationally inexpensive approach was shown to be comparable in accuracy to traditional spike sorting. In this study, we use offline data recorded from two Rhesus monkeys to further validate this approach. We also modify this approach by using sums of exponentiated features of spikes, rather than moments. Our results show that using waveform feature sums facilitates significantly higher hand movement reconstruction accuracy than using waveform feature moments, though the magnitudes of the differences are small. We find that using the sums of one simple feature, the spike amplitude, allows better offline decoding accuracy than traditional spike sorting by an expert (correlation of 0.767, 0.785 vs. 0.744, 0.738, respectively, for two monkeys; average 16% reduction in mean-squared error), as well as unsorted threshold crossings (0.746, 0.776; average 9% reduction in mean-squared error). Our results suggest that the sums-of-features framework has potential as an alternative to both spike sorting and using unsorted threshold crossings, if developed further. Also, we present data comparing sorted vs. unsorted spike counts in terms of offline decoding accuracy. Traditional sorted spike counts do not include waveforms that do not match any template ("hash"), but threshold crossing counts do include this hash. On our data and in previous work, hash contributes to decoding accuracy. Thus, using the
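
    The sums-of-features idea can be sketched as follows; the bin layout, the choice of spike amplitude as the feature, and the exponents are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def feature_sums(spike_times, amplitudes, bin_edges, powers=(1, 2)):
    """Per-bin sums of exponentiated spike-waveform features.

    Each column holds sum(feature ** p) over the spikes falling in a time
    bin, to be fed to a linear decoder in place of sorted spike counts.
    """
    idx = np.digitize(spike_times, bin_edges) - 1   # bin index of each spike
    n_bins = len(bin_edges) - 1
    valid = (idx >= 0) & (idx < n_bins)             # drop out-of-range spikes
    feats = np.zeros((n_bins, len(powers)))
    for col, p in enumerate(powers):
        # unbuffered accumulation handles repeated bin indices correctly
        np.add.at(feats[:, col], idx[valid], amplitudes[valid] ** p)
    return feats
```

    With `powers=(1,)` and unit amplitudes this degenerates to unsorted threshold-crossing counts, which is what makes the framework a strict generalization of that baseline.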

  8. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    Science.gov (United States)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method in context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a total speed-up factor of 205% to 144% over various resolutions and quantization steps.

  9. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  10. A Generalization Belief Propagation Decoding Algorithm for Polar Codes Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    Full Text Available We propose a generalization belief propagation (BP) decoding algorithm based on particle swarm optimization (PSO) to improve the performance of polar codes. Through analysis of the existing BP decoding algorithm, we first introduce a probability modifying factor to each node of the BP decoder, so as to enhance the error-correcting capacity of the decoding. Then, we generalize the BP decoding algorithm based on these modifying factors and derive the probability update equations for the proposed decoding. Based on the new probability update equations, we show the intrinsic relationship between the existing decoding algorithms. Finally, in order to achieve the best performance, we formulate an optimization problem to find the optimal probability modifying factors for the proposed decoding algorithm. Furthermore, a method based on the modified PSO algorithm is introduced to solve this optimization problem. Numerical results show that the proposed generalization BP decoding algorithm achieves better performance than the existing BP decoding, which suggests the effectiveness of the proposed decoding algorithm.
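
    The PSO search component can be sketched independently of the decoder; the inertia and acceleration constants below are common textbook values, and the toy objective merely stands in for the decoder's error-rate cost over the probability modifying factors.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimization starting in [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia / acceleration
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                    # update personal bests
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()     # update global best
    return gbest, float(pbest_val.min())
```

    In the paper's setting, `f` would map a candidate vector of modifying factors to a simulated block error rate, which is far more expensive than the toy quadratic used here.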

  11. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds

  12. NEW ITERATIVE SUPER-TRELLIS DECODING WITH SOURCE A PRIORI INFORMATION FOR VLCS WITH TURBO CODES

    Institute of Scientific and Technical Information of China (English)

    Liu Jianjun; Tu Guofang; Wu Weiren

    2007-01-01

    A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes, utilizing a new super-trellis decoding algorithm, is presented in this letter. The basic idea of our decoding algorithm is that source a priori information in the form of bit transition probabilities corresponding to the VLC tree can be derived directly from sub-state transitions in the new composite-state super-trellis. A Maximum Likelihood (ML) decoding algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme obtains a clear coding gain, especially for Reversible Variable Length Codes (RVLCs), when compared with classical separate turbo decoding and previous joint decoding that does not consider source statistics.

  13. Iterative decoding of Generalized Parallel Concatenated Block codes using cyclic permutations

    Directory of Open Access Journals (Sweden)

    Hamid Allouch

    2012-09-01

    Full Text Available Iterative decoding techniques have gained popularity due to their performance and their application in most communication systems. In this paper, we present a new application of our iterative decoder to GPCB (Generalized Parallel Concatenated Block) codes, which uses cyclic permutations. We introduce a new variant of the component decoder. After extensive simulations, the obtained results are very promising compared with several existing methods. We evaluate the effects of various parameters: component codes, interleaver size, block size, and the number of iterations. Three interesting results are obtained. The first is that the performance in terms of BER (Bit Error Rate) of the new constituent decoder is close to that of the original one. Second, our turbo decoding outperforms another turbo decoder for some linear block codes. Third, the proposed iterative decoding of the GPCB-BCH (75, 51) code is about 2.1 dB from its Shannon limit.

  14. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

    Full Text Available The paper proposes a low-complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used to reduce the complexity of the BP algorithm. Based on this decoding approach, an FPGA-based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements in decoding throughput and BER performance compared with state-of-the-art approaches.
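
    For reference, the exact BP check-node rule and the min-sum style approximation it is compared against can be sketched as below; the scaling factor 0.8 is a typical illustrative value, and the paper's piecewise-linear correction would take its place.

```python
import numpy as np

def check_update_exact(L):
    """Exact BP check-node rule: 2 * atanh(prod tanh(L_i / 2))."""
    L = np.asarray(L, dtype=float)
    return 2.0 * np.arctanh(np.prod(np.tanh(L / 2.0)))

def check_update_norm_min_sum(L, alpha=0.8):
    """Normalized min-sum: alpha * (product of signs) * (min magnitude)."""
    L = np.asarray(L, dtype=float)
    return alpha * np.prod(np.sign(L)) * np.abs(L).min()
```

    Plain min-sum (alpha = 1) always matches the exact rule in sign but overestimates its magnitude, which is why a scaling or piecewise-linear correction recovers most of the BP gain at a fraction of the hardware cost.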

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    Science.gov (United States)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis, only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  16. Binary random systematic erasure code for RAID system

    Science.gov (United States)

    Teng, Pengguo; Wang, Xiaojing; Chen, Liang; Yuan, Dezhai

    2017-03-01

    As data scale expands, storage systems grow in size and complexity, and methodologies to recover from simultaneous disk and sector failures, together with system scalability, become inevitable requirements. To ensure high reliability and flexible scalability, erasure codes with high fault tolerance and flexibility are required. In this paper, we present a class of erasure codes satisfying these requirements, referred to as the Binary Random Systematic erasure code, called BRS code for short. The BRS code constructs its generator matrix from a random matrix whose elements are in the Galois field GF(2), and takes advantage of exclusive-or (XOR) operations to run fast. It is designed as a systematic code to facilitate storage and recovery. Moreover, δ random redundancies make the probability of successful decoding controllable. Our evaluations and experiments show that the BRS code is flexible in its parameters and fault-tolerance settings and has high computing efficiency in encoding and decoding speeds; what is more, when the code length is long enough, the BRS code is approximately MDS, giving it nearly optimal storage efficiency.
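
    A minimal sketch of a systematic random binary code with XOR encoding and GF(2) Gaussian-elimination erasure decoding is shown below; the δ-redundancy analysis and the exact construction in the paper are not reproduced.

```python
import numpy as np

def make_brs(k, r, seed=0):
    """Systematic generator matrix G = [I_k | R] with R random over GF(2)."""
    rng = np.random.default_rng(seed)
    R = rng.integers(0, 2, size=(k, r), dtype=np.uint8)
    return np.concatenate([np.eye(k, dtype=np.uint8), R], axis=1)

def encode(G, data):
    """XOR (mod-2) encoding of a k-bit data word into k + r symbols."""
    return (data @ G) % 2

def decode(G, code, erased):
    """Recover the data word from surviving symbols by GF(2) elimination."""
    keep = [i for i in range(G.shape[1]) if i not in set(erased)]
    A = G[:, keep].T.copy()            # one mod-2 equation per surviving symbol
    y = np.asarray(code)[keep].astype(np.uint8).copy()
    k = G.shape[0]
    row = 0
    for col in range(k):               # Gauss-Jordan elimination over GF(2)
        piv = next((i for i in range(row, len(y)) if A[i, col]), None)
        if piv is None:
            raise ValueError("too many erasures: not decodable")
        A[[row, piv]] = A[[piv, row]]  # swap pivot row into place
        y[[row, piv]] = y[[piv, row]]
        for i in range(len(y)):
            if i != row and A[i, col]:
                A[i] ^= A[row]         # XOR-cancel the pivot column
                y[i] ^= y[row]
        row += 1
    return y[:k]                       # reduced system reads data directly
```

    When no systematic symbol is erased, decoding is trivial; when some are, the surviving parity equations must have full rank over GF(2), which is exactly what the random redundancies make overwhelmingly likely.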

  17. Eclipsing Binary Update, No. 2.

    Science.gov (United States)

    Williams, D. B.

    1996-01-01

    Contents: 1. Wrong again! The elusive period of DHK 41. 2. Stars observed and not observed. 3. Eclipsing binary chart information. 4. Eclipsing binary news and notes. 5. A note on SS Arietis. 6. Featured star: TX Ursae Majoris.

  18. N-Bit Binary Resistor

    Science.gov (United States)

    Tcheng, Ping

    1989-01-01

    Binary resistors in series tailored to precise value of resistance. Desired value of resistance obtained by cutting appropriate traces across resistors. Multibit, binary-based, adjustable resistor with high resolution used in many applications where precise resistance required.

  19. Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L = 2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered the state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows adjusting L to values in the range 1 < L ≤ 2, and decoding can be repeated z ≥ 1 times. We show that BMD decoders with z_BMD decoding trials can result in a lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
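
    The tradeoff factor can be illustrated with the standard decodability condition L·ε + τ < d for ε errors and τ erasures under minimum distance d; the helper below enumerates the correctable pairs and is a generic sketch, not the paper's multi-trial analysis.

```python
def correctable_pairs(d, L=2.0):
    """All (errors, erasures) pairs decodable under L*errors + erasures < d."""
    return [(errors, erasures)
            for errors in range(d)
            for erasures in range(d)
            if L * errors + erasures < d]
```

    Lowering L below 2 enlarges the set of correctable error counts, which is the advantage the GS decoder's adjustable tradeoff factor buys at the price of higher complexity.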

  20. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    Science.gov (United States)

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.